
EditAnything IC-LoRA – LTX-2.3

This model was trained on 8,000 video pairs, and training is still ongoing for a few thousand more steps. It is experimental, not trained to a fully professional production target, and it may be updated without notice as new checkpoints are released.

The current goal is not final polished production quality, but to explore:

  • edit-anything behavior
  • prompt-following
  • inference tradeoffs
  • synthetic dataset building, especially for style data

The model was trained around four main prompt patterns:

Add
Add a/an [subject/object] with [clear visual attributes], [precise location in the scene].

Remove
Remove the [subject/object] [location or identifying description].

Replace
Replace the [original subject/object] [location] with a/an [new subject/object] with [clear visual attributes].

Convert / Style
Convert the video into a [style name] style.
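As a quick illustration, the four patterns above can be filled in programmatically. This is only a sketch (the helper and its names are hypothetical, not part of any LTX tooling); the template wording is taken directly from the patterns listed above.

```python
# Hypothetical helper that renders the four edit-prompt patterns
# the model was trained on. Template wording follows the patterns above.

TEMPLATES = {
    "add": "Add {article} {subject} with {attributes}, {location}.",
    "remove": "Remove the {subject} {location}.",
    "replace": "Replace the {original} {location} with {article} {subject} with {attributes}.",
    "style": "Convert the video into a {style} style.",
}

def build_prompt(kind: str, **fields: str) -> str:
    """Fill one of the four edit-prompt patterns with concrete values."""
    return TEMPLATES[kind].format(**fields)

print(build_prompt("style", style="watercolor"))
# Convert the video into a watercolor style.
print(build_prompt("remove", subject="blue car",
                   location="in the background of the scene"))
# Remove the blue car in the background of the scene.
```

Keeping prompts close to these trained templates should give the most predictable edit behavior.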

Workflow URL: https://huggingface.co/Alissonerdx/LTX-LoRAs/blob/main/workflows/ltx23_edit_anything_v1.json

Model URL: ltx23_edit_anything_global_rank128_v1_9000steps_adamw.safetensors · Alissonerdx/LTX-LoRAs at main

Alternatively, CivitAI URL: EditAnything – v1.0 | LTX Video LoRA | Civitai

One important inference parameter is CFG (classifier-free guidance).

A good starting point is a distilled setup with CFG = 1. If the edit feels too weak or the model is not following the prompt closely enough, increasing CFG is often the fix. In some cases, raising the distill LoRA strength to around 1.2 can also help.
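The tuning heuristic above (start distilled at CFG = 1, raise CFG if the edit is too weak, then try distill LoRA strength ≈ 1.2) can be sketched as a simple escalation ladder. The settings object, function name, and the CFG cap of 3.0 are all assumptions for illustration, not part of any LTX API:

```python
from dataclasses import dataclass

@dataclass
class EditSettings:
    cfg: float = 1.0                    # distilled starting point from the text
    distill_lora_strength: float = 1.0

def next_attempt(s: EditSettings, edit_too_weak: bool) -> EditSettings:
    """One step of the escalation heuristic described above (hypothetical helper)."""
    if not edit_too_weak:
        return s                        # current settings are working; keep them
    if s.cfg < 3.0:                     # raise CFG first (the 3.0 cap is an assumption)
        return EditSettings(cfg=s.cfg + 0.5,
                            distill_lora_strength=s.distill_lora_strength)
    # CFG alone was not enough: bump the distill LoRA strength to ~1.2
    return EditSettings(cfg=s.cfg, distill_lora_strength=1.2)
```

The step size of 0.5 per retry is likewise arbitrary; the point is the ordering, CFG first, LoRA strength second.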

The workflow is also not fully optimized yet. It still needs more testing to find the best combination of:

  • CFG
  • LoRA strength
  • number of steps
  • model combinations
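To look for the best combination of these settings, a simple grid sweep can help. The value ranges below are illustrative assumptions only, not recommended settings:

```python
from itertools import product

# Illustrative value grids for the parameters listed above (assumptions,
# not recommendations).
cfgs = [1.0, 1.5, 2.0]
lora_strengths = [1.0, 1.2]
step_counts = [8, 20]

combos = [
    {"cfg": c, "lora_strength": s, "steps": n}
    for c, s, n in product(cfgs, lora_strengths, step_counts)
]
print(len(combos))  # 12 combinations to test
```

Each combination would then be run against a fixed set of edit prompts and compared visually.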

It may also be interesting to combine this model with other models and see what kinds of results emerge.

If you can test it, please share your findings. Feedback on prompt behavior, edit strength, consistency, style transfer, and failure cases would be very helpful while training is still in progress.

Example prompts:

Add a small, brown dog dancing in the foreground next to the woman.

Convert the entire video to an anime style with vibrant colors and exaggerated character expressions.

Remove the blue car in the background of the scene.

Add a wide, genuine smile to the person’s face.

Replace the person’s clothing with a dark blue hoodie and gray sweatpants.

submitted by /u/Round_Awareness5490

Published by AI Generated Robotic Content