Only the OGs remember this.
submitted by /u/Expensive_Estimate32 [link] [comments]
People like my img2img workflow, so it wasn’t much work to adapt it into a dedicated headswap workflow for different uses and applications compared to full character transfer. It’s very simple and very easy to use. Only 3 variables need changing for different effects: – Denoise up or down – CFG higher creates more …
Read more “Simple, Effective and Fast Z-Image Headswap for characters V1”
submitted by /u/ThetaCursed [link] [comments]
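The post above names the knobs of a denoise-plus-CFG headswap, but not how they interact. As a rough toy illustration (not the poster’s actual ComfyUI graph; all function names here are hypothetical): denoise strength decides how many sampler steps are re-run, CFG scale blends the conditional and unconditional predictions, and a head mask limits the change to one region:

```python
def img2img_start_step(num_steps, denoise):
    """Higher denoise re-runs more sampler steps, drifting further from the input.
    denoise=1.0 regenerates everything; denoise=0.3 re-runs only the last 30%."""
    return int(num_steps * (1.0 - denoise))

def cfg_combine(uncond, cond, scale):
    """Classifier-free guidance: scale > 1 pushes the result toward the prompt."""
    return uncond + scale * (cond - uncond)

def masked_blend(original, generated, mask):
    """Headswap compositing: take the masked (head) region from the generated
    image and keep everything else from the original."""
    return [m * g + (1 - m) * o for o, g, m in zip(original, generated, mask)]

print(img2img_start_step(30, 0.5))            # step index where resampling begins
print(cfg_combine(0.0, 1.0, 7.5))
print(masked_blend([0, 0, 0], [1, 1, 1], [1, 0, 0]))
```

At denoise 0.5 with 30 steps, only the last 15 steps are resampled, which is why low denoise values stay close to the source head.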
Made a short video with LTX-2 using an iCloRA flow to recreate a Space Jam scene, but swapping Michael Jordan with Deni Avdija. Flow (GitHub): https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/LTX-2_ICLoRA_All_Distilled.json My process: I generated an image of each shot that matches the original as closely as possible, just replacing MJ with Deni. I loaded the original video in the …
Read more “Deni Avdija in Space Jam with LTX-2 I2V + iCloRA. Flow included”
Just a few samples from a LoRA trained on Z-Image Base. The first 4 pictures are generated using Z-Image Turbo and the last 3 using Z-Image Base + the 8-step distilled LoRA. The LoRA was trained on almost 15,000 images using AI Toolkit (here is the config: https://www.reddit.com/r/StableDiffusion/comments/1qshy5a/comment/o2xs8vt/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button ). And to my surprise …
Many people reported that LoRA training sucks for Z-Image Base. Less than 12 hours ago, someone on Bilibili claimed to have found the cause: the uint8 state used by the AdamW8bit optimizer. According to the author, you have to use an FP8 optimizer for Z-Image Base instead. The author pasted some comparisons in the post. One can …
submitted by /u/ShadowBoxingBabies [link] [comments]
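The claimed cause is at least plausible in principle: integer absmax quantization of optimizer state spreads its precision uniformly over a block’s range, so a small moment value sharing a block with a large outlier rounds badly, while a float-style 8-bit format (sign/exponent/mantissa) keeps relative precision across magnitudes. The toy comparison below is my own illustration of that difference, not the Bilibili author’s code:

```python
import math

def int8_absmax_roundtrip(x, block_absmax):
    # Blockwise int8 absmax quantization: one shared scale per block.
    scale = block_absmax / 127.0
    return round(x / scale) * scale

def fp8_like_roundtrip(x, mant_bits=3):
    # Crude float8-style roundtrip: keep sign/exponent, round mantissa to 3 bits.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e, with 0.5 <= |m| < 1
    q = round(m * (1 << mant_bits))
    return q / (1 << mant_bits) * 2.0 ** e

# A small Adam moment value sharing a quantization block with a large one.
small, block_max = 0.01, 1.0
int8_err = abs(int8_absmax_roundtrip(small, block_max) - small) / small
fp8_err = abs(fp8_like_roundtrip(small) - small) / small
print(f"int8 relative error: {int8_err:.1%}")      # ~21%
print(f"fp8-like relative error: {fp8_err:.1%}")   # ~2%
```

If small gradient moments dominate a model’s training dynamics, that order-of-magnitude difference in relative error could plausibly explain the reported quality gap.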
ComfyUI-CacheDiT brings a 1.4-1.6x speedup to DiT (Diffusion Transformer) models through intelligent residual caching, with zero configuration required. https://github.com/Jasonzzt/ComfyUI-CacheDiT https://github.com/vipshop/cache-dit https://cache-dit.readthedocs.io/en/latest/ “Properly configured (default settings), quality impact is minimal: – Cache is only used when residuals are similar between steps – Warmup phase (3 steps) establishes stable baseline – Conservative skip intervals prevent artifacts” submitted by /u/Scriabinical [link] [comments]
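For intuition, residual caching can be sketched in a few lines: run the block normally for a few warmup steps, and once consecutive residuals stop changing much, reuse the cached residual for a bounded number of steps instead of recomputing. This is a toy scalar mock-up of the idea under those assumptions, not the actual cache-dit implementation:

```python
def run_with_residual_cache(x, steps, block, warmup=3, tol=0.05, max_skip=2):
    """Reuse the last computed residual when it is stable; recompute otherwise."""
    cached = prev = None                     # last two computed residuals
    skips_in_a_row = skipped = 0
    for t in range(steps):
        stable = (prev is not None and cached is not None
                  and abs(cached - prev) <= tol * (abs(prev) + 1e-8))
        if t >= warmup and stable and skips_in_a_row < max_skip:
            x += cached                      # skip the expensive block
            skipped += 1
            skips_in_a_row += 1
        else:
            y = block(x, t)                  # full (expensive) forward pass
            prev, cached = cached, y - x
            x = y
            skips_in_a_row = 0               # conservative skip interval resets
    return x, skipped

# Stand-in for an expensive DiT forward whose residual varies slowly.
out, skipped = run_with_residual_cache(0.0, 10, lambda x, t: x + 0.1)
print(out, skipped)
```

With a perfectly smooth residual, half the steps are skipped here with no drift in the final value; the warmup phase and the cap on consecutive skips mirror the quoted defaults.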
I always see posts arguing whether ZIT or Klein has the best realism, but I am always surprised when I don’t see Qwen-Image2512 or Wan2.2 mentioned, which are still to this day my two favorite models for T2I and general refining. I always found Qwen-Image to respond insanely well to LoRAs; it’s a very underrated model …
Read more “Qwen-Image2512 is a severely underrated model (realism examples)”
submitted by /u/ZootAllures9111 [link] [comments]