Anima 2B – Style Explorer: Visual database of 900+ Danbooru artists. Live website in comments!
submitted by /u/ThetaCursed
I made a short video with LTX-2, using an iCloRA flow to recreate a Space Jam scene but swap Michael Jordan for Deni Avdija. Flow (GitHub): https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/LTX-2_ICLoRA_All_Distilled.json My process: I generated an image for each shot that matches the original as closely as possible, just replacing MJ with Deni. I loaded the original video in the …
Read more “Deni Avdija in Space Jam with LTX-2 I2V + iCloRA. Flow included”
Just a few samples from a LoRA trained on Z-Image Base. The first 4 pictures were generated using Z-Image Turbo and the last 3 using Z-Image Base + an 8-step distilled LoRA. The LoRA was trained on almost 15,000 images using AI Toolkit (here is the config: https://www.reddit.com/r/StableDiffusion/comments/1qshy5a/comment/o2xs8vt/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button ). And to my surprise …
Many people have reported that LoRA training works poorly on Z-Image Base. Less than 12 hours ago, someone on Bilibili claimed to have found the cause: the uint8 state used by the AdamW8bit optimizer. According to the author, you have to use an FP8 optimizer for Z-Image Base. The author posted some comparisons in their post. One can …
submitted by /u/ShadowBoxingBabies
ComfyUI-CacheDiT brings a 1.4-1.6x speedup to DiT (Diffusion Transformer) models through intelligent residual caching, with zero configuration required.
https://github.com/Jasonzzt/ComfyUI-CacheDiT
https://github.com/vipshop/cache-dit
https://cache-dit.readthedocs.io/en/latest/
“Properly configured (default settings), quality impact is minimal:
- Cache is only used when residuals are similar between steps
- Warmup phase (3 steps) establishes stable baseline
- Conservative skip intervals prevent artifacts”
submitted by /u/Scriabinical
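For intuition, the caching idea can be sketched in a few lines (my own simplification with made-up names and thresholds, not CacheDiT's actual API): when the last two computed residuals of a block were nearly identical, reuse the cached residual instead of re-running the block, with a warmup phase and a forced recompute after every skip.

```python
import numpy as np

def expensive_block(x):
    # Stand-in for a DiT transformer block's residual branch.
    return 0.1 * np.tanh(x)

def denoise(x, steps=12, warmup=3, tol=0.1):
    cached, last_change, skipped = None, None, 0
    for step in range(steps):
        if step >= warmup and last_change is not None and last_change < tol:
            residual = cached   # residuals were stable: reuse the cache
            skipped += 1
            last_change = None  # conservative: force a recompute next step
        else:
            residual = expensive_block(x)
            if cached is not None:  # relative change vs. last computed residual
                last_change = (np.linalg.norm(residual - cached)
                               / (np.linalg.norm(cached) + 1e-8))
            cached = residual
        x = x + residual
    return x, skipped

x_out, skipped = denoise(np.full(4, 0.5))
print(f"skipped {skipped} of 12 block evaluations")
```

The savings come from the skipped steps costing only a cache read; the warmup and forced-recompute rules are what keep the cached residual from drifting away from what the block would actually produce.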
I always see posts arguing whether ZIT or Klein has the best realism, but I'm always surprised not to see mention of Qwen-Image2512 or Wan2.2, which are still to this day my two favorite models for T2I and general refining. I've always found Qwen-Image to respond insanely well to LoRAs; it's a very underrated model …
Read more “Qwen-Image2512 is a severely underrated model (realism examples)”
submitted by /u/ZootAllures9111
We just shipped a new LTX-2 drop focused on one thing: making video generation easier to iterate on without killing VRAM, consistency, or sync. If you've been frustrated with LTX because prompt iteration was slow or outputs felt brittle, this update is aimed directly at that. Here are the highlights; the full details are here. What's …
Read more “End-of-January LTX-2 Drop: More Control, Faster Iteration”
submitted by /u/LucidFir