I always see posts arguing whether ZIT or Klein has the best realism, but I'm always surprised that Qwen-Image2512 and Wan2.2 rarely get a mention; they are still, to this day, my two favorite models for T2I and general refining. I've always found Qwen-Image to respond insanely well to LoRAs, and it's a very underrated model in general…
All the images in this post were made using Qwen-Image2512 (fp16/Q8) with Danrisi's Lenovo LoRA from Civitai, sampled through the RES4LYF nodes.
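For anyone who wants to try a similar combo outside ComfyUI, here is a rough diffusers sketch. To be clear about assumptions: it uses the base Qwen/Qwen-Image checkpoint as a stand-in for the 2512 variant, a placeholder local LoRA filename instead of the actual Civitai download, a made-up prompt, and default sampling rather than the RES4LYF samplers used for these images. LoRA loading is assumed to be supported for this pipeline in your diffusers version.

```python
# Rough diffusers sketch, NOT the ComfyUI/RES4LYF setup used for the images in this post.
# Assumptions: Qwen/Qwen-Image stands in for the 2512 checkpoint, and
# "lenovo_realism.safetensors" is a hypothetical local path for the LoRA.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("lenovo_realism.safetensors")  # assumes LoRA support in your diffusers version
pipe.to("cuda")

image = pipe(
    prompt="candid photo, natural window light, 35mm film look",  # example prompt, not the poster's
    width=1024,
    height=1024,
    num_inference_steps=40,
).images[0]
image.save("qwen_image_test.png")
```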
You can extract the workflow for the first image by dragging it into ComfyUI.
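If you'd rather pull the workflow out programmatically, here is a minimal sketch of what the drag-and-drop does under the hood: ComfyUI's SaveImage node embeds the graph as JSON in a PNG text chunk named "workflow". The input filename below is hypothetical; point it at the downloaded image, and note the chunk will be gone if the image was re-encoded by an image host.

```python
# Minimal sketch: read the ComfyUI workflow JSON embedded in a PNG's text chunks.
# "qwen_image_first_example.png" is a placeholder filename for the first image in the post.
import json
from PIL import Image

img = Image.open("qwen_image_first_example.png")
workflow_text = img.info.get("workflow")  # tEXt chunk written by ComfyUI's SaveImage node
if workflow_text is None:
    print("No embedded workflow found (the image may have been re-encoded or stripped).")
else:
    workflow = json.loads(workflow_text)
    with open("extracted_workflow.json", "w") as f:
        json.dump(workflow, f, indent=2)
    print(f"Saved workflow with {len(workflow.get('nodes', []))} nodes.")
```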
submitted by /u/000TSC000