TL;DR — I trained two LoRAs for Qwen-Image. I'm still feeling out Qwen's generation settings, so results aren't at their best yet. Updates are coming — stay tuned. I'm also planning an ultrareal full fine-tune (checkpoint) for Qwen next. P.S.: the workflow is in both HG repos. submitted by /u/FortranUA
Continuing the music video u/optimisoprimeo posted: https://www.reddit.com/r/StableDiffusion/comments/1t64gni/so_far_this_is_my_favorite_usecase_for_ltx/ submitted by /u/hidden2u
One of the major differentiators unlocked by learned codecs relative to their hard-coded traditional counterparts…
As companies of various sizes adopt graphics processing unit (GPU)-based machine learning (ML) training, fine-tuning…
Today, we’re thrilled to announce that Gemini 3.1 Flash-Lite, our fastest and most cost-efficient Gemini…
Leaders at the tech giant were skeptical of OpenAI—but wary of pushing it into the…