Many people have reported that LoRA training works poorly for Z-Image base. Less than 12 hours ago, someone on Bilibili claimed to have found the cause: the uint8 state used by the AdamW8bit optimizer. According to the author, you should use an FP8 optimizer for Z-Image base instead. The author posted some comparisons in the post; check https://b23.tv/g7gUFIZ for more info.
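To get an intuition for why 8-bit integer vs FP8 optimizer state could matter, here is a toy sketch comparing the two rounding schemes on a small optimizer-state value. This is only an illustration of the general precision trade-off, not the actual bitsandbytes implementation (AdamW8bit really uses blockwise *dynamic* quantization, which is more accurate than the plain linear model below), and the function names are made up for this example:

```python
import math

def quantize_uint8_linear(x, lo, hi):
    """Toy model of absolute 8-bit integer quantization: snap x to one of
    256 evenly spaced levels spanning [lo, hi]."""
    step = (hi - lo) / 255
    code = round((x - lo) / step)
    code = max(0, min(255, code))
    return lo + code * step

def quantize_fp8_e4m3(x):
    """Toy nearest-value rounding to an FP8 E4M3-style grid: 4 exponent
    bits, 3 mantissa bits, with a simple subnormal range below 2**-6."""
    if x == 0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    m = abs(x)
    e = math.floor(math.log2(m))
    if e < -6:
        # subnormal range: fixed step of 2**-9
        return sign * round(m / 2**-9) * 2**-9
    e = min(e, 8)
    frac = m / 2**e              # in [1, 2)
    mant = round((frac - 1) * 8) / 8
    return sign * (1 + mant) * 2**e

# A small moment value in a tensor whose magnitudes span up to ~1.0:
x = 0.01
print(quantize_uint8_linear(x, 0.0, 1.0))  # coarse: step is ~1/255 everywhere
print(quantize_fp8_e4m3(x))                # finer: step shrinks with magnitude
```

The point of the sketch is that a floating-point format keeps roughly constant *relative* error across magnitudes, while a fixed integer grid has constant *absolute* error, so small values suffer most; whether that is actually what goes wrong for Z-Image base is the linked author's claim, not something verified here.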
submitted by /u/Recent-Source-7777