Many people have reported that LoRA training works poorly for z-image base. Less than 12 hours ago, someone on Bilibili claimed to have found the cause: the uint8 state used by the AdamW8bit optimizer. According to the author, you have to use an FP8 optimizer for z-image base. The author posted some comparisons in the post. Check https://b23.tv/g7gUFIZ for more info.
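For intuition on why 8-bit optimizer state could matter, here is a toy sketch (plain Python, not bitsandbytes' actual block-wise quantization, which uses dynamic/absmax scaling): round-tripping optimizer state through a linear 256-level grid collapses small-magnitude entries, such as Adam's second-moment estimates early in training, to zero.

```python
# Toy illustration only -- NOT the real AdamW8bit code path.
# Shows how a naive 8-bit grid loses small optimizer-state values.

def quantize_8bit(values, lo, hi):
    """Map floats in [lo, hi] onto 256 linear levels and dequantize back."""
    scale = (hi - lo) / 255.0
    return [round((v - lo) / scale) * scale + lo for v in values]

state = [0.0001, 0.002, 0.05, 0.9]   # e.g. Adam exp_avg_sq entries
deq = quantize_8bit(state, 0.0, 1.0)
errors = [abs(a - b) for a, b in zip(state, deq)]
print(deq)     # smallest entry is flushed to 0.0
print(errors)  # each error is bounded by half a quantization step
```

A tiny second moment flushed to zero changes the effective per-parameter step size, which is one plausible mechanism behind the quality gap the Bilibili author describes; real 8-bit optimizers mitigate (but do not eliminate) this with per-block scaling.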
submitted by /u/Recent-Source-7777
UPDATED: Flux2Klein KSampler has been added to the repo: here. Sample workflow: here.
Meta is unquestionably winning the face-wearable war. Can you trust the company? Maybe not. But…
A humanoid robot that won a half-marathon race for robots in Beijing on Sunday ran…
This model was trained on 8,000 video pairs, and training is still ongoing for a…
These locks, lights, and other smart home upgrades let you add automation without messing up…
Engineers at Northwestern University have taken a striking leap toward merging machines with the human…