Many people have reported that LoRA training works poorly on Z-Image Base. Less than 12 hours ago, someone on Bilibili claimed to have found the cause: the uint8 quantization used by the AdamW8bit optimizer. According to the author, you have to use an FP8 optimizer for Z-Image Base instead. The author pasted some comparisons in the post. One can check https://b23.tv/g7gUFIZ for more info.
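Why would uint8 optimizer states behave worse than FP8? A plausible intuition (not verified against the linked post, and a simplification of what bitsandbytes actually does, which is blockwise and dynamic): linear integer quantization spends its 256 codes evenly across the value range, so tiny optimizer moments round to zero, while a floating-point 8-bit format keeps roughly constant *relative* precision across magnitudes. The toy sketch below illustrates only that difference; `quantize_uint8_absmax` and `quantize_fp8_e4m3` are hypothetical helper names, and the FP8 rounding ignores exponent-range limits of real e4m3.

```python
import math

def quantize_uint8_absmax(values):
    # Toy absmax linear quantization to signed 8-bit codes (-127..127).
    # Real 8-bit optimizers use blockwise/dynamic schemes; this is a simplification.
    scale = max(abs(v) for v in values) / 127
    return [round(v / scale) * scale for v in values]

def quantize_fp8_e4m3(v):
    # Toy e4m3-style rounding: keep 3 mantissa bits after the implicit leading bit.
    # Ignores the real format's exponent range, denormals, and NaN encoding.
    if v == 0:
        return 0.0
    sign = math.copysign(1.0, v)
    m, e = math.frexp(abs(v))       # abs(v) = m * 2**e, with 0.5 <= m < 1
    q = round(m * 16) / 16          # mantissa quantized in steps of 1/16
    return sign * math.ldexp(q, e)

# Two optimizer moments with very different magnitudes in one quantization group:
moments = [1.0, 1e-4]
u8 = quantize_uint8_absmax(moments)
f8 = quantize_fp8_e4m3(1e-4)
# The uint8 path collapses the small moment to exactly 0.0,
# while the fp8 path keeps it to within about 1% relative error.
```

If the Bilibili author's diagnosis is right, this kind of precision collapse in the second moment estimate would plausibly destabilize LoRA updates; treat the sketch as motivation, not as a reproduction of their analysis.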
submitted by /u/Recent-Source-7777