Why are we still training LoRA and not moved to DoRA as a standard?
Just wondering, this has been a head-scratcher for me for a while. Everywhere I look, I see claims that DoRA is superior to LoRA in seemingly every aspect, and that it doesn't require more power or resources to train. I googled DoRA training for newer models – Wan, Qwen, etc. – and didn't find anything, except a reddit post from …
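For context, the core difference is small: DoRA (Weight-Decomposed Low-Rank Adaptation) takes the usual LoRA update W0 + BA, splits it into a direction (the column-normalized matrix) and a learnable per-column magnitude vector m initialized from the column norms of W0, and trains m alongside A and B. A minimal NumPy sketch of the merged-weight math (shapes and variable names are my own illustration, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2

W0 = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
B = np.zeros((d_out, r))              # LoRA convention: B starts at zero
A = rng.normal(size=(r, d_in))
m = np.linalg.norm(W0, axis=0)        # DoRA magnitude, init = column norms of W0

# Plain LoRA: additive low-rank update
W_lora = W0 + B @ A

# DoRA: normalize the updated weight column-wise, then rescale by learnable m
V = W0 + B @ A
W_dora = m * (V / np.linalg.norm(V, axis=0))
```

Because B starts at zero, `W_dora` equals `W0` at initialization, just like plain LoRA; the only extra trainable state is the magnitude vector `m` (one scalar per input column), which is why DoRA's memory and compute overhead is small.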