They confirmed that the SDXL weights won’t be released, and they’re probably going to do the same for the training code as well:
https://github.com/TencentQQGYLab/ELLA/issues/16#issuecomment-2046795891
I’m not being a cynic, but I’ve been in academia long enough to know how people (usually early in their careers) tend to (heavy stress on “tend to”) value the publication itself rather than attempting to make its results readily, or even generally, available. I’d even go so far as to say that this implies “things” as well.
The results and descriptions in the paper almost exclusively involve SDXL, yet the weights released are for SD1.5. Furthermore, the training code hasn’t been released either and quite likely won’t ever see the light of day. There’s something about making a publicized, documented claim that couldn’t have been achieved without PoC/evidence, and then scaling back on said claims and deliverables.
I get the weights being “a piece of investment” that’s being put out “for free”, but the training code itself? Maybe I’m being a cynic, but I just thought I’d let you all know that LaviBridge/ELLA’s route to prompt adherence in SDXL is probably dead – better to look to SD3 now, I think.
submitted by /u/hexinx