They confirmed that the SDXL weights won’t be released, and they’re probably going to do the same for the training code as well:
https://github.com/TencentQQGYLab/ELLA/issues/16#issuecomment-2046795891
I’m not trying to be a cynic, but I’ve been in academia long enough to know how people (usually early in their careers) tend to (heavy stress on “tend to”) value the publication itself rather than attempting to make its results readily, or even generally, available. I’d even go as far as to say that this implies other things as well.
The results and descriptions in the paper almost exclusively involve SDXL, yet the weights released are for SD1.5. Furthermore, the training code hasn’t been released either and quite likely won’t ever see the light of day. There’s something off about making a publicized, documented claim that couldn’t have been made without proof-of-concept evidence, and then scaling back on said claims and deliverables.
I get that the weights are “a piece of investment” being put out “for free”, but the training code itself? Maybe I’m being a cynic, but I just thought I’d let you all know that the LaVi-Bridge/ELLA route for prompt adherence in SDXL is probably dead – better to look to SD3 now, I think.
submitted by /u/hexinx