🧵 Scaling up generative models is crucial to unlock new capabilities. But scaling down is equally necessary to democratize the end-to-end development of generative models. Excited to share our new work on scaling down diffusion generative models by drastically reducing the overhead of training them from scratch. Now anyone can train a stable-diffusion-quality model from scratch for just $2,000 (2.6 training days on a single 8xH100 node). arxiv.org/abs/2407.15811 (submitted by /u/1wndrla17)
The base model is definitely SOTA and can easily compete with closed-source ones in terms of…
Generative AI is reshaping how organizations approach productivity, customer experiences, and operational capabilities. Across industries,…
In many ways, the HP OmniBook 5 is a better budget laptop than the MacBook…
University of Washington researchers developed the first system that incorporates tiny cameras in off-the-shelf wireless…
We've pushed an LTX-2.3 update today. The Distilled model has been retrained (now v1.1) with…
The open-weights model ecosystem shifted recently with the release of the…