Stability AI builds foundation models on Amazon SageMaker

We’re thrilled to announce that Stability AI has selected AWS as its preferred cloud provider to power its state-of-the-art AI models for image, language, audio, video, and 3D content generation. Stability AI is a community-driven, open-source artificial intelligence (AI) company developing breakthrough technologies. With Amazon SageMaker, Stability AI will build AI models on compute clusters with thousands of GPUs or AWS Trainium chips, reducing training time and cost by 58%. Stability AI will also collaborate with AWS to enable students, researchers, startups, and enterprises around the world to use its open-source tools and models.

“Our mission at Stability AI is to build the foundation to activate humanity’s potential through AI. AWS has been an integral partner in scaling our open-source foundation models across modalities, and we are delighted to bring these to SageMaker to enable tens of thousands of developers and millions of users to take advantage of them. We look forward to seeing the amazing things built on these models and helping our customers customize and scale their models and solutions.”

– Emad Mostaque, Founder and CEO of Stability AI

Generative AI models and Stable Diffusion

Generative AI models can create text, images, audio, video, code, and more from simple text instructions. For example, I created the following image by giving this text prompt to the model: “Four people riding a bicycle in the Swiss Alps, renaissance painting, epic breathtaking nature scene, diffused light.” I used a Jupyter notebook in Amazon SageMaker Studio to generate this image with Stable Diffusion.
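As a sketch of what such a notebook cell might look like, here is a minimal text-to-image example using the open-source Hugging Face `diffusers` library. The model ID, output file name, and GPU assumptions are illustrative, not the exact code used for the image above:

```python
# Illustrative sketch: text-to-image with Stable Diffusion via the
# open-source Hugging Face `diffusers` library (not an official AWS sample).
PROMPT = (
    "Four people riding a bicycle in the Swiss Alps, renaissance painting, "
    "epic breathtaking nature scene, diffused light"
)

def generate_image(prompt: str, model_id: str = "stabilityai/stable-diffusion-2"):
    """Run a text-to-image pipeline on a GPU and return a PIL image."""
    # Imported lazily so the sketch can be read without the libraries installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
    pipe = pipe.to("cuda")  # assumes a GPU instance, e.g. an ml.g5.xlarge in Studio
    return pipe(prompt).images[0]

# Usage (in a GPU-backed notebook):
#     generate_image(PROMPT).save("swiss-alps.png")
```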

Stability AI also announced a distilled Stable Diffusion model, which can generate coherent images up to ten times faster than before. This latest open-source release also introduces models to upscale an image’s resolution and infer depth information to generate new images. The following images show an example of how you can use the new depth2img model to generate new images while preserving the depth and coherence of the original image.
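A hedged sketch of using the depth2img model through the `diffusers` library’s `StableDiffusionDepth2ImgPipeline` (the model ID and `strength` value are assumptions for illustration):

```python
# Illustrative sketch: depth-guided image-to-image generation with the
# depth2img model, via the open-source `diffusers` library.
def depth_to_image(init_image, prompt: str,
                   model_id: str = "stabilityai/stable-diffusion-2-depth",
                   strength: float = 0.7):
    """Generate a new image that keeps the depth layout of `init_image`."""
    # Imported lazily so the sketch can be read without the libraries installed.
    import torch
    from diffusers import StableDiffusionDepth2ImgPipeline

    pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    # The pipeline infers a depth map from `init_image` and uses it to
    # preserve the scene's geometry while re-rendering it to match `prompt`.
    return pipe(prompt=prompt, image=init_image, strength=strength).images[0]
```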

We’re excited by the potential of these generative AI models and by what our customers will create. From inpainting to textual inversion to modifiers, the community continues to innovate and build better open-source models and tools in generative AI.

Training foundation models at scale with SageMaker

Foundation models—large models that are adaptable to a variety of downstream tasks in domains such as language, image, audio, and video—are hard to train because they require a high-performance compute cluster with thousands of GPUs or Trainium chips, along with software to efficiently utilize the cluster.

Stability AI picked AWS as its preferred cloud provider to provision one of the largest-ever clusters of GPUs in the public cloud. Using SageMaker’s managed infrastructure and optimization libraries, Stability AI is able to make its model training more resilient and performant. For example, with models such as GPT NeoX, Stability AI was able to reduce training time and cost by 58% using SageMaker and its model parallel library. These optimizations and performance improvements apply to models with tens or hundreds of billions of parameters.
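As a rough sketch of how the model parallel library is enabled in the SageMaker Python SDK, the `distribution` argument of a PyTorch estimator turns on `smdistributed.modelparallel`. The script name, IAM role, instance counts, and parallelism degrees below are hypothetical placeholders, not Stability AI’s actual configuration:

```python
# Illustrative sketch: enabling SageMaker's model parallelism library on a
# PyTorch training job. All names and numbers here are hypothetical.
def model_parallel_distribution(partitions: int = 4, microbatches: int = 8):
    """Build the `distribution` config that shards a large model."""
    return {
        "smdistributed": {
            "modelparallel": {
                "enabled": True,
                "parameters": {
                    "partitions": partitions,      # pipeline-parallel shards
                    "microbatches": microbatches,  # micro-batches per step
                    "ddp": True,                   # data parallelism across replicas
                },
            }
        },
        "mpi": {"enabled": True, "processes_per_host": 8},
    }

def make_estimator():
    # Imported lazily; requires the `sagemaker` SDK and AWS credentials.
    from sagemaker.pytorch import PyTorch

    return PyTorch(
        entry_point="train_gpt_neox.py",  # hypothetical training script
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
        instance_type="ml.p4d.24xlarge",
        instance_count=8,
        framework_version="1.13",
        py_version="py39",
        distribution=model_parallel_distribution(),
    )
```

Calling `make_estimator().fit(...)` would launch the training job on the managed cluster, with SageMaker handling node provisioning and recovery.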

Get started with Stable Diffusion

Stable Diffusion 2.0 is available today on Amazon SageMaker JumpStart. JumpStart is the machine learning (ML) hub of SageMaker that provides hundreds of built-in algorithms, pre-trained models, and end-to-end solution templates to help you quickly get started with ML.
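A minimal sketch of deploying a JumpStart model to a real-time endpoint with the SageMaker Python SDK follows; the model ID below is a placeholder (look up the exact ID in the JumpStart UI), and running it requires AWS credentials and service quotas for the chosen instance type:

```python
# Illustrative sketch: deploying a JumpStart text-to-image model to a
# real-time SageMaker endpoint. The model ID is a placeholder.
MODEL_ID = "model-txt2img-stabilityai-stable-diffusion-v2"  # hypothetical ID

def deploy_and_generate(prompt: str):
    """Deploy the model, run one prediction, and tear the endpoint down."""
    # Imported lazily; requires the `sagemaker` SDK and AWS credentials.
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id=MODEL_ID)
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.g5.2xlarge",  # a GPU instance for inference
    )
    response = predictor.predict({"prompt": prompt})
    predictor.delete_endpoint()  # avoid charges for an idle endpoint
    return response
```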

Get started today with Stable Diffusion 2.0.


About the authors

Aditya Bindal is a Principal Product Manager for AWS Deep Learning. He works on software and tools to make large-scale training and inference easier for customers. In his spare time, he enjoys spending time with his daughter, playing tennis, reading historical fiction, and traveling.