Information taken from the GitHub page: https://github.com/Stability-AI/stablediffusion/blob/main/doc/UNCLIP.MD
HuggingFace checkpoints and diffusers integration: https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip
Public web-demo: https://clipdrop.co/stable-diffusion-reimagine
unCLIP is the approach behind OpenAI’s DALL·E 2, trained to invert CLIP image embeddings. We finetuned SD 2.1 to accept a CLIP ViT-L/14 image embedding in addition to the text encodings. This means the model can be used to produce image variations, and it can also be combined with an embedding prior that maps text to CLIP image embeddings, yielding a full text-to-image model at 768×768 resolution.
This model essentially uses an input image as the ‘prompt’ rather than requiring a text prompt. It first converts the input image into a CLIP embedding, then feeds that embedding into a Stable Diffusion 2.1-768 model fine-tuned to generate images from such CLIP embeddings, letting users produce multiple variations of a single image. Note that this is distinct from img2img: the structure of the original image is generally not preserved.
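For reference, the Hugging Face page linked above exposes this model through the diffusers StableUnCLIPImg2ImgPipeline. A minimal sketch of the variation workflow follows; the input path, seed, and fp16/CUDA settings are placeholder assumptions, so check the model card for the exact arguments:

```python
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

# Load the SD 2.1 unCLIP checkpoint from the Hugging Face Hub
# (repo name from the link above; fp16 assumed to save GPU memory).
pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16
).to("cuda")

# Any image serves as the "prompt"; this path is a placeholder.
init_image = load_image("input.png")

# The pipeline encodes the image to a CLIP embedding internally;
# each different seed yields a different variation of the input.
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(init_image, generator=generator).images[0]
image.save("variation.png")
```

Because only the CLIP embedding of the input is used for conditioning, the variations share the input's semantics rather than its pixel layout, which is the difference from img2img noted above.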
Blog post: https://stability.ai/blog/stable-diffusion-reimagine