How to create consistent character faces without training (info in the comments)
submitted by /u/stassius
Delivers the ability to create animated avatars from a single photo
Stability AI, the world’s leading open-source generative AI company, and Revel.xyz, the social collectibles platform, today announced the launch of Animai, a consumer animation tool powered by Stability AI’s animation technology. Until now, generative art tools have produced mostly static pictures. Today we introduce …
Read more “Stability AI Animation Technology Makes Its Debut With Revel.xyz’s Animai Application”
Information taken from the GitHub page: https://github.com/Stability-AI/stablediffusion/blob/main/doc/UNCLIP.MD
HuggingFace checkpoints and diffusers integration: https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip
Public web demo: https://clipdrop.co/stable-diffusion-reimagine
unCLIP is the approach behind OpenAI’s DALL·E 2, trained to invert CLIP image embeddings. We finetuned SD 2.1 to accept a CLIP ViT-L/14 image embedding in addition to the text encodings. This means that the model can be used …
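For reference, here is a minimal sketch of loading the unCLIP checkpoint above through the diffusers integration. The pipeline class, argument names, and file names here are my assumptions about the diffusers API and may differ across versions; treat it as an illustration rather than the official usage.

```python
# Sketch: image variations with the SD 2.1 unCLIP checkpoint via diffusers.
# Assumes the StableUnCLIPImg2ImgPipeline class and a local "input.png".
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The source image whose CLIP image embedding conditions the model,
# in addition to (optional) text conditioning.
init_image = load_image("input.png")

# Generate a variation; the text prompt is combined with the image embedding.
result = pipe(init_image, prompt="a vibrant digital illustration").images[0]
result.save("variation.png")
```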
I wrote this paper two years ago: https://arxiv.org/abs/2106.09685 Super happy that people find it useful for diffusion models. I had text in mind when I wrote the paper, so there are probably things we can tweak to make LoRA more suited for image generation. I want to better understand how exactly LoRA is used in …
Read more “I’m the creator of LoRA. How can I make it better?”
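To make the discussion concrete, here is a minimal LoRA sketch in PyTorch: a frozen linear layer with a trainable low-rank update, as described in the paper. The class name, rank, and alpha values are illustrative assumptions, not the reference implementation.

```python
# Minimal LoRA sketch: y = W x + (alpha / r) * B A x, with W frozen.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weight stays frozen

        # A is initialized with small random values, B with zeros, so the
        # adapted model starts out identical to the base model.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap a projection layer and train only A and B.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(1, 768))
```

Only A and B are updated during fine-tuning, so the number of trainable parameters scales with the rank rather than with the full weight matrix.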