Wan2.2 Animate Test
Wan2.2 Animate is a great tool for motion transfer and for swapping characters using reference images. Follow me for more: https://www.instagram.com/mrabujoe

submitted by /u/flipflop-dude
The meme possibilities are way too high. I did this with the native GitHub code on an RTX Pro 6000. It took a while, maybe just under an hour including preprocessing and generation; I wasn't really keeping track.

submitted by /u/bullerwins
Hi! We are building an "Open Source Nano Banana for Video", and here is our open-source demo, v0.1. We call it Lucy Edit, and we are releasing it on Hugging Face, in ComfyUI, and with an API on fal and on our platform. Read more here: https://x.com/DecartAI/status/1968769793567207528 Super excited to hear what you think and how we can …
What does this mean for our favorite open image/video models? If this succeeds in getting model creators to use Chinese hardware, will Nvidia become incompatible with open Chinese models?

submitted by /u/Ken-g6
I took everyone's feedback and whipped up a much better version of the pose transfer LoRA. You should see a huge improvement without needing to mannequinize the image beforehand. There should be much less extra transfer (though it's still there occasionally). The only thing still not amazing is its cartoon pose understanding, but I'll …
submitted by /u/JackKerawock
https://huggingface.co/bytedance-research/UMO https://arxiv.org/pdf/2509.06818 Three days ago, ByteDance released their image editing/creation model UMO. From their Hugging Face description: "Recent advancements in image customization exhibit a wide range of application prospects due to stronger customization capabilities. However, since we humans are more sensitive to faces, a significant challenge remains in preserving consistent identity while avoiding identity confusion …"
https://arxiv.org/abs/2509.07295 "We introduce Reconstruction Alignment (RecA), a resource-efficient post-training method that leverages visual understanding encoder embeddings as dense 'text prompts,' providing rich supervision without captions. Concretely, RecA conditions a UMM on its own visual understanding embeddings and optimizes it to reconstruct the input image with a self-supervised reconstruction loss, thereby realigning understanding and generation." https://huggingface.co/sanaka87/BAGEL-RecA …
submitted by /u/-Ellary-
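To make the quoted RecA objective concrete, here is a deliberately tiny sketch of the core idea: embed an image with a frozen "understanding encoder," condition a generator on that embedding alone (the dense "text prompt"), and minimize a self-supervised reconstruction loss against the input image. This is an illustrative toy with made-up shapes and a linear stand-in for both networks, not the actual RecA training code, which fine-tunes a unified multimodal model such as BAGEL.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 16, 4                        # hypothetical image dim and embedding dim

x = rng.normal(size=(D,))           # stand-in for an input image
E = rng.normal(size=(K, D)) * 0.1   # frozen "understanding encoder"
G = rng.normal(size=(D, K)) * 0.1   # trainable "generator"

z = E @ x                           # dense embedding used as the "text prompt"

def recon_loss(G):
    x_hat = G @ z                   # generate conditioned only on the embedding
    return float(np.mean((x_hat - x) ** 2))

losses = [recon_loss(G)]
lr = 0.5
for _ in range(200):
    x_hat = G @ z
    grad = (2.0 / D) * np.outer(x_hat - x, z)   # gradient of the MSE w.r.t. G
    G -= lr * grad                              # plain gradient descent step
    losses.append(recon_loss(G))

print(f"reconstruction loss before: {losses[0]:.4f}, after: {losses[-1]:.4f}")
```

The point of the toy is only that the supervision signal comes from the image itself (via the encoder's embedding) rather than from a caption, which is what makes the method caption-free.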
I created this animation as part of my tests to find the balance between image quality and motion in low-step generation. By combining LightX LoRAs, I think I've found the right combination to achieve motion that isn't slow, which is a common problem with LightX LoRAs. But I still need to work on the image …