Wan2.2 Animate Test

Wan2.2 Animate is a great tool for motion transfer and character swapping using reference images. Follow me for more: https://www.instagram.com/mrabujoe submitted by /u/flipflop-dude

Open Source Nano Banana for Video 🍌🎥

Hi! We are building an "Open Source Nano Banana for Video" – here is the open-source demo, v0.1. We call it Lucy Edit, and we are releasing it on Hugging Face, in ComfyUI, and with an API on fal and on our platform. Read more here: https://x.com/DecartAI/status/1968769793567207528 Super excited to hear what you think and how we can …

China bans Nvidia AI chips

What does this mean for our favorite open image/video models? If this succeeds in getting model creators to use Chinese hardware, will Nvidia hardware become incompatible with open Chinese models? submitted by /u/Ken-g6


Pose Transfer V2 Qwen Edit LoRA [fixed]

I took everyone’s feedback and whipped up a much better version of the pose-transfer LoRA. You should see a huge improvement without needing to mannequinize the image beforehand. There should be much less extraneous transfer (though it still happens occasionally). The only thing still not great is its cartoon-pose understanding, but I’ll …

Bytedance releases the full safetensors model for UMO – Multi-Identity Consistency for Image Customization. Obligatory beg for a ComfyUI node 🙏🙏

https://huggingface.co/bytedance-research/UMO https://arxiv.org/pdf/2509.06818 Three days ago, Bytedance released their image editing/creation model UMO. From their Hugging Face description: Recent advancements in image customization exhibit a wide range of application prospects due to stronger customization capabilities. However, since we humans are more sensitive to faces, a significant challenge remains in preserving consistent identity while avoiding identity confusion …


RecA: A new finetuning method that doesn’t use image captions.

https://arxiv.org/abs/2509.07295 “We introduce Reconstruction Alignment (RecA), a resource-efficient post-training method that leverages visual understanding encoder embeddings as dense “text prompts,” providing rich supervision without captions. Concretely, RecA conditions a UMM on its own visual understanding embeddings and optimizes it to reconstruct the input image with a self-supervised reconstruction loss, thereby realigning understanding and generation.” https://huggingface.co/sanaka87/BAGEL-RecA …
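To make the quoted idea concrete, here is a minimal numpy sketch of the core training loop shape: a frozen "understanding encoder" embeds the image, that embedding stands in for a text prompt, and a generator is optimized to reconstruct the original image from it. All names and the linear toy models are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n, img_dim, emb_dim = 8, 64, 8
X = rng.normal(size=(n, img_dim))                   # stand-in for flattened input images
E = rng.normal(scale=0.1, size=(img_dim, emb_dim))  # frozen toy "understanding encoder"
W = np.zeros((emb_dim, img_dim))                    # trainable toy "generator"

C = X @ E        # dense embedding used as the "prompt" for each image (no caption needed)
lr, losses = 0.5, []
for _ in range(100):
    recon = C @ W                                   # generator conditioned on the embedding
    err = recon - X
    losses.append(float(np.mean(err ** 2)))         # self-supervised reconstruction loss
    W -= lr * (2.0 / (n * img_dim)) * C.T @ err     # gradient step on the generator only

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Note that only the generator's weights are updated; the encoder stays frozen, which mirrors the "realigning understanding and generation" framing of the quote under these toy assumptions.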