QWEN IMAGE Gen as a single source image for a dynamic widescreen video concept (WAN 2.2 FLF), with minor edits using the new QWEN EDIT 2509.
submitted by /u/-Ellary-
Just doing something a little different in this video. I'm testing Wan-Animate, and heck, while I’m at it I decided to test an InfiniteTalk workflow to provide the narration. The Wan-Animate workflow I grabbed from another post; they credited a user on CivitAI, GSK80276. For the InfiniteTalk workflow, u/lyratech001 posted one in this thread: https://www.reddit.com/r/comfyui/comments/1nnst71/infinite_talk_workflow/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button submitted …
Read more “Wan2.2 Animate and Infinite Talk – First Renders (Workflow Included)”
This September, we are pleased to introduce Qwen-Image-Edit-2509, the monthly iteration of Qwen-Image-Edit. To experience the latest model, please visit Qwen Chat and select the “Image Editing” feature. Compared with Qwen-Image-Edit released in August, the main improvements of Qwen-Image-Edit-2509 include: Multi-image Editing Support: For multi-image inputs, Qwen-Image-Edit-2509 builds upon the Qwen-Image-Edit architecture and is further …
I’m currently testing the limits and capabilities of Qwen Image Edit. It’s a slow process, because apart from the basics, information is scarce and thinly spread. Unless someone else beats me to it or some other open source SOTA model comes out before I’m finished, I plan to release a full guide once I’ve collected …
Wan2.2 Animate is a great tool for motion transfer and for swapping characters using reference images. Follow me for more: https://www.instagram.com/mrabujoe submitted by /u/flipflop-dude
The meme possibilities are way too high. I did this with the native GitHub code on an RTX Pro 6000. It took a while, maybe just under an hour with the preprocessing and the generation; I wasn’t really checking. submitted by /u/bullerwins
Hi! We are building an “Open Source Nano Banana for Video” – here is the open-source demo, v0.1. We call it Lucy Edit, and we’ve released it on Hugging Face, in ComfyUI, and with an API on fal and on our platform. Read more here: https://x.com/DecartAI/status/1968769793567207528 Super excited to hear what you think and how we can …
What does this mean for our favorite open image/video models? If this succeeds in getting model creators to use Chinese hardware, will Nvidia become incompatible with open Chinese models? submitted by /u/Ken-g6 [link] [comments]
I took everyone’s feedback and whipped up a much better version of the pose transfer LoRA. You should see a huge improvement without needing to mannequinize the image beforehand. There should be much less extra transfer (though it’s still there occasionally). The only thing still not amazing is its cartoon pose understanding, but I’ll …
submitted by /u/JackKerawock