How can I do this on Wan VACE?

I know Wan can be used with pose estimators for text-driven V2V, but I'm unsure about reference image to video. The only one I know that can go from a reference image to video is UniAnimate. A workflow or resources for this in Wan VACE would be super helpful! submitted by /u/Fresh_Sun_1017
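For anyone who prefers scripting over a ComfyUI graph, below is a minimal sketch of the reference-image-plus-pose-video pattern using the WanVACEPipeline in diffusers. The checkpoint id, the `reference_images`/`video` call arguments, and all file names and prompt text are assumptions to verify against the diffusers docs for your installed version; this is not a confirmed workflow from the thread.

```python
# Hedged sketch: reference image + pose video -> video with Wan VACE.
# Assumes a diffusers build that ships WanVACEPipeline and that the
# checkpoint id and call arguments below exist as shown -- verify them
# against your diffusers version before relying on this.
import torch
from diffusers import WanVACEPipeline
from diffusers.utils import export_to_video, load_image, load_video

pipe = WanVACEPipeline.from_pretrained(
    "Wan-AI/Wan2.1-VACE-1.3B-diffusers",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
).to("cuda")

ref = load_image("character_ref.png")    # hypothetical identity reference
pose = load_video("pose_sequence.mp4")   # hypothetical DWPose/OpenPose frames

frames = pipe(
    prompt="a person dancing in a studio",  # hypothetical prompt
    reference_images=[ref],                 # identity conditioning
    video=pose,                             # motion/pose conditioning
    num_frames=81,
    height=480,
    width=832,
    guidance_scale=5.0,
).frames[0]
export_to_video(frames, "output.mp4", fps=16)
```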

Sydney’s Comfy Tips

Made with Kijai's InfiniteTalk workflow and Higgs Audio for the voice.
https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_I2V_InfiniteTalk_example_02.json
https://huggingface.co/bosonai/higgs-audio-v2-generation-3B-base
submitted by /u/Race88
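If you want to drive the linked workflow without the ComfyUI front end, a sketch of queueing it over ComfyUI's local HTTP API follows. It assumes a server on 127.0.0.1:8188 and a workflow re-exported with "Save (API Format)" — the example JSON linked above is in UI format and will not queue as-is; the file name here is hypothetical.

```python
# Hedged sketch: queue a ComfyUI workflow over the local HTTP API.
# Assumes ComfyUI is running on 127.0.0.1:8188 and the workflow was
# exported with "Save (API Format)"; the file name is hypothetical.
import json
import urllib.request

with open("wanvideo_I2V_InfiniteTalk_api.json") as f:
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # response includes the queued prompt_id
```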

Experimenting with Continuity Edits | Wan 2.2 + InfiniteTalk + Qwen Image Edit

Here is Episode 3 of my AI sci-fi film experiment. Earlier episodes are posted here, or you can see them on www.youtube.com/@Stellarchive. This time I tried to push continuity and dialogue further. A few takeaways that might help others: making characters talk is tough, render times are huge, and often a small issue is enough …

Made a local AI pipeline that yells at drivers peeing on my house

Last week I built a local pipeline where a state machine + LLM watches my security cam and yells at Amazon drivers peeing on my house. The state machine is the magic: it flips the system from passive (just watching) to active (video/audio ingest + ~1s TTS out) only when a trigger hits. It keeps things deterministic …
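A minimal sketch of that passive/active state-machine gate is below. The detector, LLM call, and TTS are hypothetical stubs — the post doesn't publish its code — but the control flow illustrates the deterministic flip the author describes.

```python
# Hedged sketch of a passive/active state-machine gate for a camera feed.
# trigger_fired() and respond() are hypothetical stubs; only the control
# flow (deterministic flip on trigger, cooldown back to passive) is the point.
import time
from enum import Enum, auto

class State(Enum):
    PASSIVE = auto()  # just watching: cheap per-frame check only
    ACTIVE = auto()   # trigger hit: full video/audio ingest + TTS out

def trigger_fired(frame) -> bool:
    """Hypothetical detector, e.g. a person-near-the-house classifier."""
    return False  # replace with a real check

def respond(frame) -> None:
    """Hypothetical active path: describe frame with an LLM, speak via TTS."""
    print("Hey! Get away from the house.")  # stand-in for the ~1s TTS response

def run(camera, cooldown_s: float = 10.0) -> None:
    state = State.PASSIVE
    active_until = 0.0
    for frame in camera:  # `camera` is any iterable of frames
        now = time.monotonic()
        if state is State.PASSIVE:
            if trigger_fired(frame):
                state = State.ACTIVE           # deterministic flip
                active_until = now + cooldown_s
                respond(frame)
        elif now > active_until:
            state = State.PASSIVE              # cool down, back to watching
```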

WanFaceDetailer

I made a workflow for detailing faces in videos (using Impact-Pack). Basically, it uses the Wan2.2 Low model for 1-step detailing, but depending on your preference you can change the settings or use V2V like InfiniteTalk. Use it, improve it, and share your results. !! Caution !! It uses loads of RAM. Please bypass Upscale …
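For anyone unfamiliar with what a face detailer does under the hood, here is a conceptual sketch of the detect → crop → refine → paste loop. `detect_faces` and `refine` are hypothetical stand-ins for the Impact-Pack detector and the Wan2.2 Low 1-step pass; the actual workflow runs as ComfyUI nodes, not this code.

```python
# Hedged, conceptual sketch of a face-detailer loop over video frames.
# detect_faces() and refine() are hypothetical stand-ins for what the
# Impact-Pack nodes and the Wan2.2 Low 1-step pass do inside the workflow.
from PIL import Image

def detect_faces(frame: Image.Image) -> list[tuple[int, int, int, int]]:
    """Hypothetical: return face boxes as (left, top, right, bottom)."""
    return []  # replace with a real detector

def refine(crop: Image.Image) -> Image.Image:
    """Hypothetical: low-denoise (e.g. 1-step) diffusion pass on the crop."""
    return crop  # replace with the real refiner

def detail_video(frames: list[Image.Image], pad: int = 32) -> list[Image.Image]:
    out = []
    for frame in frames:
        frame = frame.copy()
        for (l, t, r, b) in detect_faces(frame):
            # pad the box so the refiner sees some context around the face
            box = (max(l - pad, 0), max(t - pad, 0),
                   min(r + pad, frame.width), min(b + pad, frame.height))
            frame.paste(refine(frame.crop(box)), box[:2])
        out.append(frame)
    return out
```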