This sub has had a distinct lack of dancing 1girls lately

So many posts with actual new model releases and technical progression, why can't we go back to the good old days where people just posted random waifus? /s It just uses the standard Wan 2.2 I2V workflow with a wildcard prompt like the following repeated 4 or 5 times: {hand pops|moving her body and shaking her …
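For anyone unfamiliar with that {a|b|c} wildcard syntax, here's a minimal sketch in Python of how such a prompt expands; the template text and the `expand_wildcards` helper are my own illustration, not the poster's workflow:

```python
import random
import re

def expand_wildcards(prompt: str) -> str:
    """Replace every {a|b|c} group with one randomly chosen option."""
    pattern = re.compile(r"\{([^{}]+)\}")
    # Keep substituting until no wildcard groups remain.
    while pattern.search(prompt):
        prompt = pattern.sub(lambda m: random.choice(m.group(1).split("|")), prompt)
    return prompt

# Hypothetical wildcard template, repeated a few times as the post describes.
template = "{hand pops|moving her body and shaking her hips|spinning slowly}"
prompt = ", ".join(expand_wildcards(template) for _ in range(5))
print(prompt)
```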

How can I do this with Wan VACE?

I know Wan can be used with pose estimators for TextV2V, but I'm unsure about going from a reference image to video. The only one I know of that can do reference image to video is UniAnimate. A workflow or resources for doing this in Wan VACE would be super helpful! submitted by /u/Fresh_Sun_1017
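The VACE/UniAnimate wiring itself lives in the ComfyUI workflow, but the pose-extraction half is the same either way. As a sketch, here's how you might pull per-frame pose maps from a driving video with controlnet_aux's OpenposeDetector; the file paths are assumptions for illustration:

```python
# Sketch: extract per-frame pose maps from a driving video, the usual
# preprocessing step before feeding a pose-control model such as VACE.
import cv2
from PIL import Image
from controlnet_aux import OpenposeDetector

detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

cap = cv2.VideoCapture("driving_dance.mp4")  # hypothetical input clip
pose_frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV gives BGR; the detector expects an RGB PIL image.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    pose_frames.append(detector(Image.fromarray(rgb)))
cap.release()

for i, pose in enumerate(pose_frames):
    pose.save(f"pose_{i:05d}.png")  # use these frames as the control video
```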

Sydney’s Comfy Tips

Made with Kijai's InfiniteTalk workflow and Higgs Audio for the voice.
https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_I2V_InfiniteTalk_example_02.json
https://huggingface.co/bosonai/higgs-audio-v2-generation-3B-base
submitted by /u/Race88
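If you want to see which nodes a shared workflow depends on before loading it, a quick sketch (assuming the linked file is in ComfyUI's UI-export format with a top-level "nodes" list, which is what the wrapper's example workflows use):

```python
# Peek at which node types a shared ComfyUI workflow uses.
import json
import urllib.request

url = ("https://raw.githubusercontent.com/kijai/ComfyUI-WanVideoWrapper/"
       "main/example_workflows/wanvideo_I2V_InfiniteTalk_example_02.json")
with urllib.request.urlopen(url) as resp:
    workflow = json.load(resp)

# Each node in a UI-exported workflow carries a "type" field.
node_types = sorted({node["type"] for node in workflow.get("nodes", [])})
print("\n".join(node_types))
```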

Experimenting with Continuity Edits | Wan 2.2 + InfiniteTalk + Qwen Image Edit

Here is Episode 3 of my AI sci-fi film experiment. Earlier episodes are posted here, or you can watch them at www.youtube.com/@Stellarchive This time I tried to push continuity and dialogue further. A few takeaways that might help others: making characters talk is tough. Render times are huge, and often a small issue is enough …

Made a local AI pipeline that yells at drivers peeing on my house

Last week I built a local pipeline where a state machine + LLM watches my security cam and yells at Amazon drivers peeing on my house. The state machine is the magic: it flips the system from passive (just watching) to active (video/audio ingest + ~1s TTS out) only when a trigger hits. Keeps things deterministic …
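The passive/active flip described above maps onto a tiny finite state machine. Here's a minimal sketch; the trigger label, threshold, cooldown, and `say()` stub are my assumptions, not the poster's actual code:

```python
# Minimal sketch of the passive -> active flip the post describes.
import time
from enum import Enum, auto

class State(Enum):
    PASSIVE = auto()  # just watching, no audio/video ingest
    ACTIVE = auto()   # full ingest + TTS output

class Watcher:
    def __init__(self, cooldown_s: float = 30.0):
        self.state = State.PASSIVE
        self.cooldown_s = cooldown_s
        self.active_since = 0.0

    def on_detection(self, label: str, confidence: float) -> None:
        # Deterministic trigger: only a specific label above a threshold
        # flips the machine active; heavier work runs only in ACTIVE.
        if self.state is State.PASSIVE and label == "person_loitering" and confidence > 0.8:
            self.state = State.ACTIVE
            self.active_since = time.monotonic()
            self.say("Hey! This is private property. Please move along.")

    def tick(self) -> None:
        # Fall back to PASSIVE after the cooldown so the system isn't always-on.
        if self.state is State.ACTIVE and time.monotonic() - self.active_since > self.cooldown_s:
            self.state = State.PASSIVE

    def say(self, text: str) -> None:
        print(f"[TTS] {text}")  # stand-in for the ~1s TTS path
```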