I saw a reel showing Elsa (and other characters) doing TikTok dances. The animation used a real dance video for motion and a single image for the character. Face, clothing, and body physics looked consistent, aside from some hand issues. I tried doing the same with Wan2.1 VACE. My results aren't bad, but they're not as clean or polished: the movement is less fluid, the face feels more static, and generation takes a while. Questions: How do people get those higher-quality results? Is Wan2.1 VACE the best tool for this? Are there any platforms that simplify the process, like Kling AI or Hailuo AI? submitted by /u/Illustrious-Sector-7
We've pushed an LTX-2.3 update today. The Distilled model has been retrained (now v1.1) with…
The open-weights model ecosystem shifted recently with the release of the
Language models (LMs), at their core, are text-in and text-out systems.
This paper was accepted at the Workshop on Navigating and Addressing Data Problems for Foundation…
Building effective reward functions can help you customize Amazon Nova models to your specific needs,…
At Google Cloud, we often see customers asking themselves: "How can we manage our generative…