When she says she only likes open source dudes
submitted by /u/Jack_Fryy
submitted by /u/XMasterrrr
We just released RadialAttention, a sparse attention mechanism with O(n log n) computational complexity for long video generation. 🔍 Key features: ✅ Plug-and-play: works with pretrained models like #Wan, #HunyuanVideo, and #Mochi ✅ Speeds up both training and inference by 2–4×, with no quality loss. All you need is a pre-defined static attention mask! ComfyUI integration is in progress and will …
Read more “Radial Attention: O(n log n) Sparse Attention with Energy Decay for Long Video Generation”
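The "pre-defined static attention mask" idea can be sketched in plain NumPy: each query attends only to a fixed, distance-thinned set of key positions, so the mask has O(n log n) nonzero entries and can be dropped into standard softmax attention. This is a minimal illustration, not the paper's actual mask; the exponentially spaced offsets and the function names `radial_static_mask` / `masked_attention` are assumptions made here for the sketch.

```python
import numpy as np

def radial_static_mask(n: int) -> np.ndarray:
    # Each query i attends to itself plus keys at exponentially
    # spaced offsets (±1, ±2, ±4, ...): O(log n) keys per row,
    # O(n log n) nonzeros total. Hypothetical stand-in for the
    # paper's energy-decay mask.
    mask = np.zeros((n, n), dtype=bool)
    offsets = [0] + [s * 2**k
                     for k in range(int(np.log2(n)) + 1)
                     for s in (-1, 1)]
    for i in range(n):
        for d in offsets:
            j = i + d
            if 0 <= j < n:
                mask[i, j] = True
    return mask

def masked_attention(q, k, v, mask):
    # Standard scaled dot-product attention with disallowed
    # positions set to -inf before the softmax.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Because the mask is static (it depends only on sequence length, not on content), it can be precomputed once and reused across layers and steps, which is what makes the approach plug-and-play with pretrained models.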
Flux Kontext can change a poster's title or text while keeping the font and style. It's really simple: one prompt does it. Prompt: “replace the title “The New Avengers” with “Temu Avengers”, keep the typography and style, reduce font size to fit.” Workflow: https://github.com/casc1701/workflowsgalore/blob/main/Flux%20Kontext%20I2I submitted by /u/nazihater3000
I saw a reel showing Elsa (and other characters) doing TikTok dances. The animation used a real dance video for motion and a single image for the character. Face, clothing, and body physics looked consistent, aside from some hand issues. I tried doing the same with Wan2.1 VACE. My results aren’t bad, but they’re not …
Read more “How are these AI TikTok dance videos made? (Wan2.1 VACE?)”
submitted by /u/Dry-Resist-4426
submitted by /u/OrangeFluffyCatLover
You can find the workflow by scrolling down on this page: https://comfyanonymous.github.io/ComfyUI_examples/flux/ submitted by /u/comfyanonymous
credit to @unreelinc submitted by /u/Leading_Primary_8447
100% made with open-source tools: Flux, WAN2.1 VACE, MMAudio, and DaVinci Resolve. submitted by /u/Race88