Z-Image-Turbo vs Qwen Image 2512
submitted by /u/Artefact_Design
submitted by /u/IAmGlaives
What tools are used for this type of video? I was thinking FaceFusion or some kind of face-swap tool in Stable Diffusion. Could anybody help me? submitted by /u/vasthebus
For some time now, I have noticed that when I start A1111, miners are downloaded from somewhere and prevent A1111 from starting. A folder (.configs) was created under my user name, and inside it there is a file called update.py and often two randomly named folders that contain various miners and .bat files. Also …
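For anyone wanting to check for the same symptom, here is a minimal sketch, assuming the folder layout the post describes (a .configs folder in the user's home directory containing update.py and stray .bat files):

```python
# Minimal sketch: flag the infection markers described in the post above.
# Assumption: the malware uses a ".configs" folder directly under the
# user's home directory, as reported; adjust the path if yours differs.
from pathlib import Path

suspect = Path.home() / ".configs"
if suspect.exists():
    print(f"Suspicious folder found: {suspect}")
    for item in sorted(suspect.rglob("*")):
        # update.py and loose .bat files are the markers mentioned in the post
        if item.name == "update.py" or item.suffix == ".bat":
            print(f"  possible miner component: {item}")
else:
    print("No .configs folder found in the home directory.")
```

This only reports the markers; actually removing the files and finding the reinfection vector (often a compromised extension or a tampered launch script) is a separate cleanup step.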
Boring day… so I had to do something 🙂 3 segments… 832×480… 4 steps… then upscaled (Topaz Video). Generation time: ~350–450 seconds per segment. Used Clipchamp to edit the final video.
Workflows: https://drive.google.com/file/d/1Z57p3yzKhBqmRRlSpITdKbyLpmTiLu_Y/view?usp=sharing
For more info read my previous posts:
https://www.reddit.com/r/StableDiffusion/comments/1prs5h3/rider_zimage_turbo_wan_22_rtx_2060_super_8gb_vram/
https://www.reddit.com/r/StableDiffusion/comments/1pqq8o5/two_worlds_zimage_turbo_wan_22_rtx_2060_super_8gb/
https://www.reddit.com/r/StableDiffusion/comments/1pko9vy/fighters_zimage_turbo_wan_22_flftv_rtx_2060_super/
https://www.reddit.com/r/StableDiffusion/comments/1pi6f4k/a_mix_inspired_by_some_films_and_video_games_rtx/
https://www.reddit.com/r/comfyui/comments/1pgu3i1/quick_test_zimage_turbo_wan_22_flftv_rtx_2060/
https://www.reddit.com/r/comfyui/comments/1pe0rk7/zimage_turbo_wan_22_lightx2v_8_steps_rtx_2060/
https://www.reddit.com/r/comfyui/comments/1pc8mzs/extended_version_21_seconds_full_info_inside/
submitted by /u/MayaProphecy
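The author's actual setup is the linked ComfyUI workflow, but as a rough illustration of the stated settings (832×480, 4 steps per segment), here is a hedged diffusers-style sketch. The checkpoint id and prompt are placeholder assumptions, not the author's pipeline:

```python
# Hedged sketch only: reproduces the stated settings (832x480, 4 steps)
# with diffusers' generic loader. The repo id below is an assumption;
# the post's real workflow is the linked ComfyUI graph, not this script.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.2-TI2V-5B-Diffusers",  # assumed Wan 2.2 checkpoint id
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # helps fit 8 GB cards like the RTX 2060 Super

result = pipe(
    prompt="a rider crossing a desert at dusk",  # placeholder prompt
    width=832,
    height=480,
    num_inference_steps=4,  # the 4-step setting mentioned in the post
)
export_to_video(result.frames[0], "segment.mp4", fps=16)
```

Each segment would then be upscaled externally (the post uses Topaz Video) and cut together in an editor such as Clipchamp.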
I should be able to get this all up on GitHub tomorrow (27th December) with the workflow, docs, and credits to the scientific paper I used to help me – Happy Christmas all – Pete submitted by /u/shootthesound
submitted by /u/mtrx3
Former 3D Animator trying out AI – is the consistency getting there? Attempting to merge 3D models/animation with AI realism. Greetings from my workspace. I come from a traditional 3D modeling background, and lately I have been dedicating my time to a new experiment. This video is a complex mix of tools, not only ComfyUI. To achieve this result, I fed my own 3D renders into the …
https://www.modelscope.cn/models/Qwen/Qwen-Image-Edit-2511
https://huggingface.co/Qwen/Qwen-Image-Edit-2511
https://huggingface.co/lightx2v/Qwen-Image-Edit-2511-Lightning
https://huggingface.co/unsloth/Qwen-Image-Edit-2511-GGUF
submitted by /u/Total-Resort-3120
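For anyone wanting to try the weights linked above outside ComfyUI, here is a minimal, hedged sketch using diffusers' generic loader. It assumes the 2511 repo keeps the diffusers layout of earlier Qwen-Image-Edit releases; the prompt and filenames are placeholders:

```python
# Hedged sketch: load the 2511 edit checkpoint via diffusers' generic
# loader, which resolves the pipeline class from the repo's model index.
# Assumption: "Qwen/Qwen-Image-Edit-2511" (linked above) ships in
# diffusers format like the earlier Qwen-Image-Edit releases.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2511",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

image = load_image("input.png")  # placeholder input image
edited = pipe(image=image, prompt="replace the sky with a sunset").images[0]
edited.save("edited.png")
```

The Lightning and GGUF repos linked above are, respectively, a few-step distilled variant and quantized weights for llama.cpp-style runtimes; loading those follows their own model cards rather than this sketch.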
Made this using mickmumpitz’s ComfyUI workflow that lets you animate movement by manually shifting objects or images in the scene. I tested both my higher-quality camera and my iPhone, and for this demo I chose the lower-quality footage with imperfect lighting. That roughness made it feel more grounded, almost like the movement was …