Tools for this?

What tools are used for these types of videos? I was thinking FaceFusion or some kind of face-swap tool in Stable Diffusion. Could anybody help me? submitted by /u/vasthebus


(Crypto)Miner loaded when starting A1111

For some time now, I've noticed that when I start A1111, some miners are downloaded from somewhere and prevent A1111 from starting. A folder (.configs) was created under my user name, and inside it there is a file called update.py, often along with two randomly named folders that contain various miners and .bat files. Also …

Not Human: Z-Image Turbo – Wan 2.2 – RTX 2060 Super 8GB VRAM

Boring day… so I had to do something 🙂 3 segments, 832×480, 4 steps, then upscaled (Topaz Video). Generation time: ~350–450 seconds per segment. Used Clipchamp to edit the final video.

Workflows: https://drive.google.com/file/d/1Z57p3yzKhBqmRRlSpITdKbyLpmTiLu_Y/view?usp=sharing

For more info, read my previous posts:

- https://www.reddit.com/r/StableDiffusion/comments/1prs5h3/rider_zimage_turbo_wan_22_rtx_2060_super_8gb_vram/
- https://www.reddit.com/r/StableDiffusion/comments/1pqq8o5/two_worlds_zimage_turbo_wan_22_rtx_2060_super_8gb/
- https://www.reddit.com/r/StableDiffusion/comments/1pko9vy/fighters_zimage_turbo_wan_22_flftv_rtx_2060_super/
- https://www.reddit.com/r/StableDiffusion/comments/1pi6f4k/a_mix_inspired_by_some_films_and_video_games_rtx/
- https://www.reddit.com/r/comfyui/comments/1pgu3i1/quick_test_zimage_turbo_wan_22_flftv_rtx_2060/
- https://www.reddit.com/r/comfyui/comments/1pe0rk7/zimage_turbo_wan_22_lightx2v_8_steps_rtx_2060/
- https://www.reddit.com/r/comfyui/comments/1pc8mzs/extended_version_21_seconds_full_info_inside/

submitted by /u/MayaProphecy

Former 3D animator trying out AI: is the consistency getting there?

Attempting to merge 3D models and animation with AI realism. Greetings from my workspace. I come from a traditional 3D modeling background. Lately, I have been dedicating my time to a new experiment. This video is a complex mix of tools, not only ComfyUI. To achieve this result, I fed my own 3D renders into the …

Qwen-Image-Edit-2511 got released.

- https://www.modelscope.cn/models/Qwen/Qwen-Image-Edit-2511
- https://huggingface.co/Qwen/Qwen-Image-Edit-2511
- https://huggingface.co/lightx2v/Qwen-Image-Edit-2511-Lightning
- https://huggingface.co/unsloth/Qwen-Image-Edit-2511-GGUF

submitted by /u/Total-Resort-3120

SAM Audio: the first unified model that isolates any sound from complex audio mixtures using text, visual, or span prompts

SAM-Audio is a foundation model for isolating any sound in audio using text, visual, or temporal prompts. It can separate specific sounds from complex audio mixtures based on natural language descriptions, visual cues from video, or time spans.

- https://ai.meta.com/samaudio/
- https://huggingface.co/collections/facebook/sam-audio
- https://github.com/facebookresearch/sam-audio

submitted by /u/fruesome


Let’s make some realistic humans: Now with Z-Image [Tutorial] – More examples and Info in Comments

This is a refresh of my tutorials on [how to make realistic people](https://www.reddit.com/r/StableDiffusion/comments/10yn8y7/lets_make_some_realistic_humans_tutorial/), [how to make realistic people with SDXL](https://www.reddit.com/r/StableDiffusion/comments/16opi4h/lets_make_some_realistic_humans_now_with_sdxl/), and [let's make realistic humans with Flux](https://www.reddit.com/r/StableDiffusion/comments/1enrkyz/lets_make_some_realistic_humans_now_with_flux/), but this time we will be using the Z-Image model. *Special note: imgpile currently has something going on, so many of the old SDXL images …