Hello. This may not be news to some of you, but Wan 2.1 can generate beautiful cinematic images. I was wondering how Wan would perform if I generated only a single frame, i.e., using it as a txt2img model. I am honestly shocked by the results. All the attached images were generated in Full HD (1920x1080 px), and on my RTX 4080 graphics card (16 GB VRAM) each image took about 42 s. I used the GGUF model Q5_K_S, but I also tried Q3_K_S and the quality was still great. The workflow contains links to the downloadable models.

Workflow: https://drive.google.com/file/d/1WeH7XEp2ogIxhrGGmE-bxoQ7buSnsbkE/view

The only postprocessing I did was adding film grain. It gives the images the right vibe; they wouldn't be as good without it.

One last thing: for the first 5 images I used the euler sampler with the beta scheduler, and the images are beautiful with vibrant colors. For the last three I used ddim_uniform as the scheduler, and as you can see the results are different, but I like the look even though it is not as striking. 🙂 Enjoy.

submitted by /u/yanokusnir