
Text-to-image comparison. FLUX.1 Krea [dev] Vs. Wan2.2-T2V-14B (Best of 5)

Note: this is not a "scientific test", just a best of 5 per prompt across both models (35 images for each model in all), so it will only give a general impression, summarised further down.

It's exciting that text-to-image is getting some love again. As others have discovered, Wan is very good as an image model. So I tried to get a style that is typically not easy: a kind of "boring" TV-drama still with a realistic look. I didn't want to go all action-movie, because I find being able to create more subtle images a lot more interesting.

Images alternate between FLUX.1 Krea [dev] first (odd image numbers) and Wan2.2-T2V-14B (even image numbers).

The prompts were longish natural-language prompts of around 150 words.

FLUX.1 Krea [dev] was run at default settings, except with CFG lowered from 3.5 to 2, at 25 steps.

Wan2.2-T2V-14B was run in a basic t2v workflow using the Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32 LoRA at 0.6 strength for speed, though that obviously does have a visual impact (good or bad).
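For anyone who wants to reproduce the Flux side of the settings above outside ComfyUI, here is a minimal sketch using Hugging Face diffusers. This is an assumption on my part, not the workflow I actually used: the model ID and the use of `FluxPipeline` are my guesses at the equivalent setup, and generation requires a GPU.

```python
# Hedged sketch of the FLUX.1 Krea [dev] settings described above
# (CFG 2, 25 steps), via diffusers rather than the ComfyUI workflow
# actually used in this comparison. Model ID is an assumption.
settings = {
    "flux": {"guidance_scale": 2.0, "num_inference_steps": 25},
    "wan_lora_strength": 0.6,  # LoRA strength used on the Wan side
}

def render_flux(prompt: str):
    """Generate one image with FLUX.1 Krea [dev] (GPU required)."""
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Krea-dev",  # assumed HF repo id
        torch_dtype=torch.bfloat16,
    ).to("cuda")
    return pipe(
        prompt,
        guidance_scale=settings["flux"]["guidance_scale"],
        num_inference_steps=settings["flux"]["num_inference_steps"],
    ).images[0]
```

The Wan side would instead load the step-distill LoRA at 0.6 strength into a t2v pipeline; I haven't sketched that here since the node setup doesn't map as cleanly to a few lines of code.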

General observations.

The Flux model had a lot more errors: wonky hands, odd anatomy, etc. I'd say 4 out of 5 images were very usable from Wan, but only 1 or fewer from Flux.

Flux also really didn't like freckles for some reason, and gave a much more contrasty look that I didn't ask for. However, the lighting in general was more accurate from Flux.

Overall I think Wan’s images look a lot more natural in the facial expressions and body language.

I'd be interested to hear what you think. I know this isn't exhaustive in the least, but I found it interesting at least.

submitted by /u/legarth
