Been using Z-Image Turbo pretty heavily since it dropped and wanted to dump some notes here because I kept seeing the same complaints I had on day one and nobody was really answering them properly.
The thing I kept running into: every portrait looked like a skincare ad. Glossy skin, symmetrical face, that weird “influencer default” look. I tried every SDXL trick I knew. “Average person”, “realistic”, “not a model”, “amateur photo”, “candid”. Basically nothing moved the needle. I was ready to write the model off as another Flux-lite.
Then I saw 90hex’s post here a while back about using actual photography vocabulary and something clicked. I’d been prompting Z-Image like it was SDXL when the encoder is clearly trained on way more specific stuff. Once I started naming actual cameras and film stocks instead of emotional modifiers, the plastic problem basically evaporated.
A few things that genuinely surprised me:
- “Point-and-shoot film camera” is the single highest-leverage phrase I’ve found. Drops the model out of beauty-default mode faster than any combination of “realistic/candid/amateur” ever did. “35mm film camera” works too. “iPhone snapshot with handheld imperfection” works. “Disposable camera” works. The common thread is naming a physical piece of gear with a real visual fingerprint.
- Quality-spam words (“masterpiece”, “8k”, etc.) do almost nothing. I ran A/B tests on 20 prompts with and without the usual quality block and the outputs were basically indistinguishable. The S3-DiT encoder clearly wasn’t trained on that vocabulary the way SD1.5 was. Replace the whole block with one camera + one film stock and you get way more signal per token.
- Negative prompts are legitimately dead at the default CFG of 1.0. I know the docs say this but I didn’t fully believe it until I tested. Putting “blurry, ugly, deformed, bad anatomy” in the negative field does absolutely nothing at that setting. If you bump CFG to 1.2-2.0 in Comfy some effect comes back, but Turbo starts overcooking and the speed advantage evaporates. Just write constraints as presence instead: “clean studio background, sharp focus, plain seamless backdrop” is way more effective than any negative prompt I tried.
- The bracket trick is the best-kept secret in this community. 90hex mentioned it in passing and I don’t think people realize how powerful it is for building character consistency without training a LoRA. Wrap alternatives in {this|that|the other} inside one prompt, batch 32, and you get an entire photoshoot of the same person across different cameras, lighting, poses, and moods. I’ve been using it to build reference libraries for characters I want to stay consistent across a short series. Zero training required. It’s absurd.
- Attention cap is real. Past about 75-100 effective tokens the model starts to drift. If you’re writing 400-word prompts (I was) you’re actively hurting yourself. 3-5 strong concepts, subject first, any quoted text second. The rest is gravy.
- Prefix/suffix style presets are a cheat code. Saw DrStalker’s 70-styles post a while back and started building my own table. Same base scene wrapped in different style prefix/suffix pairs gives you a pile of completely different looks with zero rewriting. Cinematic photo, medium format, analog film, Ansel Adams landscape, neon noir, dieselpunk, Ghibli-like, Moebius-like, pixel art, stained glass. Game changer for iteration speed.
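To make the bracket trick concrete, here’s a minimal sketch of how the expansion works. This is a hypothetical re-implementation of the `{this|that|the other}` syntax, not the actual Comfy node’s code — the real node may resolve brackets per-sample rather than exhaustively:

```python
import itertools
import random
import re

def expand_brackets(prompt: str) -> list[str]:
    """Expand every {a|b|c} group into all concrete prompt variants."""
    groups = re.findall(r"\{([^{}]*)\}", prompt)
    if not groups:
        return [prompt]
    # Replace each {...} group with a format placeholder, then fill
    # every combination of options via a Cartesian product.
    template = re.sub(r"\{[^{}]*\}", "{}", prompt)
    options = [g.split("|") for g in groups]
    return [template.format(*combo) for combo in itertools.product(*options)]

prompt = ("candid portrait of the same woman, "
          "{35mm film camera|disposable camera}, "
          "{soft window light|on-board flash}, "
          "{smiling|neutral expression}")
variants = expand_brackets(prompt)   # 2 * 2 * 2 = 8 variants
batch = random.sample(variants, k=4) # queue a subset as one "photoshoot"
```

Three groups of two options give eight variants of the same subject — vary only camera, light, and mood, keep the identity description fixed, and you get the reference-library effect without a LoRA.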
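The preset idea is just a lookup table of prefix/suffix pairs wrapped around an unchanged base scene. Sketch below — the style names and wording are my own stand-ins, not DrStalker’s actual list:

```python
# Hypothetical prefix/suffix preset table; entries are illustrative.
STYLES = {
    "analog film": ("analog film photo of ", ", Kodak Portra 400, visible grain"),
    "neon noir":   ("neon noir still of ",   ", rain-slick streets, hard rim light"),
    "pixel art":   ("pixel art of ",         ", 64x64 sprite, limited palette"),
}

def apply_style(base_scene: str, style: str) -> str:
    """Wrap the base scene in a style's prefix/suffix pair, unchanged."""
    prefix, suffix = STYLES[style]
    return f"{prefix}{base_scene}{suffix}"

base = "an old fisherman repairing a net on a wooden dock at dawn"
prompts = [apply_style(base, s) for s in STYLES]
```

One base scene, N styles, zero rewriting — that’s the whole trick.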
The pattern that finally unstuck me: stacking “realistic ordinary everyday” (which does nothing on its own) with a specific equipment spec (which does everything), something along the lines of “realistic ordinary everyday woman, point-and-shoot film camera, candid snapshot”. That was the first time I got an output that looked like an actual person I’d see on the street and not a magazine cover. The equipment word is the anchor; the ordinary words only start working once the anchor is there.
A few more things I’ve been testing that seem to work:
- “Shot on Kodak Portra 400” for warm skin tones that don’t look airbrushed
- “Ilford HP5 black and white” for actual film B&W grain that looks better than any “monochrome high contrast” prompt I tried
- “Cinestill 800T” for night scenes with that halation glow around lights
- Adding “slightly asymmetrical features” or “faint laugh lines” to portraits kills the symmetry default
- “On-board flash falloff” gives you that candid snapshot look with the harsh foreground light and falling-off background
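The film-stock tips above boil down to a small lookup. A sketch — the stock-to-look pairings are from my tests above, so treat them as anecdotal rather than official guidance:

```python
# Film-stock phrases and the look each one pulled the model toward.
FILM_STOCKS = {
    "Kodak Portra 400": "warm skin tones without the airbrushed look",
    "Ilford HP5":       "genuine film B&W grain",
    "Cinestill 800T":   "night scenes with halation glow around lights",
}

def with_stock(prompt: str, stock: str) -> str:
    """Append a 'shot on <stock>' anchor phrase to an existing prompt."""
    if stock not in FILM_STOCKS:
        raise KeyError(f"unknown stock: {stock}")
    return f"{prompt}, shot on {stock}"
```

One stock per prompt — it’s the same one-anchor principle as the camera phrases.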
Stuff I’m still figuring out:
- LoRA weights feel different from how they behave in SDXL. Anything above 0.85 tends to overcook. Anyone else seeing this?
- Text rendering is good but seems to tank if the prompt is too long. I think the model budgets attention between scene description and typography and long prompts starve the text encoder. Curious if others have tested this.
- Bilingual prompts (EN + CN in the same prompt) sometimes produce better English typography than pure EN prompts. No idea why. Might be a training data quirk.
- Hands are genuinely fixed but feet still look weird like 30% of the time. Haven’t found a reliable fix yet.
https://preview.redd.it/zrkeynx1ndug1.jpg?width=1920&format=pjpg&auto=webp&s=6ca058e66cc4c7e174f2f07ce5f6499cb15694d7
https://preview.redd.it/v557bkw7pdug1.jpg?width=1920&format=pjpg&auto=webp&s=250b92caf4634f2e40cc588728bcfdb96ec1ad2d
https://preview.redd.it/jhtxz9ecpdug1.jpg?width=1920&format=pjpg&auto=webp&s=3ba407eb55529659d95e8aca043076eea025ce3f
https://preview.redd.it/4ezi3rmhpdug1.jpg?width=1920&format=pjpg&auto=webp&s=5df585e2ced71d89e5b826941155e62a046a7f1e
https://preview.redd.it/ymibzw0lpdug1.jpg?width=1920&format=pjpg&auto=webp&s=13a51528f6849298b25e69054e3335eb65bdf741
https://preview.redd.it/c740vz9ppdug1.jpg?width=1920&format=pjpg&auto=webp&s=078a0239cc2a424c27a9b75c5a35881310b22b54