
Local Mechanisms of Compositional Generalization in Conditional Diffusion

Conditional diffusion models appear capable of compositional generalization, i.e., generating convincing samples for out-of-distribution combinations of conditioners, but the mechanisms underlying this ability remain unclear. To make this concrete, we study length generalization: the ability to generate images with more objects than were seen during training. In a controlled CLEVR setting (Johnson et al., 2017), we find that length generalization is achievable in some cases but not others, suggesting that models only sometimes learn the underlying compositional structure. We then investigate…
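The idea of length generalization above can be illustrated with a toy sketch: a conditional denoiser trained only on small object counts is queried at sampling time with a count outside the training range. Everything here is hypothetical (the hand-written "denoiser", the count range, and all names are illustrative stand-ins, not the paper's model); it shows only the shape of the conditioning query, not the actual mechanism under study.

```python
import numpy as np

rng = np.random.default_rng(0)

TRAIN_COUNTS = [1, 2, 3]   # object counts seen during training (illustrative)
OOD_COUNT = 5              # out-of-distribution count queried at sampling time

def denoise_step(x, t, count, total_steps=10):
    """One toy reverse-diffusion step conditioned on object count.
    A hand-written shrink toward a count-dependent target stands in
    for a learned score network."""
    target = np.full_like(x, float(count))  # the conditioner sets the target
    alpha = 1.0 - t / total_steps           # simple noise-level schedule
    return x + 0.5 * alpha * (target - x)

def sample(count, dim=4, steps=10):
    """Run the toy reverse process conditioned on `count`."""
    x = rng.normal(size=dim)  # start from pure noise
    for t in reversed(range(steps)):
        x = denoise_step(x, t, count, total_steps=steps)
    return x

# In-distribution vs. out-of-distribution conditioning: the second call
# is the length-generalization query, a condition never seen in training.
x_in = sample(TRAIN_COUNTS[-1])
x_ood = sample(OOD_COUNT)
```

In this caricature the "model" trivially extrapolates because the conditioner enters linearly; the paper's question is precisely when a learned diffusion model does or does not behave this way.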
AI Generated Robotic Content
