Multi-shot consistency was the test I cared about: the same girl across four cuts in different locations and lighting, with each shot using a different framing convention (long shot in the tide at dusk, side close-up at a train window, rear tracking down a slope at sunset, environmental wide of a summer seaside station). Most models I had tried before either drifted the character between shots or only held consistency when the framing stayed the same.

What worked here was treating GPT Image 2 as the keyframe step (one storyboard frame composed of all four panels), then handing the still to HappyHorse to animate each shot in sequence. Her hair, outfit, and proportions held across every cut, and the soft, warm Japanese-animation grade transitioned cleanly from dusk to sunset to late afternoon without flickering between scenes.

I ran it through MuleRun's HappyHorse agent so I did not have to host weights. The weights are not publicly available as of 2026-04-27, so this is the easiest way I have found to actually try the model end to end.
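If you want to reproduce the intermediate step of turning the single storyboard frame into four separate keyframes, here is a minimal sketch using Pillow. It assumes a 2x2 panel grid and the filename `storyboard.png`, both of which are my assumptions, not anything HappyHorse or MuleRun requires:

```python
# Minimal sketch: split a 2x2 storyboard frame into four per-shot keyframes.
# The 2x2 layout and filenames are assumptions; adjust rows/cols to your grid.
from PIL import Image

def split_storyboard(path, rows=2, cols=2):
    board = Image.open(path)
    w, h = board.size
    pw, ph = w // cols, h // rows  # size of each panel
    panels = []
    for r in range(rows):
        for c in range(cols):
            # Crop one panel out of the composite grid
            panel = board.crop((c * pw, r * ph, (c + 1) * pw, (r + 1) * ph))
            panels.append(panel)
    return panels

# Save one keyframe per shot, ready to feed to the animation step
for i, panel in enumerate(split_storyboard("storyboard.png"), start=1):
    panel.save(f"shot_{i}_keyframe.png")
```

Composing all four panels in one generation is what keeps the character consistent; splitting afterward just gives each shot its own still to animate.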