Meet your new Casting Director: Runway Gen-4 References
The AI feature bringing continuity to your characters, locations, and creative sanity.
In the fast-evolving world of AI-generated video and imagery, there's one recurring headache for creators: visual consistency.
Want your protagonist to survive across scenes without changing gender, age, or hairstyle? Until now, that’s been a bit like asking my kids to do their chores two days in a row: technically possible, but unlikely.
Enter Runway Gen-4 References, a new feature in Runway’s Gen-4 model designed to solve this exact problem.
With it, users can upload up to three reference images (a selfie, a 3D render, or a still from a previous frame) and generate new visuals that stay true to those visual cues. Characters, environments, lighting, even clothing: all preserved without retraining the model or fine-tuning settings.
Why does this matter?
Consistency, often the Achilles’ heel of AI-generated content, is now an asset. With Gen-4 References, creators can build scenes where characters keep their faces, locations don’t morph inexplicably, and lighting remains coherent across angles.
It’s a small checkbox on the interface, but a big leap for narrative cohesion.
This is especially crucial for short films, music videos, and visual storytelling, where a character’s identity or a location’s ambiance drives the plot. Rather than brute-forcing continuity with endless tweaking, or worse, praying to the Prompt Gods, creators can maintain control with just a few clicks.
Blah, blah, blah. Let’s see it in action.