Midjourney Finally Hits “Play”
How the new V-1 Video Model shifts the platform from beautiful stills to living, breathing clips, and what that means for your workflow today.
When Midjourney’s founder David Holz started talking about real-time, open-world simulation back in 2023, a lot of us smiled politely and went back to compositing our static hero shots.
Fast-forward to June 2025, and the first tangible building block of that vision has landed: V-1 Video, an image-to-video (I2V) model that turns any Midjourney image, or even an external still, into a five-second motion clip. It's not the holodeck yet, but it's a very real step in that direction.
What ships today
The pricing math
Midjourney charges roughly 8x more for a video job than for an image job, and each video job produces four 5-second clips. That puts a video job at about the same cost as an upscale, or roughly "one image's worth of cost" per second of video.
Plans: Basic users can play, but you'll burn through Fast minutes quickly. If video becomes core to your pipeline, moving up one tier (or using Relax-mode video on Pro) keeps budgets sane.
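If you want to sanity-check that pricing, the stated ratios reduce to a few lines of arithmetic. A quick sketch (illustrative units only: actual Fast-minute costs depend on your plan, and the variable names here are my own):

```python
# Back-of-the-envelope pricing check, using only the ratios stated above.
# Units are illustrative: 1.0 = the cost of one image job on your plan.
IMAGE_JOB_COST = 1.0        # baseline: one image job
VIDEO_JOB_MULTIPLIER = 8    # a video job costs ~8x an image job
CLIPS_PER_JOB = 4           # each video job returns four clips
SECONDS_PER_CLIP = 5        # each clip runs five seconds

video_job_cost = IMAGE_JOB_COST * VIDEO_JOB_MULTIPLIER    # 8 image jobs
total_seconds = CLIPS_PER_JOB * SECONDS_PER_CLIP          # 20 s of footage
cost_per_second = video_job_cost / total_seconds          # image jobs per second

print(f"One video job = {video_job_cost:g} image jobs for {total_seconds} s of clips")
print(f"= {cost_per_second:g} image-job cost per second of video")
```

So twenty seconds of footage per job is what keeps the per-second cost down in the "one image's worth" range, even though the job itself is 8x an image job.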
A starter workflow (tested this morning)
1. Generate stills as usual: V-7, Niji, whatever your heart desires.
2. Animate: start with Automatic for a good baseline. If timing or arcs feel off, switch to Manual and add a motion prompt.
3. Extend: stitch together up to 20 seconds inside Midjourney.
4. Polish externally: drop the clip(s) into CapCut or Premiere for sound, pacing, and color tweaks. If resolution is a bottleneck, run the final render through Topaz Video AI or Magnific.
Let’s watch some examples:
Video by @astronomerozge1
Video by @aziz4ai
Video by @blueveedesign
Video by @bygen_ai
Video by @hc_dsn
Video by @JesusPlazaX
Video by @jossslopez
Video by @juliewdesign_
Video by @kattlatte
Video by @LudovicCreator
Video by @Morph_VGart
Happy experimenting! May your frames stay smooth, your budgets stay sane, and your coffee stay out of frame on High Motion.