Midjourney strikes back with its new Omni-Reference feature
Omni-Reference is Midjourney's new offering for consistent characters
I was vocal about the shortcomings of Midjourney v7 at first.
It turns out, I was just a bit impatient.
In just a few weeks, they've made significant improvements.
One of the standout updates is Omni-Reference.
Midjourney is rolling out this highly anticipated feature, which promises to elevate the level of detail and control in AI-generated images. Omni-Reference allows users to seamlessly integrate specific image references directly into their prompts, making it easier than ever to generate the exact visual elements they envision.
What is Omni-Reference?
Omni-Reference is a new image reference system that can replicate the functionality of the 'character references' feature from Midjourney’s V6, but with even more flexibility and power.
It allows users to directly include images, such as characters, objects, vehicles, or even non-human creatures, into their image generations.
Think of it as a way to tell the AI: “I want this in my image” with a level of specificity that wasn’t possible before.
Take a look at some of the examples below:
How to Use Omni-Reference
The process of incorporating Omni-Reference into your prompt is straightforward:
On the web: Simply drag your image into the prompt bar and drop it into the bin labeled “omni-reference.” A slider icon will appear, allowing you to control the strength of the reference.
On Discord: Use the command --oref [url], where you supply the image URL, plus --ow to control how closely the AI adheres to the reference.
Let’s say we meet a girl.
Photograph, Frontal Shot, portrait of 20 year old woman, with long black hair and ponytail, white tight tshirt, white background, studio lights, passport shot --ar 4:5 --raw --p --stylize 250
We can take a good look at her.
Photograph, Side Profile Shot, 20 year old woman, with long black hair and ponytail, white tight tshirt, white background, studio lights, passport shot --ar 4:5 --raw --p --ow 20 --stylize 250 --oref [url of the previous image]
She likes to workout, and every now and then she changes her hairstyle.
Photograph, full body, 20 year old athletic beautiful latin woman, with a shaved military hairstyle, running fast, speed, motion, movement, simple black sport tshirt, close up intensity, cinematic still --ar 4:5 --raw --p --ow 20 --stylize 250 --oref [url of the previous image]
But she is not a regular girl. She’s actually a cyborg.
Extreme close-up of a cyborg woman looking at the camera made of subtle futuristic transparent electronic pieces, pastel bold geometric patterns, high-fashion outfit, sky-blue background, minimalist portrait, high tonal range --ar 4:5 --raw --p --ow 50 --stylize 250 --oref [url of the previous image]
What do you think? Good or bad?
Omni-Weight: The secret to control
The magic behind Omni-Reference lies in the “omni-weight” parameter, represented by --ow. This controls how strictly the AI follows your reference. The range goes from 0 to 1000, with 100 being the default.
Low Omni-Weight (e.g., --ow 25): This is ideal for cases where you want to preserve the reference in a more abstract or stylized way. For example, transforming a photo into an anime style without adhering strictly to the reference.
High Omni-Weight (e.g., --ow 400): This ensures that the AI focuses more on faithfully replicating your reference, such as when you want a character's face or clothing to be clearly visible.
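As a rough sketch of the difference, here is the same hypothetical prompt at both ends of the scale (the URL is a placeholder for your own reference image):

Anime illustration of a 20 year old woman with long black hair, ponytail, white tshirt --ow 25 --oref [url of your reference]

Anime illustration of a 20 year old woman with long black hair, ponytail, white tshirt --ow 400 --oref [url of your reference]

At 25, expect only a loose resemblance filtered through the anime style; at 400, the face and outfit should track the reference much more closely.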
Note: If you are not getting the results you want (for example, your prompt describes a side view but you only get frontal shots of your character), your omni-weight may be too high.
Practical Tips for Using Omni-Reference
Personalization: Omni-Reference can help preserve a character's specific traits. For example, if you want a character to keep holding a particular sword, include the sword in your reference image and mention it explicitly in your prompt.
Multiple References: If you want to include two characters or objects in your image, it’s possible to refer to both in your prompt, even if they appear in different images or side by side. This can help generate images that stay true to multiple reference points.
Finally, Style Transfer
You can use this same mechanism to convert a character from one style to another, keeping in mind the changes such a transfer implies.
Let’s see two examples:
[3D Pixar / Anime]-style full body woman, with a [clothing description], animation, highly detailed --ar 4:5 --raw --ow 20 --stylize 250 --oref [url of the original character]
Not bad. Not great either.
If you’re attempting a style transfer but want to preserve certain aspects of your reference (e.g., blonde hair and red suspenders for an anime character), you can use a lower omni-weight and over-specify what needs to be preserved.
Keep in mind that the feature is still in its testing phase, but it already shows potential for creating highly customized and detailed images.
We'll see how it evolves. This time, I'm looking forward to it.