Midjourney, a popular AI image-generation service, has introduced a new feature that tackles a significant challenge for AI generators: recreating the same character consistently across different images. The difficulty stems from the diffusion models that underpin these tools. Such models, like Stability AI’s open-source Stable Diffusion algorithm, synthesize an image by gradually refining random noise until it matches a textual prompt. As with other generative AI applications, the results are inherently inconsistent: each run of a prompt produces a new, slightly different output.
In response to this limitation, Midjourney has unveiled a new tag, “--cref” (character reference), which users can append to their prompts in the Midjourney Discord. The tag matches character features such as facial expressions, body types, and attire from an image at a provided URL, allowing for more coherent character continuity in visual storytelling and other creative endeavors.
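In practice, a prompt might look like the following (the scene description and URL here are purely illustrative):

/imagine prompt: a red-haired detective walking through a rain-soaked alley --cref https://example.com/my-character.png

Midjourney then attempts to render the new scene with a character resembling the one in the referenced image.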
The feature works best with previously generated Midjourney images: pointing “--cref” at one of these improves the accuracy of character reproduction. Users can fine-tune the result with the “--cw” (character weight) tag, which adjusts how closely the new image follows the original reference. While still evolving, the feature shows promise in bridging the gap between casual ideation and professional application within the creative realm.
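Combining the two tags, a lower character weight loosens the match. The “--cw” tag reportedly takes values from 0 to 100: at 100, the default, Midjourney tries to match face, hair, and clothing, while lower values focus on the face alone. For example (again with an illustrative URL):

/imagine prompt: the same detective relaxing on a sunny beach --cref https://example.com/my-character.png --cw 50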
The feature has already sparked interest among artists and creators exploring its potential. Midjourney founder David Holz outlined the functionality, emphasizing its ability to blend character traits from multiple reference images and to facilitate cohesive visual narratives. As Midjourney continues to refine the feature and prepares for its official beta release, users are encouraged to experiment with incorporating character references into their creative processes.