What is Midjourney V8.1 and why does the April 2026 release matter?
Midjourney V8.1 is the image model released on April 30, 2026 that ships HD output as the default, runs 4 to 5 times faster than V7, and finally makes style references and moodboards behave consistently. For Practitioners, that means fewer rerolls, lower credit burn, and reference-driven looks you can actually lock down across a project.
If you have been using Midjourney casually, you probably have not changed your prompting habits since V6 or V7. That is the problem. V8.1 reads prompts differently, holds detail more aggressively, and rewards specific phrasing over keyword stacking. The same prompt that gave you a great image six months ago now produces a flatter, more generic result.
This guide walks through the seven V8.1 features that change daily output quality the most, with a copy-paste prompt template at the end you can drop into the Midjourney web app today.
How does HD mode work and when should you use it?
HD mode in V8.1 generates native 2K resolution images (roughly 2048×2048 pixels) without a separate upscale step. According to Midjourney's V8.1 release notes, HD is now the default, so every standard job already runs at the higher resolution. The visible difference shows up in skin texture, fabric weave, and small environmental details that V7 had to fake during upscale.
The catch is credit consumption. An HD job costs roughly what a standard job plus an upscale cost in V7, so you do not save credits over a V7 workflow; you save time, and you skip the separate Upscale (Subtle) versus Upscale (Creative) decision that V7 forced.
Use HD by default for any image headed to a client deck, a landing page, or print. Turn it off for ideation passes where you are testing 20 to 30 variations and only need 1K thumbnails to compare composition.
What are moodboards and srefs, and how do they save you time?
Moodboards and style references (srefs) are the two ways V8.1 lets you tell Midjourney what aesthetic you want without describing it in words. A moodboard is a saved collection of 6 to 20 reference images you upload once and reuse across prompts. An sref is a single image URL or numeric code you attach to one prompt with the --sref parameter.
In V7, srefs and moodboards drifted heavily. You would lock in a look on one image, then the next image in the same project would shift colour palette or lighting style. V8.1 fixes this. Midjourney's V8.1 update notes call out moodboards and srefs as the headline stability improvement.
The practical use case: build a moodboard for each client brand once, then run every prompt for that client with the moodboard attached. You stop re-describing the brand aesthetic in every prompt and your output stays on-brand across 50 to 100 images.
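That per-brand workflow can be sketched as a tiny helper. This is a minimal sketch, not anything Midjourney ships: the brand names and sref codes below are placeholders, and the only thing taken from the article is the --sref parameter itself.

```python
# Hypothetical per-brand sref registry; the codes are placeholders,
# not real Midjourney moodboard codes.
BRAND_SREFS = {
    "acme": "1234567890",
    "globex": "9876543210",
}

def with_brand_style(prompt: str, brand: str) -> str:
    """Append the saved sref code for a brand so every prompt stays on-brand."""
    code = BRAND_SREFS[brand]
    return f"{prompt} --sref {code}"

print(with_brand_style("minimal product shot on seamless white", "acme"))
# → minimal product shot on seamless white --sref 1234567890
```

Once the registry is filled in, no prompt for that client needs to re-describe the brand aesthetic.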
When should you turn on Raw mode?
Raw mode strips out Midjourney's default aesthetic styling so the model follows your prompt more literally. Add --raw to the end of any prompt, or toggle Raw in the web interface settings. Without Raw, Midjourney always biases output toward a cinematic, slightly stylised look, even when you ask for a flat product shot.
Turn Raw on for: product photography, technical illustrations, UI mockups, e-commerce hero shots, anything where the brief says "no artistic interpretation."
Leave Raw off for: editorial illustrations, brand campaigns, mood pieces, anything that benefits from Midjourney's house aesthetic. Raw off plus a clear sref is usually the sweet spot for editorial work, because the sref gives the look and Raw being off keeps Midjourney's natural lighting sensibility.
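If you template your prompts, the Raw on/off rule above is easy to encode. A minimal sketch, assuming the brief categories listed in this section; the function names are illustrative, not part of any Midjourney tooling.

```python
# Encodes the article's rule of thumb: literal briefs get --raw,
# editorial work keeps Midjourney's house aesthetic.
RAW_ON = {
    "product photography",
    "technical illustration",
    "ui mockup",
    "e-commerce hero",
}

def finalize(prompt: str, brief_type: str) -> str:
    """Append --raw only for brief types that demand literal rendering."""
    if brief_type.lower() in RAW_ON:
        return f"{prompt} --raw"
    return prompt

print(finalize("flat lay of a ceramic mug on oak", "product photography"))
# → flat lay of a ceramic mug on oak --raw
```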
How do you use Omni Reference for character consistency?
Omni Reference is the feature that lets you reuse the same person, animal, or object across multiple images. You attach a reference image with --oref [image-url] and set the strength with --ow [0-1000]. The default omni-weight is around 100. Push it to 400 to 600 for strong character lock, or drop to 25 to 75 if you want the reference to influence vibe but not face shape.
The 2026 use case Practitioners run into most: building a consistent fictional character for a content series, a UGC-style ad campaign, or an explainer video. Before Omni Reference, you needed LoRA training on Stable Diffusion or a custom Sora character upload. V8.1 makes character consistency a one-line parameter.
Try this prompt to test it: take any portrait you have rights to, upload it to Midjourney, then run "professional headshot of [subject] at a tech conference, soft lighting --oref [your-image-url] --ow 500 --ar 3:4". The face should hold across reruns.
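The --oref and --ow pattern above is easy to wrap in a helper that also guards the weight range. A sketch under the article's stated assumptions (weights 0 to 1000, 400 to 600 for a strong lock); the URL is a placeholder, and the function is illustrative only.

```python
def omni_prompt(prompt: str, image_url: str, weight: int = 100) -> str:
    """Attach an Omni Reference to a prompt.

    Per the article's guidance: 400-600 locks the character,
    25-75 lets the reference influence vibe but not face shape.
    """
    if not 0 <= weight <= 1000:
        raise ValueError("--ow must be between 0 and 1000")
    return f"{prompt} --oref {image_url} --ow {weight}"

print(omni_prompt(
    "professional headshot at a tech conference, soft lighting",
    "https://example.com/portrait.png",  # placeholder reference URL
    weight=500,
))
```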
How does the new Describe feature change reverse-engineering prompts?
Describe is the Midjourney feature that takes an image you upload and returns four prompt suggestions that would generate something similar. V8.1's updated Describe writes prompts in the natural-language style V8.1 expects, instead of the keyword-comma-keyword format that older Describe outputs used.
This matters because the V8.1 prompt parser favours descriptive sentences over tag stacks. Old Describe outputs gave you something like "woman, reading, cafe, sunlight, warm, cozy" which V8.1 reads as a flat list. New Describe gives you "a woman reading in a sun-lit cafe corner, late afternoon light angling through tall windows" which V8.1 parses for spatial relationships and lighting direction.
Run Describe on three competitor images you wish you had made, then study the natural-language structure. Your own prompts should mimic that pattern. The change in output quality is immediate.
What is the V8.1 prompt structure that actually works in 2026?
The V8.1 prompt formula that consistently produces usable output follows a five-part order: subject, action, environment, lighting, parameters. Each part is a short descriptive phrase, not a single keyword. The model weights earlier phrases more heavily, so put the most important visual element first.
Try this prompt template the next time you open Midjourney:
A confident Hong Kong businesswoman in a tailored navy suit, presenting a slide deck to a focused team, modern glass-walled conference room overlooking Victoria Harbour at golden hour, warm directional sunlight from camera-left casting long shadows, shallow depth of field, photorealistic --ar 16:9 --s 250 --sref [your-moodboard-code] --raw
Notice the structure: subject (businesswoman with attire detail), action (presenting), environment (glass conference room, harbour view), lighting (golden hour, directional sunlight, shadows), parameters (aspect ratio, stylize, sref, raw). Swap out the specifics for your own brief and the formula holds.
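If you generate many briefs, the five-part formula can live in a small builder so the order never slips. A minimal sketch: the function and its keyword-to-flag conversion are my own illustration, not a Midjourney tool, and the example values are shortened from the template above.

```python
def build_prompt(subject: str, action: str, environment: str,
                 lighting: str, **params) -> str:
    """Assemble a five-part V8.1 prompt: subject, action, environment,
    lighting, then parameters. Positional args enforce the order."""
    parts = ", ".join([subject, action, environment, lighting])
    flags = " ".join(f"--{key} {value}" for key, value in params.items())
    return f"{parts} {flags}".strip()

print(build_prompt(
    "a confident businesswoman in a tailored navy suit",
    "presenting a slide deck to a focused team",
    "glass-walled conference room at golden hour",
    "warm directional sunlight from camera-left",
    ar="16:9", s=250,
))
```

Swapping the four phrases per brief while the parameter tail stays fixed is what keeps a 100-image project consistent.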
What are the common V8.1 mistakes Practitioners still make?
The biggest mistake is treating V8.1 like V7. Three habits to break: stuffing prompts with comma-separated keywords (V8.1 reads natural language better), upscaling everything (HD is already the default, so a separate upscale is wasted credits), and ignoring srefs (V8.1's biggest quality jump is in sref stability, so a workflow that does not use them leaves the gains on the table).
The second-biggest mistake is forgetting that V8.1 still struggles with rendered text. If your image needs to contain readable words, a logo, or a poster title, Midjourney will distort the spelling 60 to 80 percent of the time according to community testing. Use Nano Banana Pro, GPT Image, or Flux for any image with embedded text, then composite the Midjourney image as a background layer.
The third mistake is over-relying on Stylize values. Many V7 prompts used --s 750 or higher to force a painterly look. V8.1's default aesthetic is already strong; you rarely need to push Stylize above 250. Higher values now introduce more noise than style.
How should you build a V8.1 workflow this week?
Spend 30 minutes setting up two things and your output quality jumps immediately. First, build one moodboard per brand or content series you work on, with 8 to 12 reference images per board. Save the sref codes. Second, write down your five-part prompt template (subject, action, environment, lighting, parameters) and reuse it.
After those two steps, run a single-brief A/B test: same prompt with Raw on versus Raw off, same prompt with Stylize 100 versus 500, same prompt with and without your new sref. Within 10 generations you will know exactly which combination produces your brand's house look. That decision then becomes your default prompt structure for the next 100 images.
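The A/B grid above can be generated mechanically so no combination is skipped. A sketch assuming the three toggles named in this section (Raw on/off, Stylize 100 versus 500, sref on/off); the base prompt and sref code are placeholders.

```python
from itertools import product

# Enumerate the article's A/B grid: Raw x Stylize x sref = 8 variants.
base = "a woman reading in a sun-lit cafe corner, late afternoon light"

variants = []
for raw, stylize, sref in product([False, True], [100, 500], [None, "1234567890"]):
    prompt = f"{base} --s {stylize}"
    if sref:
        prompt += f" --sref {sref}"  # placeholder moodboard code
    if raw:
        prompt += " --raw"
    variants.append(prompt)

print(len(variants))  # → 8
for v in variants:
    print(v)
```

Run all eight against one brief, pick the winner, and that combination becomes your default tail for the next 100 images.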
Midjourney V8.1 is not a small upgrade. The combination of HD by default, stable srefs, working Omni Reference, and natural-language parsing changes what a single skilled operator can produce in a day. Practitioners who rebuild their prompt habits this month will compound that advantage every week. We understand AI, and we understand you; with UD by your side, AI never feels cold.
🚀 Ready to Build a Production-Grade AI Visual Workflow?
Knowing the features is one thing. Wiring Midjourney V8.1 into a repeatable workflow that produces 50 to 100 on-brand images a week is another. UD has 28 years of experience helping Hong Kong teams adopt new tools without breaking what already works. We'll walk you through every step, from moodboard design to prompt templates to team handoff.