There is a parameter in Midjourney V8.1 called --stylize. It accepts any value from 0 to 1000. Most tutorials still tell you to crank it to 750 or higher for "more artistic" results. In V8.1 Alpha, that advice produces worse images, not better ones. The model rewards a much narrower band, and once you understand that, your output quality jumps in a single render.
V8.1 launched on April 14, 2026, and it changed enough things under the hood that habits from V6 and V7 quietly stopped working. This article gives you five tips that actually improve V8.1 output, including the parameters most people set wrong and the ones almost nobody uses.
What Is New in Midjourney V8.1?
Midjourney V8.1 Alpha launched on April 14, 2026 with HD output enabled by default, generating 2K images without a manual upscale step, and a refreshed Style Reference system covering versions --sv 1 through --sv 4. HD mode is roughly 3x faster and 3x cheaper than V8 Alpha, while standard resolution is around 50% faster. The model is sharper and more responsive to specific prompts, but less forgiving of vague ones.
V8.1 also introduces deeper integration with the Style Creator tool, which lets you generate reusable --sref codes by selecting from a visual grid. The result: more consistency across image series, but only if you stop relying on the V7 prompting habits that V8.1 has started to penalise.
Tip 1: Keep --stylize Between 100 and 400, Not Higher
For V8.1 Alpha, the practical sweet spot for --stylize is 100 to 400. Anything above 500 introduces visual noise, over-saturated colours, and surreal distortions that look like AI rather than a deliberate creative choice. The default is 100, and many of your best images will live between 200 and 350.
This is a real shift from V7, where pushing --stylize 750 to 1000 was a common power move for getting more artistic flair. In V8.1, the model already applies more interpretation by default. Adding extreme stylization on top often pushes results past the point where they still match your prompt. If your image looks "too AI," lower the stylize value before changing anything else in the prompt.
Try this comparison: run the same prompt with --stylize 100, --stylize 300, and --stylize 800. In V8.1 Alpha, the first two are usually the keepers. The third often looks dreamlike in a way that breaks the brief.
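If you prefer to prepare the comparison in a text editor, the three variants can be generated with a short Python helper. This is an illustrative sketch only: Midjourney has no official public API, so these strings are meant to be pasted into the prompt box, and the example subject is a placeholder.

```python
# Build the same prompt at three --stylize values for a side-by-side test.
# The values (100, 300, 800) match the comparison described in Tip 1.

BASE_PROMPT = "a lighthouse on a storm-battered coast, low evening light"  # placeholder subject


def stylize_variants(base: str, values=(100, 300, 800)) -> list[str]:
    """Return one prompt string per --stylize value, with V8.1 flagged explicitly."""
    return [f"{base} --stylize {v} --v 8.1" for v in values]


for prompt in stylize_variants(BASE_PROMPT):
    print(prompt)
```

Run all three in the same session and compare: in V8.1 Alpha the 100 and 300 renders usually track the brief, while the 800 render drifts.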
Tip 2: Use --style raw When Polish Is the Problem
Add --style raw to your prompt when you want grittier, more authentic results that escape the default Midjourney "polished glamour" look. It strips away the automatic beautification layer and gives you images that feel closer to documentary photography or candid editorial shots. This is essential for product photography, news visuals, and any brief where realism matters more than fantasy.
Without --style raw, V8.1 tends to add cinematic lighting, soft skin smoothing, and hyper-saturated colours to almost any portrait or scene. That looks great for fantasy art and album covers. It looks wrong for a corporate headshot, a real-estate listing, or any image meant to feel honest.
A practical workflow: draft your prompt without --style raw first, see what the default polish does, then re-run with --style raw if the result feels too magazine-cover. The two versions side by side will tell you immediately which the brief actually needs.
Tip 3: Replace Style Reference Images with Numeric --sref Codes
A numeric --sref code (like --sref 3986738193) calls a preset visual style from Midjourney's internal library and produces far more consistent results than uploading a reference image. Image-based style references vary by upload quality, lighting, and crop. Numeric codes are deterministic, repeatable across teams, and produce predictable looks for an entire image series.
Power users keep a personal library of 10 to 20 numeric codes that match their common briefs: one for editorial portraits, one for product hero shots, one for mood boards, one for retro 80s, one for minimalist studio. Sites like sref-midjourney.com and promptsref.com let you browse codes by category before locking one in.
To control intensity, pair the code with the --sw parameter, which sets style strength from 0 to 1000. Start at --sw 100. If the style is too dominant and is overriding your subject, lower it. If the result looks too generic, raise it. You can also use --sref random to discover new styles, then capture the resolved code from the output for reuse.
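The personal code library from this tip can live in something as simple as a dict or a JSON file. A minimal sketch follows; the category names echo the examples above, but the numeric codes are placeholders, not real style codes, and the helper function is the author of this sketch's invention, not a Midjourney feature.

```python
# A tiny personal --sref library kept as a plain dict.
# All code values below are PLACEHOLDERS; substitute codes you have
# actually tested and saved.

SREF_LIBRARY = {
    "editorial_portrait": "1111111111",  # placeholder code
    "product_hero":       "2222222222",  # placeholder code
    "retro_80s":          "3333333333",  # placeholder code
}


def with_style(prompt: str, style: str, sw: int = 100) -> str:
    """Append a saved --sref code and --sw strength to a prompt (sw defaults to 100, per Tip 3)."""
    code = SREF_LIBRARY[style]
    return f"{prompt} --sref {code} --sw {sw} --v 8.1"


print(with_style("a matte-black headphone on a concrete plinth", "product_hero"))
```

Raising or lowering the `sw` argument mirrors the tuning advice above: lower it when the style overrides the subject, raise it when the result looks generic.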
Tip 4: Build a Reusable Brand Look with Style Creator
Style Creator inside Midjourney V8.1 lets you generate a custom --sref code by picking and rejecting images from a visual grid. The output is a reusable style code that locks in your aesthetic across every future image, which is the missing piece for anyone running a content series, brand account, or product catalogue.
The process takes about ten minutes the first time. You open Style Creator, scroll through grids of generated images, click the ones that match your taste, and explicitly reject the ones that do not. The tool uses both signals to triangulate a style code that captures the visual DNA you have selected. Save that code in your password manager or notes app.
Once you have your code, every prompt becomes shorter. Instead of writing "warm cinematic colour grading, soft natural light, shot on Kodak Portra 400, slight grain, editorial composition," you write your subject and append your saved code. Output stays consistent across hundreds of images, which is the difference between content that looks like one brand and content that looks like ten different freelancers.
Tip 5: Reference Real Photographers Instead of Vague Descriptors
V8.1 responds far better to named photographers, directors, and visual artists than to abstract style words like "cinematic" or "moody." Names carry compressed meaning. "Annie Leibovitz portraiture" is a single phrase that encodes lighting style, composition philosophy, subject framing, and tonal grading. "Cinematic portrait" tells the model nothing specific.
The same logic applies across genres. "Roger Deakins cinematography" anchors a film-look palette better than "warm tones." "Wes Anderson centred composition" gives you a specific framing rule. "Helmut Newton fashion editorial" carries decades of compositional and lighting language in three words.
Combine names with --style raw to keep the look grounded. A prompt like "a CFO in a glass-walled boardroom, mid-afternoon light, Annie Leibovitz portraiture, --style raw --stylize 250" consistently outperforms "a CFO in a boardroom, professional, cinematic, dramatic lighting" in V8.1.
Try This Prompt Right Now
Copy this template, swap in your own subject and reference style, and run it in Midjourney V8.1 Alpha. The structure is built around the five tips above so you can feel each parameter doing its job:
Template:
[your subject in 8 to 15 specific words], [setting and mood], [photographer or director name + visual genre], shot on [film stock or camera if relevant], [lighting description] --style raw --stylize 250 --sref [your saved code or random] --sw 100 --v 8.1
Worked example:
a Hong Kong food stall owner in his sixties, behind a crowded dai pai dong on a humid summer night, candid documentary portrait in the style of Steve McCurry, neon and incandescent mixed lighting, shot on Kodak Portra 400, slight motion blur in the background --style raw --stylize 280 --sref random --sw 100 --v 8.1
Run that exact prompt twice. Then try the same brief without --style raw. The difference will show you why V8.1 is more responsive than V7 once you stop fighting its defaults.
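For repeated use, the template above can be expressed as a small builder function so your defaults live in one place. A sketch under the article's assumptions (defaults of --stylize 250, --sref random, --sw 100; the function itself is illustrative, not a Midjourney tool):

```python
def build_prompt(subject, setting, reference, lighting,
                 film_stock=None, stylize=250, sref="random", sw=100):
    """Assemble a V8.1 prompt following the article's template order:
    subject, setting/mood, photographer reference, optional film stock,
    lighting, then the parameter flags."""
    parts = [subject, setting, reference]
    if film_stock:
        parts.append(f"shot on {film_stock}")
    parts.append(lighting)
    body = ", ".join(parts)
    return f"{body} --style raw --stylize {stylize} --sref {sref} --sw {sw} --v 8.1"


# Rebuilds the worked example from this section.
print(build_prompt(
    subject="a Hong Kong food stall owner in his sixties",
    setting="behind a crowded dai pai dong on a humid summer night",
    reference="candid documentary portrait in the style of Steve McCurry",
    lighting="neon and incandescent mixed lighting",
    film_stock="Kodak Portra 400",
    stylize=280,
))
```

Swap the keyword arguments per brief and every prompt you paste in carries the same parameter discipline.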
Common Mistakes That Kill V8.1 Output
Three mistakes consistently destroy Midjourney V8.1 results: stacking too many style references, prompting in vague adjectives instead of concrete nouns, and forgetting to set the version flag. Each one degrades quality even when the rest of your prompt is well-built. Fixing these three things alone will measurably improve your output before you touch any of the five tips above.
The first mistake is stacking three or more style references at high strength. V8.1 tries to blend them all and produces a muddled image where no single style wins. Cap yourself at one --sref code at a time, or two with weights that clearly favour one over the other.
The second mistake is leaning on adjectives like "beautiful," "stunning," or "amazing." These words carry zero compositional information for the model. Replace every adjective with a concrete noun or verb: "low-angle shot," "morning fog rolling off the water," "subject leaning forward into the light." Concrete prompts produce concrete images.
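A quick self-check for the second mistake can even be automated: scan your draft prompt for low-information adjectives before you render. This is a toy linter of my own, not a Midjourney feature, and the banned list is illustrative; extend it with whatever filler words you catch yourself typing.

```python
import re

# Adjectives that carry no compositional information for the model.
VAGUE_WORDS = {"beautiful", "stunning", "amazing", "epic", "gorgeous"}


def flag_vague_words(prompt: str) -> list[str]:
    """Return any vague adjectives found in the prompt, sorted and lowercased."""
    tokens = re.findall(r"[a-zA-Z]+", prompt.lower())
    return sorted(set(tokens) & VAGUE_WORDS)


print(flag_vague_words("a stunning, beautiful sunset over the bay"))
# → ['beautiful', 'stunning']
```

Replace each flagged word with a concrete noun or verb ("low-angle shot," "morning fog rolling off the water") before rendering.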
The third mistake is forgetting --v 8.1 at the end of your prompt while V8.1 is still in alpha. If you skip the version flag, you may render in an older default model and never see what V8.1 actually does. Always specify the version explicitly during the alpha period.
When V8.1 Still Falls Short
Even with all five tips, Midjourney V8.1 still struggles with three categories of brief: anatomically precise hands and fingers in close-up, accurate text inside images, and any scene that requires multiple specific named characters interacting. These are model-level limitations, not prompt problems, and no amount of stylize tuning will fix them in a single render.
For text inside images, generate the image without text and add typography in Figma, Photoshop, or Canva afterwards. For complex hand poses, generate at a wider crop where hands are smaller, then upscale. For multi-character scenes, build them in two passes: generate one character at a time and composite. These workarounds are faster than fighting the model.
The honest truth is that no AI image tool in 2026 is one render away from a finished asset. Midjourney V8.1 gets you 80% of the way there in a fraction of the time, and the remaining 20% is still a human craft skill. That is a productivity multiplier, not a magic button. UD understands the cold edge of AI, and understands your challenges even better: 28 years alongside you, making technology a companion with warmth.
Ready to Build a Real AI Workflow Around This?
Midjourney is one tool in a much larger AI workflow. If you want to map out which AI tools fit which parts of your work, and stop guessing each time a new release lands, the UD team will walk you through every step, from tool selection and prompt design to building a reliable production pipeline that delivers consistent output every time.
Or test how deep your AI knowledge actually goes with the UD AI IQ Test.