Here is a fact that should keep every small business owner using AI awake at night. In 2024, an Air Canada chatbot invented a bereavement refund policy that did not exist. When the customer tried to claim the refund, Air Canada argued the chatbot was a "separate legal entity" responsible for its own statements. The tribunal disagreed. Air Canada was held legally liable for what its AI told a customer. The ruling set a precedent: if your AI confidently makes things up, you pay the bill. That confident invention has a name. It is called AI hallucination, and most Hong Kong SMEs have no idea how often it happens inside the tools they already use.
What Is an AI Hallucination?
An AI hallucination is when a large language model generates information that sounds plausible and confident but is factually wrong, fabricated, or completely invented. The model is not "lying" in the human sense. It is predicting the next most likely sequence of words based on patterns, with no built-in mechanism to verify whether what it says is actually true.
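To see why fluency and truth are separate things, here is a toy sketch of next-word prediction. It is our own illustration, not any vendor's code: real models use billions of learned weights rather than a hand-written table, but the principle is the same. The model picks a statistically likely next word, and there is no step where it checks the result against reality.

```python
import random

# A toy "language model": for each word, the words that most often
# followed it in the training text, with their counts.
bigrams = {
    "our":    {"refund": 5, "store": 3},
    "refund": {"policy": 8, "window": 2},
    "policy": {"covers": 6, "allows": 4},
    "covers": {"bereavement": 1, "damaged": 5},
}

def complete(word, steps=4):
    """Fluently extend a sentence by always picking a likely next word."""
    out = [word]
    for _ in range(steps):
        options = bigrams.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(complete("our"))
# Sometimes prints "our refund policy covers bereavement":
# perfectly fluent, confidently stated, and possibly a policy
# your business has never offered.
```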
A 2026 Suprmind benchmark study of leading AI models found hallucination rates on factual questions ranging from 3% on the best models to over 27% on consumer chat tools, depending on the topic. The same study showed AI models are 34% more likely to use words like "definitely" and "certainly" when generating incorrect information than when stating verified facts.
Why Do AI Models Hallucinate?
AI models hallucinate because they are pattern-completion machines, not fact-retrieval systems. When a question falls outside what the model was trained on, or when training data contained errors, the model still tries to produce a fluent answer. The fluency masks the gap in knowledge.
The three main causes:
1. Training data gaps: The model never saw the answer, so it interpolates from related patterns. A model trained mostly on global content may invent local Hong Kong details.
2. Outdated information: Most models have a knowledge cutoff date. Anything that changed after that date — prices, policies, personnel — is still "remembered" in the old form.
3. Over-confident generation: Models are tuned to be helpful. During training, an answer that attempts is usually rewarded more than an answer that declines, so models are biased toward generating any answer over no answer. One partial fix is to explicitly permit abstention, as in the sketch below.
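Because the bias against "I do not know" is a tuning artefact, you can claw back some honesty simply by telling the model that declining is acceptable. The sketch below shows the idea using the official OpenAI Python SDK; the model name and prompt wording are our illustrative choices, and the same pattern works with other vendors' chat APIs. This reduces confident guessing; it does not eliminate it.

```python
from openai import OpenAI  # official OpenAI Python SDK (pip install openai)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Explicitly rewarding "I don't know" pushes back against the
# "any answer beats no answer" bias baked in during training.
SYSTEM_PROMPT = (
    "Answer only from facts you are confident about. If you are not "
    "sure, reply exactly: 'I am not sure, please check with a staff "
    "member.' Never guess prices, policies, dates, or names."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whatever model you deploy
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What is your bereavement refund policy?"},
    ],
    temperature=0,  # lower temperature means less creative guessing
)
print(response.choices[0].message.content)
```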
How Bad Can Hallucinations Get for a Small Business?
Hallucinations can range from harmless small errors to lawsuits and brand damage. The Air Canada case set the legal precedent in 2024. In 2025 and 2026, similar cases have emerged across the US, UK, and EU. A 2026 PwC survey of business leaders found that 9% of AI initiatives delivered negative ROI, with hallucination-driven mistakes cited as a leading cause.
Real scenarios that hit SMEs:
1. A customer service bot promises a refund policy your shop does not offer. Under Hong Kong consumer law, misleading statements made on your behalf can bind you. You may have to honour what the bot said.
2. A sales chatbot quotes a discount that was never approved. Customers screenshot the conversation and demand the price.
3. An AI writing tool drafts marketing copy that misstates a competitor's product feature, and your team publishes it verbatim. The competitor's lawyers send a cease and desist.
4. An AI-generated FAQ on your website mentions a regulatory licence you do not actually hold. A regulator finds it during a routine sweep.
5. An AI agent books an appointment at a time your business is closed. Three customers show up to a locked door, leaving negative reviews.
How Can a Hong Kong SME Tell If Their AI Is Hallucinating?
The most reliable signal of an AI hallucination is when the output is highly specific but unverifiable. Watch for invented names, fake citations, specific numbers without a stated source, and dates that fall outside the model's known training window.
Five warning signs to check before publishing or acting on AI output:
1. Specific statistics with no source: "73% of Hong Kong SMEs use AI" without a study name or year is suspect. Real statistics have authors and dates.
2. Book or article titles you cannot find: Google the exact title in quotes. If nothing comes up, the model almost certainly invented it.
3. Quotes from named people: Search the exact quoted phrase. Real quotes show up in interviews, press releases, or news. Invented ones do not.
4. Hong Kong-specific facts that contradict your knowledge: If the model claims a local rule, regulation, or company detail that "feels off", verify it before passing it on.
5. Confident tone on a niche question: The riskier the topic, the more sceptically you should treat a confident answer. The sketch below shows one cheap way to stress-test an answer before you rely on it.
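If you want an automated version of these checks, one cheap trick borrowed from research tools such as SelfCheckGPT is a consistency test: ask the exact same factual question several times with randomness enabled and compare the answers. A model that knows a fact repeats it; a model that is guessing drifts. The sketch below assumes the OpenAI Python SDK and an illustrative model name.

```python
from openai import OpenAI

client = OpenAI()

def sample_answers(question, n=5):
    """Ask the same question n times with randomness enabled."""
    answers = []
    for _ in range(n):
        r = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": question}],
            temperature=1.0,      # encourage variation between runs
        )
        answers.append(r.choices[0].message.content.strip())
    return answers

answers = sample_answers(
    "In what year did our industry's licensing rules last change?"
)
# Five matching answers suggests genuine recall; five different
# answers is a strong signal the model is guessing.
if len(set(answers)) > 1:
    print("Inconsistent answers, verify before acting:", set(answers))
```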
How Can SMEs Prevent AI Hallucinations in Daily Operations?
The most effective prevention is to ground the AI in your verified business data rather than letting it draw on its general training memory. This technique is called retrieval-augmented generation, or RAG. Combined with a human review step on customer-facing output, it cuts hallucination risk dramatically for routine tasks.
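To make the idea concrete, here is a minimal sketch of grounding, with the retrieval step deliberately simplified: a real RAG pipeline would pull the relevant passage from a document store using embedding search, whereas this sketch hard-codes one verified policy. The SDK and model name are the same illustrative assumptions as above.

```python
from openai import OpenAI

client = OpenAI()

# Your verified source of truth. In a real RAG setup this passage
# would be retrieved from your document store for each question,
# not hard-coded.
POLICY = (
    "Returns are accepted within 14 days with a receipt. Refunds go "
    "to the original payment method. We do not offer cash refunds."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": (
            "Answer ONLY from the policy text below. If the answer is "
            "not in the text, say you cannot find it and offer to "
            "connect the customer to staff.\n\nPOLICY:\n" + POLICY
        )},
        {"role": "user", "content": "Do you offer bereavement refunds?"},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
# Grounded this way, the model answers from the policy text
# instead of inventing a policy of its own.
```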
Four practical safeguards:
1. Connect the AI to your real data. Instead of asking the model "What is our return policy?", connect it to your actual policy document. RAG makes the model quote the source rather than guess.
2. Set guardrails on what the AI can say. Define a list of approved topics, prohibited claims, and required disclaimers. Most enterprise AI platforms support this out of the box.
3. Require a human in the loop for high-stakes output. Marketing copy, legal-adjacent statements, refund decisions, and personalised quotes should be reviewed before a customer sees them.
4. Log every AI response. If something goes wrong, you need the conversation history to investigate, retrain, and defend. A minimal combination of safeguards 2, 3 and 4 is sketched below.
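Here is a minimal sketch of how safeguards 2, 3 and 4 can fit together in a few lines of Python. The phrase lists, file name, and function are all our illustrative assumptions; a production system would lean on your platform's built-in guardrail and audit features rather than regular expressions.

```python
import json
import re
from datetime import datetime, timezone

# Claims the bot must never make on its own authority
# (illustrative; tune to your business).
PROHIBITED = [r"full refund", r"% off", r"we guarantee", r"licensed by"]

# Topics that always need a human before the reply is sent.
NEEDS_REVIEW = [r"refund", r"discount", r"legal", r"licen[cs]e"]

def screen_and_log(user_msg, ai_reply, log_path="ai_audit.jsonl"):
    blocked = any(re.search(p, ai_reply, re.I) for p in PROHIBITED)
    review = any(re.search(p, user_msg + " " + ai_reply, re.I)
                 for p in NEEDS_REVIEW)

    # Safeguard 4: log every exchange so you can investigate,
    # retrain, and defend later.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user_msg,
            "ai": ai_reply,
            "blocked": blocked,
            "needs_human_review": review,
        }, ensure_ascii=False) + "\n")

    if blocked:   # safeguard 2: prohibited claim, never send
        return "Let me connect you with a member of our team."
    if review:    # safeguard 3: hold for human approval
        return None
    return ai_reply
```

A return value of None here means "queue for human approval"; anything else is considered safe to send.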
Frequently Asked Questions
Will newer AI models stop hallucinating? Hallucination rates are dropping with each generation, but they have not gone to zero. OpenAI, Anthropic, and Google all acknowledge that some hallucination is intrinsic to how large language models work today.
If I use Claude or ChatGPT for writing, am I at risk? Risk is low for purely creative content. Risk rises sharply when the AI cites statistics, names people, makes claims about regulations, or speaks on behalf of your brand to a customer.
Does using a paid enterprise plan reduce hallucinations? Paid plans often include features like data grounding, custom instructions, and document retrieval. These reduce hallucinations significantly. Free consumer tiers offer fewer safeguards.
Who is liable if the AI hallucinates? Based on the Air Canada precedent and similar 2026 rulings, the business deploying the AI is generally liable, not the AI vendor. Read your terms of service carefully.
Conclusion: Trust, But Verify
AI is one of the most useful tools a Hong Kong SME can deploy in 2026. It is also one of the most confidently wrong. The combination is dangerous when the output goes straight to a customer without a check. The fix is not to abandon AI. The fix is to treat AI output the way a smart manager treats a junior employee's first draft. Useful, fast, but not final until someone with judgement looks at it.
The businesses that thrive with AI over the next three years will be the ones that set up guardrails early, ground their AI in real data, and keep a human in the loop where it counts. The ones that do not will be writing apology emails after the next Air Canada moment.
UD has worked alongside Hong Kong SMEs for 28 years. We understand AI, and we understand you. With UD by your side, AI is no longer cold, impersonal technology.
Ready to Make Your AI Trustworthy?
Now that you understand how AI hallucinations happen and where they hit hardest, the next step is auditing your current AI use and building safeguards into your daily operations. The UD team will walk you through every step, from spotting risky AI workflows, to grounding your AI in verified data, to setting up human-in-the-loop review where it matters.