The Data Point That Should Change How You Think About Your AI Budget
According to IBM's own CEO study, only 25% of enterprise AI initiatives deliver expected ROI — and just 16% have achieved enterprise-wide scale. Writer's 2026 Enterprise AI Survey, covering executives across industries, found that 97% of executives reported some benefit from AI, but only 29% saw significant ROI from generative AI and only 23% from AI agents. The technology is being bought. The returns are not being realised at scale. At IBM Think 2026 on May 5, IBM's response was to propose that the problem is not the technology — it is the absence of an AI Operating Model.
This article explains what that model is, what IBM announced at Think 2026 to support it, and how enterprise leaders in Hong Kong should use it as a diagnostic framework for their own AI programmes.
What Is an AI Operating Model?
An AI Operating Model is an integrated architecture that connects four enterprise systems — Agents, Data, Automation, and Hybrid infrastructure — to allow AI to operate consistently, accountably, and at scale across a business. The term comes from IBM's Think 2026 announcement, where CEO Arvind Krishna argued that deploying AI tools without this underlying model is why most enterprise AI programmes produce pilots rather than transformation.
The core insight is structural. Most organisations deploy AI capability — a large language model here, a document processing workflow there, a customer-facing chatbot somewhere else — without a shared data layer, without consistent governance, and without the automation infrastructure to connect outputs to business processes. The AI Operating Model treats these four components as interdependent, not optional add-ons to each individual AI project.
Think of it the way a VP of Operations would think about quality management: a well-designed quality system is not a layer you add on top of manufacturing. It is built into every step of the production process from the start. The AI Operating Model applies the same logic to enterprise AI deployment.
Why Do Most Enterprise AI Programmes Fail to Scale?
The failure pattern is consistent across industries. An enterprise deploys a promising AI pilot in one department. The results are encouraging. Then the programme stalls when the organisation attempts to replicate it across business units. The reasons are almost always the same: data is siloed and inaccessible, governance is undefined, workflows are manual, and the infrastructure is inconsistent across environments.
Deloitte's 2026 State of AI in the Enterprise report found that only 25% of respondents had moved 40% or more of their AI experiments into production, while 54% expected to reach that level within three to six months. The gap between expectation and execution has been persistent for two years. IBM's argument at Think 2026 is that this gap will not close by deploying better models. It will close by building better operating infrastructure.
The specific problem IBM identifies is what it calls the AI divide — a widening performance gap between enterprises that have invested in the full operating model and those that continue to treat AI as a collection of individual tools. According to IBM, organisations without a coherent operating model are not just slower to deploy AI; they are increasingly unable to make AI work reliably enough to scale into regulated or mission-critical workflows at all.
What Are the Four Pillars of the IBM AI Operating Model?
The first pillar is Agents — coordinated AI that executes and adapts across business processes. IBM's view is that AI agents are the execution layer of the operating model. They are not standalone automation tools; they are the interface between the enterprise's AI intelligence and its operational reality. The next generation of IBM watsonx Orchestrate, announced at Think 2026 as moving to private preview, evolves this pillar into an agentic control plane that can deploy agents from any source — IBM, Anthropic, OpenAI, custom builds — with consistent policy enforcement and accountability.
The second pillar is Data — real-time, connected information that gives all parts of the organisation a shared view of what is happening. IBM announced Context in watsonx.data (also moving to private preview), which extends its data platform with an open, federated context layer that applies semantic meaning to enterprise data, enforces governance at runtime, and makes AI decisions explainable. For CIOs managing legacy data architectures, this is the component that determines whether AI can access the data it needs reliably — not as a one-time data migration project, but as a continuous, governed connection.
The third pillar is Automation — end-to-end infrastructure and automated workflows that scale AI outputs across business processes. Without this pillar, AI produces insights that humans must manually act on. The automation layer connects AI outputs directly to business systems: ERP entries, approval workflows, communications, reporting. IBM Concert, announced at Think 2026, targets this pillar specifically — intelligent operations that close the loop between AI analysis and operational execution.
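To make the "close the loop" idea concrete, here is a minimal dispatcher sketch in Python. The handler names and the routing rule are illustrative assumptions only; this is not IBM Concert's API, just the general pattern of sending an AI output straight to the business system it names instead of leaving it for manual action.

```python
# Hypothetical sketch: route an AI output directly to a system of record.
# Handler names ("erp_entry", "approval", "report") are assumptions.
from typing import Callable

HANDLERS: dict[str, Callable[[dict], str]] = {
    "erp_entry": lambda o: f"ERP posted: {o['summary']}",
    "approval": lambda o: f"Approval requested: {o['summary']}",
    "report": lambda o: f"Report queued: {o['summary']}",
}

def route(ai_output: dict) -> str:
    """Close the loop: dispatch the output to the system its type names,
    falling back to human review only when no handler exists."""
    handler = HANDLERS.get(ai_output.get("type", ""))
    if handler is None:
        return f"Manual review needed: {ai_output.get('summary', '')}"
    return handler(ai_output)
```

The fallback branch is the point: every output that lands in "manual review" is exactly the ROI leakage this pillar is meant to eliminate.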
The fourth pillar is Hybrid — operational independence for sovereignty, governance, and security. IBM Sovereign Core, announced at Think 2026, addresses a concern that is acutely relevant to Hong Kong enterprises: the ability to run AI consistently and with control across on-premises, private cloud, and public cloud environments, without being locked into a single vendor's infrastructure or jurisdiction. For regulated industries — financial services, healthcare administration, legal — the hybrid pillar is not optional.
What Is watsonx Orchestrate and Why Does It Matter for Multi-Vendor Enterprise AI?
watsonx Orchestrate is IBM's agentic control plane for multi-agent enterprise AI. In its next-generation form announced at Think 2026, it allows organisations to deploy agents built on any AI platform — IBM models, Claude from Anthropic, GPT from OpenAI, or internally built models — with consistent policy enforcement, governance, and accountability regardless of which underlying model is running.
For enterprise IT Directors managing AI deployments across multiple business units, this has a specific practical implication. Today, most multi-vendor AI deployments have inconsistent governance: the Claude-based deployment has one set of guardrails, the GPT-based deployment has another, and the internally built tool has a third. watsonx Orchestrate proposes to solve this by acting as a unified governance and orchestration layer above the individual AI systems.
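As a concrete illustration of what a unified governance layer does, the sketch below applies one policy set to agents from different vendors and logs every decision. All names here (GovernedAgent, Policy, the stubbed backends) are hypothetical; this is not the watsonx Orchestrate API, which remains in private preview.

```python
# Illustrative only: one policy set enforced uniformly across agents,
# regardless of which vendor's model sits behind each one.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Policy:
    """A single guardrail applied to every agent call."""
    name: str
    check: Callable[[str], bool]  # True if the input is allowed

@dataclass
class GovernedAgent:
    vendor: str                    # "ibm", "anthropic", "openai", "custom"
    backend: Callable[[str], str]  # stand-in for the underlying model call
    policies: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def run(self, prompt: str) -> str:
        for p in self.policies:
            if not p.check(prompt):
                self.audit_log.append(
                    {"vendor": self.vendor, "policy": p.name, "allowed": False})
                raise PermissionError(f"Blocked by policy: {p.name}")
        result = self.backend(prompt)
        self.audit_log.append(
            {"vendor": self.vendor, "policy": None, "allowed": True})
        return result

# The same guardrail is attached to every agent, whatever the vendor.
no_pii = Policy("no_pii", lambda text: "hkid" not in text.lower())
agents = [
    GovernedAgent("anthropic", lambda p: f"[claude] {p}", [no_pii]),
    GovernedAgent("openai", lambda p: f"[gpt] {p}", [no_pii]),
]
```

The design point is that policies and audit logging live in the control plane, not in each deployment, so governance stops varying by vendor.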
The private preview status means enterprise teams cannot deploy watsonx Orchestrate at full capability today. But it signals the direction of enterprise AI infrastructure investment: away from individual model procurement and toward platform-level orchestration and governance. IT Directors evaluating their 2026 AI infrastructure roadmap should treat this as a significant signal about where enterprise AI vendor competition is heading.
What Is IBM Sovereign Core and Why Does It Matter for Hong Kong Enterprises?
IBM Sovereign Core addresses the operational independence dimension of enterprise AI: the ability to run AI with consistent controls across environments regardless of where the compute or data resides. For Hong Kong enterprises, this has three specific dimensions worth unpacking.
First, data residency. Hong Kong's Personal Data (Privacy) Ordinance (PDPO), administered by the Office of the Privacy Commissioner for Personal Data, together with sector-specific requirements from the HKMA for financial institutions, requires organisations to maintain control over where personal data is processed and stored. An AI operating model that runs entirely in a single global cloud provider's infrastructure may create residency compliance risks that a hybrid architecture avoids.
Second, vendor dependency risk. Organisations that have committed their entire AI infrastructure to a single provider face operational concentration risk — if the provider changes pricing, terms, or service availability, the organisation has limited options. A sovereign hybrid architecture preserves the ability to migrate workloads or run parallel environments.
Third, audit and explainability requirements. Regulated industries in Hong Kong require the ability to explain AI-assisted decisions to regulators. An architecture that gives the organisation complete visibility into how AI decisions were made — and the ability to produce that audit trail on demand — is a governance requirement, not an optional feature.
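To illustrate what "produce that audit trail on demand" can mean in practice, here is a minimal, assumed sketch of a tamper-evident decision log: each entry hashes its predecessor, so deletions or edits become detectable. The field names are illustrative assumptions, not a regulatory schema or any IBM product format.

```python
# Hypothetical sketch: an append-only, hash-chained log of AI-assisted
# decisions of the kind a regulator might ask to inspect.
import hashlib
import json
from datetime import datetime, timezone

def record_decision(log: list, *, model: str, inputs: dict,
                    output: str, reviewer: str) -> dict:
    """Append one entry; each entry embeds the hash of the previous one,
    making the chain tamper-evident."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "inputs": inputs,
        "output": output,
        "reviewer": reviewer,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry
```

A real implementation would add access controls and durable storage, but the structural requirement is the same: the explanation of each AI-assisted decision must be reproducible on demand, not reconstructed after the fact.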
How to Use the AI Operating Model as a Diagnostic Framework
The most immediate practical value of the IBM AI Operating Model framework is as a diagnostic tool for an AI programme review. Apply it as four questions to ask about your current enterprise AI posture.
Agents pillar: Do your AI agents operate with consistent governance regardless of which model is running? Or does governance vary by deployment? If the answer is "it varies," you have an Agents governance gap.
Data pillar: Can your AI systems access the enterprise data they need in real time, with semantic understanding, and with runtime governance? Or does each AI project require its own data integration project? If each project needs its own data pipeline, you have a Data architecture gap.
Automation pillar: Do AI outputs connect directly to business processes and systems of record? Or do humans need to manually interpret and act on AI recommendations? If the answer is largely manual, you have an Automation integration gap that is capping the ROI of every AI investment you make.
Hybrid pillar: Can you run AI consistently across your on-premises, private cloud, and public cloud environments with full governance? Or is your AI infrastructure locked to a single environment? If locked, you face both sovereignty risk and vendor dependency risk.
Organisations with gaps across all four pillars are not positioned to scale AI. The AI Operating Model does not prescribe a single technology solution — it provides a framework for identifying where the scaling blockers actually are.
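The four questions above can be encoded as a simple self-assessment sketch. The pillar labels mirror the gaps named in this section; the yes/no scoring is purely illustrative and carries no weighting from IBM's framework.

```python
# Illustrative self-assessment: answer each pillar question True ("yes")
# or False, and get back the list of gaps this section names.
GAP_LABELS = {
    "agents": "Agents governance gap",
    "data": "Data architecture gap",
    "automation": "Automation integration gap",
    "hybrid": "sovereignty and vendor dependency risk",
}

def diagnose(answers: dict) -> list:
    """Return the named gap for every pillar not answered 'yes'."""
    return [label for pillar, label in GAP_LABELS.items()
            if not answers.get(pillar, False)]

gaps = diagnose({"agents": False, "data": True,
                 "automation": False, "hybrid": True})
# gaps -> ["Agents governance gap", "Automation integration gap"]
```

Even at this toy level, the exercise forces the useful conversation: which pillar answers are honestly "no", and in what order the gaps should be closed.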
The Strategic Takeaway for Enterprise Leaders
IBM Think 2026 made the argument, backed by its own research and corroborated by Deloitte's findings, that enterprise AI failure is a structural problem — not a technology problem. Most organisations are spending on AI capability without building the operating infrastructure that allows that capability to scale.
The AI Operating Model framework — Agents, Data, Automation, Hybrid — is useful precisely because it is platform-agnostic. It does not require IBM's products to implement. It is a checklist for where the structural gaps are in any enterprise AI programme. The question for Hong Kong enterprise leaders is not whether the framework is correct. It almost certainly is, because the failure patterns it describes are universally recognisable. The question is which gaps to address first given your organisation's current maturity, regulatory environment, and available budget.
We understand AI, and we understand you. UD has partnered with Hong Kong enterprises for 28 years, helping them find a pragmatic way forward through every technology cycle: from diagnosis to architecture, from pilot to production deployment.
Ready to diagnose your organisation's AI Operating Model gaps? The UD team will walk you through every step — from readiness assessment to architecture planning and production deployment.