The PDPO Question Your Legal Team Is Already Asking About Your AI Deployment
A Hong Kong financial services firm's head of operations approved an AI deployment six months ago. The system processes client communications to flag service issues and generate case summaries. Nobody checked whether the personal data involved was being minimised. Nobody documented which AI model was processing that data or where. Last month, a client requested access to their data records under the PDPO. The legal team discovered the AI system had no audit trail for what personal data it processed or retained. This is not a hypothetical scenario. According to the PCPD's May 2025 compliance survey, 80 percent of Hong Kong organisations are now using AI in daily operations. Fewer than a third have a documented AI governance policy in place.
The PDPO — Hong Kong's Personal Data (Privacy) Ordinance — has governed the processing of personal data since it came into force in 1996. What has changed is that the PCPD now has explicit, detailed guidance on how PDPO obligations apply to AI specifically, and the regulator has signalled that 2026 will bring more active enforcement, now that organisations have been put on notice of the requirements.
What Is the PCPD AI Model Framework and What Does It Require?
The PCPD's "Artificial Intelligence: Model Personal Data Protection Framework," published on 11 June 2024, is the first AI-focused personal data protection framework released by any Asia-Pacific privacy authority. It is a guidance document rather than binding legislation, but practitioners at Mayer Brown and Clifford Chance have noted that demonstrating compliance with it constitutes strong evidence of PDPO compliance when organisations deploy AI involving personal data.
The framework applies to any organisation that procures, implements, or uses AI systems that involve personal data — which, in practice, means any enterprise using AI on customer records, employee data, communications, or any other information that identifies or could identify an individual.
The framework is organised around the AI lifecycle: the procurement phase, the implementation phase, and the ongoing operational phase. Each phase has specific governance expectations that the PCPD expects organisations to address before and during AI deployment, not as an afterthought once the system is already live.
What Are the Core PDPO Obligations That Apply to Enterprise AI?
Hong Kong's PDPO contains six Data Protection Principles (DPPs). Four of them create direct obligations for enterprise AI deployments in ways that are frequently misunderstood or underestimated.
DPP 1 — Purpose and collection limitation: Personal data may only be collected for a lawful purpose directly related to the organisation's functions, and the data collected must be necessary for, and not excessive in relation to, that purpose. When AI systems are deployed to analyse customer data, the specific analytical purpose must be documented and the data scope limited to what is genuinely necessary. Feeding a broad dataset into an AI model because "more data might be useful" breaches this principle: "adequate but not excessive" is the statutory test, and a broad just-in-case dataset fails it.
DPP 2 — Accuracy and retention: Personal data must be accurate, kept up to date, and kept no longer than is necessary for the purpose for which it is used. The accuracy limb matters because an AI system's outputs are only as reliable as the underlying data: the PCPD framework specifically requires organisations to validate and test AI systems to ensure they do not produce outputs based on inaccurate or stale personal data, which maps directly to the knowledge base maintenance requirement in any production RAG or AI deployment. The retention limb poses a question many deployments cannot answer: how long does the AI system retain the personal data it processes, where, and under whose control? Many off-the-shelf AI tools process and temporarily cache data in ways that create unintended retention obligations.
DPP 3 — Use limitation: Personal data must not be used for any purpose other than the one for which it was collected (or a directly related purpose) without the data subject's prescribed consent. Repurposing customer records collected for service delivery as AI training data or analytics input is a new use, and it needs its own legal basis before the first record reaches the model.
DPP 6 — Data access and correction: Individuals have the right to request access to their personal data and to have inaccurate data corrected. When AI systems have processed someone's personal data to generate outputs (a credit assessment, a service recommendation, a performance evaluation), the organisation must be able to fulfil that access request and demonstrate which data was used. This requires audit trails that many AI deployments currently do not have.
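As a concrete illustration of what such an audit trail needs to capture, the sketch below logs one record per AI processing event, with enough detail to answer a later access request. All names and fields here are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIProcessingEvent:
    """One auditable record of an AI system touching personal data."""
    subject_id: str         # pseudonymous identifier for the data subject
    system: str             # which AI system processed the data
    purpose: str            # the documented purpose of the processing
    fields_used: list[str]  # exactly which personal data fields were supplied
    timestamp: str          # when the processing happened (UTC, ISO 8601)

AUDIT_LOG: list[dict] = []  # in production: an append-only store, not an in-memory list

def record_event(subject_id: str, system: str, purpose: str, fields: list[str]) -> None:
    AUDIT_LOG.append(asdict(AIProcessingEvent(
        subject_id, system, purpose, fields,
        datetime.now(timezone.utc).isoformat(),
    )))

def access_request(subject_id: str) -> str:
    """Answer an access request: what did our AI systems do with this person's data?"""
    return json.dumps([e for e in AUDIT_LOG if e["subject_id"] == subject_id], indent=2)

record_event("C-1042", "case-summary-llm", "service issue triage",
             ["name", "email", "case_text"])
print(access_request("C-1042"))
```

The point is not the storage mechanism but the discipline: if every AI call that touches personal data writes a record like this, the access request that blindsided the firm in the opening scenario becomes a query, not a crisis.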
The Five AI Governance Measures Hong Kong Enterprises Should Implement Now
Based on the PCPD framework and legal analysis from Clifford Chance and A&O Shearman, the following five measures represent the minimum viable AI governance posture for any Hong Kong enterprise deploying AI on personal data in 2026.
Measure 1 — Establish an AI governance policy with board-level sign-off: The PCPD framework requires organisations to have an organisation-level AI strategy that specifies which purposes AI may be used for, who is responsible for AI deployment decisions, and how AI use is monitored. This policy must be documented and approved at a governance level that can be demonstrated to regulators. A departmental memo is not sufficient.
Measure 2 — Conduct a Data Protection Impact Assessment (DPIA) before any AI deployment involving personal data: The PCPD explicitly recommends DPIA as a preparatory requirement for AI that processes personal data. The DPIA identifies what data is being used, what risk the AI deployment creates, what mitigation measures are in place, and what residual risk remains. Running a DPIA before deployment is significantly less expensive than addressing a PCPD inquiry after.
Measure 3 — Implement data minimisation by design: Every AI system that processes personal data should be configured to use the minimum necessary personal data for its stated purpose. This means actively reviewing what data is fed into AI models and stripping out fields that are not necessary for the AI's function. It also means reviewing whether AI outputs that contain derived personal data are appropriately secured and retained.
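One way to enforce this in code is an explicit allowlist of fields per documented purpose, so that anything the AI's stated function does not need never reaches the model. A sketch, with hypothetical purposes and field names:

```python
# Allowlist of personal data fields per documented AI purpose.
# Anything not listed is stripped before the record reaches the model.
PURPOSE_ALLOWLISTS: dict[str, set[str]] = {
    "service_issue_triage": {"case_text", "product", "region"},
    "case_summary": {"case_text", "case_history"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Return only the fields the stated purpose genuinely requires."""
    try:
        allowed = PURPOSE_ALLOWLISTS[purpose]
    except KeyError:
        # Refuse to process rather than default to full data
        raise ValueError(f"No documented allowlist for purpose {purpose!r}")
    return {k: v for k, v in record.items() if k in allowed}

full_record = {
    "name": "…", "hkid": "…", "email": "…",
    "case_text": "Cannot access statements",
    "product": "retail banking", "region": "HK",
}
print(minimise(full_record, "service_issue_triage"))
# name, hkid, and email are stripped: triaging the issue does not need them
```

Note the failure mode: an undocumented purpose raises an error instead of silently passing the full record through, which is the "by design" part of data minimisation by design.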
Measure 4 — Maintain an AI processing registry: Enterprises should maintain a documented record of every AI system that processes personal data, including what data it uses, where it is hosted, which vendor supplies it, what retention periods apply, and who is the internal owner. This registry is the foundation of any response to a PCPD inquiry, a data subject access request, or an internal audit.
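The registry need not be elaborate; what matters is that every AI system has one complete, queryable entry. A minimal sketch follows — the field choices and the vendor name are illustrative assumptions, not a PCPD-mandated schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    """One entry in the AI processing registry."""
    system_name: str
    vendor: str
    hosting_jurisdiction: str
    personal_data_categories: list[str]  # e.g. contact details, case content
    retention_period_days: int           # how long processed data is kept
    internal_owner: str                  # who answers the PCPD's questions

REGISTRY = [
    AISystemEntry(
        system_name="case-summary-llm",
        vendor="ExampleVendor Ltd",  # hypothetical vendor
        hosting_jurisdiction="Hong Kong",
        personal_data_categories=["contact details", "case content"],
        retention_period_days=30,
        internal_owner="Head of Client Operations",
    ),
]

def systems_processing(category: str) -> list[str]:
    """First question in any inquiry or access request: which systems touch this data?"""
    return [e.system_name for e in REGISTRY if category in e.personal_data_categories]

print(systems_processing("case content"))
```

A spreadsheet with the same columns serves the same purpose; the structured version simply makes the registry queryable when a response deadline is running.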
Measure 5 — Establish vendor due diligence for AI procurement: The PCPD framework places obligations on organisations that procure AI from third-party vendors, not just those that build their own. Before signing an AI vendor contract, enterprises should verify how the vendor handles personal data, whether the vendor's infrastructure is hosted in jurisdictions with adequate data protection, what happens to data if the contract ends, and whether the vendor can support DPIA documentation and audit requests. These questions belong in procurement contracts, not as a post-signature afterthought.
What About Agentic AI? The PCPD's Expanded Guidance
The use of agentic AI — systems that can autonomously execute multi-step tasks across multiple data sources without human intervention for each action — raises additional PDPO considerations that the PCPD has begun to address explicitly. Freshfields' 2025 analysis of the PCPD's evolving guidance notes that agentic AI creates heightened risks around data minimisation and purpose limitation, because autonomous agents often access broader data contexts than a human reviewer would in a manual workflow.
For enterprise leaders deploying or evaluating agentic AI in 2026, the governance principle is clear: agentic systems that access personal data must be constrained to the minimum data scope necessary for each specific task, with access controls that prevent the agent from accessing personal data outside its defined operational scope. Governance frameworks written for conventional AI deployments may need to be revisited when agentic capabilities are introduced.
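In practice, that constraint can be enforced by routing every agent data access through a per-task scope check. The sketch below uses hypothetical task and data source names; the pattern, not the specifics, is the point:

```python
# Per-task data scopes for an agentic system: the agent may only read the
# sources and fields its current task is defined to need (minimum necessary).
TASK_SCOPES: dict[str, dict[str, set[str]]] = {
    "summarise_case": {"crm_cases": {"case_text", "status"}},
    "draft_reply": {"crm_cases": {"case_text"}, "templates": {"body"}},
}

class ScopeViolation(Exception):
    pass

def agent_read(task: str, source: str, fields: list[str], fetch) -> dict:
    """Gate every agent read against the current task's declared scope."""
    allowed = TASK_SCOPES.get(task, {}).get(source, set())
    denied = [f for f in fields if f not in allowed]
    if denied:
        raise ScopeViolation(f"task {task!r} may not read {denied} from {source!r}")
    return {f: fetch(source, f) for f in fields}

# Stand-in for a real data connector
demo = lambda source, field: f"<{source}.{field}>"

print(agent_read("summarise_case", "crm_cases", ["case_text"], demo))
# An attempt to read, say, customer emails during summarisation raises ScopeViolation
```

The design choice worth noting is the default: an unknown task or source yields an empty scope, so the agent is denied by default rather than granted by default.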
How to Present AI Governance to Your Board in 2026
For department heads and IT leaders who need to secure board-level endorsement for an AI governance programme, the framing that works in Hong Kong's regulatory environment in 2026 is straightforward: this is not compliance overhead. It is risk management.
The PCPD has explicitly stated that 2026 will involve more active enforcement. The enforcement tools available under the PDPO include investigation, enforcement notices requiring specific compliance actions, and in serious cases, criminal prosecution. Beyond regulatory risk, the reputational and commercial damage from a public PCPD enforcement action involving AI and personal data is significant in a market as relationship-driven as Hong Kong's.
Presenting AI governance as a risk management investment, with a clear framework, a documented DPIA process, and vendor due diligence standards, gives the board the control structure it needs to approve expanded AI investment with confidence. The alternative — deploying AI at scale without governance and addressing compliance after a complaint is filed — is the more expensive path by a considerable margin.
Building AI Governance That Lasts: The UD Approach
AI governance in Hong Kong is not a one-time compliance exercise. As AI capabilities expand, as agentic systems become mainstream, and as the PCPD's enforcement posture evolves, the governance frameworks enterprises build today must be designed to adapt. The organisations that invest in a structured, documented AI governance approach in 2026 are building an institutional capability, not just satisfying a checklist.
UD has been working with Hong Kong enterprises on technology governance questions for 28 years. The intersection of AI capability and regulatory obligation is exactly the kind of territory where local knowledge, long-term relationships, and enterprise technology depth matter. With UD, AI works for you — not the other way around. That means AI that is not just capable, but compliant, governable, and built on a foundation your legal team and your board can stand behind.
AI governance does not need to be built from scratch on your own. UD's team will walk you through every step — from PDPO gap assessment and DPIA support to AI governance policy development and vendor due diligence frameworks, with 28 years of Hong Kong enterprise experience informing every recommendation.