Co-funded by the European Union

Hong Kong: Office of the Privacy Commissioner for Personal Data adopts Artificial Intelligence guidelines

The PCPD's release of the "Artificial Intelligence: Model Personal Data Protection Framework" is timely, given the rapid advancements in AI technology and its widespread applications.

It provides internationally recognised recommendations and best practices, ensuring responsible AI use and compliance with the Personal Data (Privacy) Ordinance (PDPO).

The Model Framework has received support from the Office of the Government Chief Information Officer of the Hong Kong Government and the Hong Kong Applied Science and Technology Research Institute. The PCPD also consulted a range of experts and stakeholders to ensure the Framework is both comprehensive and practical.

Key Aspects of the AI Data Protection Framework:

Strategic AI Governance: Employers should formulate a clear AI strategy and establish governance structures. Setting up an AI governance committee or similar body helps ensure that AI-related activities align with strategic goals and comply with legal and ethical standards. Training employees on AI and its implications for personal data privacy is crucial to mitigating risks and fostering a culture of responsibility around AI technologies.

Risk Assessment and Oversight: Conducting thorough risk assessments and implementing robust human oversight mechanisms are pivotal. Employers should adopt a risk-based management approach, evaluating the potential risks posed by AI systems and implementing proportionate mitigation measures. Depending on the level of risk, varying degrees of human oversight may be necessary, including regular audits, human decision-makers in critical processes, and clear protocols for handling AI-related issues.

Customisation and Continuous Management: The Framework underscores the need for meticulous data management and continuous monitoring of AI systems. This involves preparing and managing data, including personal data, for the customisation and use of AI systems. Rigorous testing and validation of AI models during customisation and implementation help ensure correct and secure functioning. Maintaining system security and protecting data integrity are ongoing responsibilities, with continuous monitoring to promptly identify and address any issues.

Effective Stakeholder Communication: Transparency and trust are critical when deploying AI technologies. The Model Framework advocates regular and effective communication with all stakeholders, including internal staff, AI suppliers, customers, and regulators. By keeping stakeholders informed and engaged, employers can build trust and foster a collaborative environment where concerns and insights are openly shared. This, in turn, enhances the organisation's reputation and aligns with the ethical principles of transparency and accountability.