The Office of the Privacy Commissioner for Personal Data (PCPD) in Hong Kong conducted compliance checks on 60 local organizations across a wide range of sectors to study their use of Artificial Intelligence (AI) and its impact on personal data privacy. The aim was to monitor compliance with the Personal Data (Privacy) Ordinance (“PDPO”), in particular with respect to the collection, use, and processing of personal data where AI systems are involved. The PCPD also evaluated the adoption of the 2024 “Artificial Intelligence: Model Personal Data Protection Framework” and the governance structures around AI usage within these organizations.
The findings revealed that 80% of the organizations used AI in their daily operations, a 5% increase from the previous study, with over half using three or more AI systems in areas such as customer service, marketing, administrative support, compliance, and research. Half of these organizations collected or used personal data through AI systems, and they provided sufficiently clear personal data collection statements to data subjects. Most organizations retained the data only for as long as necessary and employed robust data security measures, including access controls, encryption, anonymization, and penetration testing, to protect personal data collected via AI systems.
Privacy and risk management practices were also well implemented. Nearly all organizations using personal data for AI conducted tests to ensure system reliability and fairness, and a majority performed privacy impact assessments and risk evaluations. Most organizations had data breach response plans in place that included provisions for AI-related incidents, and a significant portion adopted a “human-in-the-loop” approach to maintain human oversight of AI decisions and mitigate errors. Many organizations had conducted, or planned to conduct, internal audits and independent assessments of their AI use to ensure ongoing compliance with AI strategies and internal policies.
On governance, the majority of organizations had set up AI governance structures, such as committees or appointed personnel, to oversee AI use. Training on AI, including its privacy risks, was provided to employees in 75% of the organizations reviewed. The PCPD did not find any contraventions of the PDPO during these checks but recommended continuous monitoring, privacy impact assessments, risk management, internal audits, human oversight, and engagement with stakeholders to enhance transparency and safety in AI use. The PCPD also encouraged organizations to make use of available resources, such as the Model Framework and the guidelines for generative AI, to promote responsible AI development and use.
In summary, the report demonstrates growing adoption of AI among Hong Kong organizations, accompanied by a strong emphasis on privacy and security compliance. The PCPD’s proactive compliance checks and recommendations help to safeguard personal data privacy while supporting innovation and healthy AI development through governance, risk management, and transparency measures.