

The PCPD Issues Alert over the Privacy Risks of OpenClaw and Agentic AI and Reminds Organisations and the Public to Use AI Safely


Introduction


Hong Kong's Office of the Privacy Commissioner for Personal Data issued an alert raising concerns over the use of agentic AI tools and their potential data processing practices when such tools are allowed to access certain files and personal information. The PCPD recommended that users "continuously assess the risks involved in using agentic AI and watch out for any request by the agentic AI to execute high-risk operations. If the decisions made by agentic AI are likely to have a significant impact on individuals, users should consider adopting a 'human-in-the-loop' approach to retain the final control in decision-making processes."

PR Details


The Office of the Privacy Commissioner for Personal Data (PCPD) noted that the security risks related to the use of OpenClaw and other agentic artificial intelligence (AI) have provoked discussion recently. The PCPD is also concerned about the matter and reminds organisations and members of the public that before deploying or using OpenClaw and other agentic AI, they should pay attention to and understand the personal data privacy and security risks involved to avoid personal data breaches, malicious system takeovers and cybersecurity threats. They are also reminded to adopt adequate and effective security measures to safeguard personal data privacy.
The PCPD pointed out that, compared to AI chatbots, which are generally used for text replies, content summaries or content generation, agentic AI is more versatile in terms of functionality. Agentic AI is usually a tool with high-level access that can be deployed on a local device or server. It can read and write local files, allocate system resources, call external services, and even act autonomously on behalf of the user to execute multi-step tasks according to a pre-defined workflow, such as handling emails, making restaurant reservations and settling payments. These processes do not require the real-time involvement of users.
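The kind of pre-defined, multi-step workflow described above can be sketched in a few lines. This is a minimal illustration only; every function and step name below is a hypothetical placeholder, not any real agent's API:

```python
# Minimal sketch of a pre-defined agentic workflow (all names hypothetical).
# Each step runs without the user's real-time involvement, which is why an
# agent needs, and should be granted, only carefully scoped access.

def read_unread_emails():
    # Placeholder: a real agent would call a mail service here.
    return ["Dinner on Friday?"]

def book_restaurant(request):
    return f"booked table for: {request}"

def settle_payment(booking):
    return f"paid deposit for: {booking}"

def run_workflow():
    results = []
    for email in read_unread_emails():           # step 1: handle emails
        booking = book_restaurant(email)         # step 2: make a reservation
        results.append(settle_payment(booking))  # step 3: settle payment
    return results

print(run_workflow())
```

Because all three steps execute autonomously once triggered, a single misread instruction can cascade through the whole chain, which is the core of the privacy risks the PCPD describes below.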
Therefore, from the perspective of protecting personal data privacy, agentic AI generally poses higher risks than ordinary AI chatbots. For instance:
The default access rights of agentic AI are generally higher than those of AI chatbots, allowing it to access files, emails and account credentials on devices, as well as content saved in browsers. If the relevant access rights are not stringently restricted, the agentic AI may access a vast amount of personal data of users or other individuals, increasing the risks of unauthorised access to, or reproduction of, personal data by third parties, and even data breaches. At the same time, agentic AI may also misinterpret users' commands and mistakenly delete their important data, such as deleting all of a user's email records;
If there are any vulnerabilities in the system design or safety controls of these agentic AI tools, which hold high-level access and connect to multiple systems and data sources, they will pose significant risks to personal data privacy and data security as a whole; and
If the agentic AI allows users to install Plugins or Skills, and some of those Plugins or Skills have not undergone rigorous security review, malicious code might be embedded in them. Hackers may then exploit the vulnerabilities to gain unauthorised access and take over user accounts, or further take control of the entire computer system, leading to leakage of personal data or other sensitive data.
The PCPD suggests that when collecting, using and processing personal data with agentic AI, organisations and members of the public should pay particular attention to the following:
Grant the minimum access rights to agentic AI: Users should carefully consider the nature and sensitivity of the personal data involved. Do not provide personal data to agentic AI arbitrarily, especially confidential or sensitive personal data such as identification documents, bank account numbers and passwords. Only the minimum access rights necessary to complete the tasks should be granted to agentic AI; avoid granting it administrator account rights;
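The least-privilege point above can be illustrated with a small path-allowlist check. This is a sketch under stated assumptions: the directory name and helper function are made up for illustration and are not part of any real agent framework:

```python
import os

# Illustrative least-privilege gate: the agent may touch files only inside
# directories the user has explicitly granted. "agent_workspace" is a
# hypothetical example directory.
ALLOWED_DIRS = [os.path.abspath("agent_workspace")]

def check_access(path):
    """Permit access only inside explicitly granted directories."""
    real = os.path.abspath(path)  # normalises ".." traversal attempts
    return any(real == d or real.startswith(d + os.sep) for d in ALLOWED_DIRS)

assert check_access("agent_workspace/notes.txt")            # inside the grant
assert not check_access("/etc/passwd")                      # outside the grant
assert not check_access("agent_workspace/../.ssh/id_rsa")   # traversal blocked
```

The design choice here mirrors the PCPD's advice: access is denied by default, and each grant is an explicit, narrow exception rather than a device-wide administrator right.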
Use the latest official version: Users should only download the latest versions of agentic AI from official channels and should avoid using third-party versions or outdated versions to reduce the risks of data breach incidents arising from unpatched system vulnerabilities;
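A deployment script could flag an outdated installation before the agent is allowed to run; a minimal sketch, assuming simple dotted version strings (the versions below are hypothetical):

```python
# Illustrative version check: refuse to proceed when the installed agent
# version lags the latest official release, since unpatched vulnerabilities
# are a stated data-breach risk. Version strings here are made up.
def parse(version):
    return tuple(int(part) for part in version.split("."))

def is_outdated(installed, latest_official):
    return parse(installed) < parse(latest_official)

print(is_outdated("1.2.0", "1.4.1"))  # True: update before use
print(is_outdated("1.4.1", "1.4.1"))  # False: already current
```

In practice the "latest official" value should itself come from the vendor's official channel, never from a third-party mirror.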
Adopt adequate measures to ensure system security and data security, such as separating the runtime environment of agentic AI from local devices or servers, strengthening network controls, strictly managing Internet-facing surfaces, lowering access rights and establishing effective protection mechanisms;
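One concrete form of "separating the runtime environment" is running agent code in a child process with its own working directory and a stripped-down environment, so inherited secrets such as API keys never reach it. This is only a sketch of the idea; real deployments would layer containers, network controls and reduced OS privileges on top, and the variable name below is hypothetical:

```python
import subprocess
import sys
import tempfile

def run_isolated(code):
    """Run untrusted agent code in a fresh directory with a minimal env."""
    workdir = tempfile.mkdtemp(prefix="agent_")
    return subprocess.run(
        [sys.executable, "-c", code],
        cwd=workdir,                     # separate runtime directory
        env={"PATH": "/usr/bin:/bin"},   # strip inherited credentials
        capture_output=True,
        text=True,
        timeout=10,
    )

# The child cannot see secrets held by the parent process:
result = run_isolated("import os; print(os.environ.get('AWS_SECRET_KEY'))")
print(result.stdout.strip())
```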
Install and use Plugins or Skills with caution: Verify that the relevant programmes are official versions to ensure their security; review the programmes to check whether malicious code is embedded, and refrain from using them if their security cannot be ascertained; and
Conduct continuous risk assessments: Users should continuously assess the risks involved in using agentic AI and watch out for any request by the agentic AI to execute high-risk operations. If the decisions made by agentic AI are likely to have a significant impact on individuals, users should consider adopting a "human-in-the-loop" approach to retain final control over decision-making processes, such as the transmission of data and modification of system configurations.
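The "human-in-the-loop" recommendation above can be sketched as a simple approval gate: low-risk steps run autonomously, while the high-risk operations the PCPD names are held until a human explicitly approves them. The action names below are illustrative placeholders:

```python
# Sketch of a human-in-the-loop gate (action names are hypothetical).
# High-risk operations, such as transmitting data or changing system
# configuration, require explicit human approval before execution.
HIGH_RISK = {"transmit_data", "modify_system_config", "delete_files"}

def execute(action, approved_by_human=False):
    if action in HIGH_RISK and not approved_by_human:
        return f"BLOCKED: '{action}' needs human approval"
    return f"executed: {action}"

print(execute("summarise_email"))                        # low risk: autonomous
print(execute("transmit_data"))                          # held for review
print(execute("transmit_data", approved_by_human=True))  # human retains control
```

The gate keeps the final decision with the user, which is exactly the control the PCPD says should be retained when decisions may significantly affect individuals.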
Organisations can refer to the guidance titled "Artificial Intelligence: Model Personal Data Protection Framework" (Model Framework) published by the PCPD when collecting, using and processing personal data with AI tools. The Model Framework reflects prevailing international norms and best practices, including recommendations on formulating policies and frameworks for AI governance, with a view to enhancing the protection of personal data privacy and complying with the relevant requirements of the Personal Data (Privacy) Ordinance.

Artificial Intelligence: Model Personal Data Protection Framework:
https://www.pcpd.org.hk/english/resources_centre/publications/files/ai_protection_framework.pdf
Chinese version: https://www.pcpd.org.hk/tc_chi/resources_centre/publications/files/ai_protection_framework.pdf


References
PCPD PR:
English:
https://www.pcpd.org.hk/english/news_events/media_statements/press_20260316.html
Chinese:
https://www.pcpd.org.hk/sc_chi/news_events/media_statements/press_20260316.html




END
If you would like to join a group to discuss data compliance or legal English, please leave a comment below the article or message this official account. Replies to messages can sometimes be slow; thank you for your understanding.
You are welcome to like, comment and share~~~
夜雨聆风