Linking Professionals to Facilitate Effective Deployment of Privacy-Enhancing Technologies and Artificial Intelligence for the Masses
The Research Coordination Network (RCN) event, organised by the White House Office of Science and Technology Policy, recently delved into the current advancements and future directions of Privacy Enhancing Technologies (PETs). These technologies are crucial for supporting ethical, fair, and representative AI systems.
Key Technologies Discussed
The event showcased several key PETs, including differential privacy, federated learning, homomorphic encryption, secure multiparty computation (SMPC), zero-knowledge proofs, and trusted execution environments (TEE).
- Differential privacy is a technique that protects individual data by adding statistical noise. It has been applied in various contexts, such as data deletion compatibility with GDPR.
- Federated learning enables multiple parties to collaboratively train AI models without sharing raw data, thereby minimising privacy risks associated with centralising personal data.
- Fully Homomorphic Encryption (FHE) was demonstrated in a cross-border financial fraud detection use case, allowing computations on encrypted data directly, preserving privacy while enabling compliance with broad regulatory frameworks.
- Secure Multiparty Computation (SMPC) allows joint data processing without revealing private inputs from each party, useful for scenarios requiring collective insights without data pooling.
- Zero-Knowledge Proofs (ZKP) and Trusted Execution Environments (TEE) provide cryptographically sound and hardware-assisted guarantees, respectively, for validating data or computations securely without exposing sensitive information. TEEs are already deployed in real-world cases like privacy-preserving tourism statistics from telecom data in Indonesia.
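To make the first of these concrete, here is a minimal, illustrative sketch of differential privacy for a counting query: because a count changes by at most 1 when any one person's record is added or removed (sensitivity 1), adding Laplace noise with scale 1/ε yields ε-differential privacy. This example is not from the event; the function names and data are hypothetical.

```python
import math
import random

def laplace_noise(scale, rng=random):
    # Draw one sample from Laplace(0, scale) via inverse-CDF sampling.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon):
    # Count matching records, then add Laplace noise calibrated to the
    # count query's sensitivity of 1, giving epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller ε means more noise and stronger privacy; in practice each released statistic spends part of an overall privacy budget.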
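The aggregation step at the heart of federated learning can likewise be sketched in a few lines. In FedAvg-style training, each client trains on its own data and sends only its model weights to the server, which combines them with a mean weighted by each client's dataset size. The sketch below shows only that server-side step, with hypothetical inputs; real systems add secure aggregation, clipping, and many communication rounds.

```python
def federated_average(client_models, client_sizes):
    # Combine locally trained weight vectors by a dataset-size-weighted
    # mean; the clients' raw training data never leaves their devices.
    total = sum(client_sizes)
    dim = len(client_models[0])
    return [
        sum(w[i] * n for w, n in zip(client_models, client_sizes)) / total
        for i in range(dim)
    ]
```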
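Computing on encrypted data can be illustrated with Paillier encryption, a classical *additively* homomorphic scheme (a simpler relative of the fully homomorphic encryption discussed at the event, which also supports multiplication). Multiplying two Paillier ciphertexts yields an encryption of the sum of the plaintexts. This toy uses tiny hard-coded primes purely for illustration and is not secure.

```python
from math import gcd
import random

# Toy Paillier cryptosystem with tiny, insecure, hard-coded primes.
P, Q = 17, 19
N = P * Q
N2 = N * N
G = N + 1
LAM = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)  # lcm(p-1, q-1)

def _L(x):
    return (x - 1) // N

MU = pow(_L(pow(G, LAM, N2)), -1, N)  # modular inverse (Python 3.8+)

def encrypt(m, rng=random):
    # Ciphertext g^m * r^n mod n^2 for a random r coprime to n.
    while True:
        r = rng.randrange(1, N)
        if gcd(r, N) == 1:
            break
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    return (_L(pow(c, LAM, N2)) * MU) % N

# Additive homomorphism: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
```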
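Finally, the idea behind SMPC can be sketched with additive secret sharing, one of its standard building blocks: each party splits its private value into random shares that sum to the value modulo a prime, the parties add the shares they hold, and only the combined total is ever reconstructed. The function names and values below are hypothetical; production protocols layer on authentication and malicious-party defences.

```python
import random

PRIME = 2_147_483_647  # prime modulus (2^31 - 1); shares live mod PRIME

def share(secret, n, rng=random):
    # Split an integer into n additive shares that sum to it mod PRIME;
    # any n-1 shares alone reveal nothing about the secret.
    shares = [rng.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME
```

Because sharing is additive, each party can sum the share it holds from every input, and reconstructing those per-party sums yields the total of all secrets without any party seeing another's input.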
Emphasis on Privacy-by-Design and Ethical AI
The event underscored the importance of embedding privacy-by-design and ethical AI principles throughout AI development lifecycles. This approach aims to align technological solutions with emerging data protection laws, such as the GDPR and the EU AI Act, to ensure fairness, transparency, and representative outcomes.
Shift Towards Responsible AI Stewardship
Discussions highlighted the deployment of PETs not only for data protection but also for bias mitigation and ethical compliance. This shift marks a move towards responsible AI stewardship that respects user rights and societal values.
Multi-Sector Collaboration and Challenges
Participants stressed the need for broadening the adoption of PETs through multi-sector collaboration and addressing legal, technical, and operational challenges, including harmonising privacy safeguards with data utility and compliance demands.
In summary, the RCN event highlighted how these technologies—often operating synergistically—are shaping the future of privacy-preserving AI systems that uphold ethical standards, fairness, and data representativeness, fostering trust and regulatory compliance in an evolving digital privacy landscape.
The Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics was launched by FPF on July 9th. Subject matter experts on PETs, and practitioners who use them, can contribute to the technologies' future use and regulation by signing up for the Expert or Regulator Sub-Groups. Regular meetings between the two main groups (Experts and Regulators) will provide substantive feedback on the RCN's progress.