European Data Protection Board adopts opinion on AI Models 

Var Shankar
2 Jan 2025

On December 17, 2024, the European Data Protection Board (EDPB) adopted EDPB Opinion 28/2024 regarding the use of personal data to train and deploy AI models. The opinion responds to a request from Ireland’s data protection authority (DPA).

Large model developers, including Meta and OpenAI, that provide AI models within the EU have been subject to scrutiny by EU policymakers and regulators. These developers, companies using their products, and regulatory authorities in EU countries have all sought clear guidance on how to train and deploy AI models on personal data without violating the GDPR.

EDPB Opinion 28/2024 provides guidance on these matters. Though it is not binding on companies, DPAs will likely align with its guidance when interpreting the GDPR and prioritizing enforcement actions.

When is a model considered anonymous – and therefore not subject to the GDPR?

Though anonymous AI models are not subject to the GDPR, analysts to date have had varying opinions on whether a trained foundation AI model is anonymous.

EDPB Opinion 28/2024 states that such a model can be considered anonymous if there is a low likelihood of being able “(1) to directly or indirectly identify individuals whose data was used to create the model, and (2) to extract such personal data from the model through queries.” The opinion notes that this analysis must be conducted on a case-by-case basis and includes a non-prescriptive and non-exclusive list of methods that can help achieve anonymity.

When can an AI developer process personal data?

Per the GDPR, companies need a legal basis to process personal data. Analysts have noted that of the available legal bases, only “legitimate interest” seems relevant to how leading developers train foundation AI models. EDPB Opinion 28/2024 confirms that a company can use “legitimate interest” as a basis to train AI models using personal data, after an analysis of three factors: 

Purpose: whether the interest is lawful, clearly and precisely articulated, and real and present;

Necessity: whether personal data is necessary to achieve the purpose;

Balancing: whether, based on an analysis of the benefits, drawbacks, and expectations of individuals whose data is processed, the interests and rights of those individuals do not outweigh the company’s legitimate interest.

EDPB Opinion 28/2024 also describes how an AI model developed on personal data in violation of the GDPR may either be barred from deployment or be deployed only in a limited way, subject to assessments and safeguards.

What’s next?

The EU will continue to provide interpretation and guidance on how the GDPR, the EU AI Act (AIA), and other EU laws will apply to foundation AI models, even as large model developers continue to release impressive new models.

EDPB Opinion 28/2024 does not provide clarity on many other intersections of privacy and AI, including privacy by design or the processing of other sensitive information (like the user’s state of mind or political views).

Companies using AI systems should carefully monitor regulatory developments in the EU and across the world and incorporate relevant interpretation and guidance into their AI governance programs.

Enzai is here to help

Enzai’s AI GRC platform can help your company deploy AI in accordance with best practices and emerging regulations, standards and frameworks, such as the EU AI Act, the Colorado AI Act, the NIST AI RMF and ISO/IEC 42001. To learn more, get in touch here.

Build and deploy AI with confidence

Enzai's AI governance platform allows you to build and deploy AI with confidence.
Contact us to begin your AI governance journey.