Google’s Generative AI Under Investigation by EU Regulators
The European Union’s lead privacy regulator has opened an investigation into whether Google has complied with the bloc’s data protection laws in relation to the use of people’s information for training generative AI.
Background on the Investigation
Specifically, the Irish Data Protection Commission (DPC) is looking into whether Google needed to carry out a data protection impact assessment (DPIA) before processing personal data in the development of its foundational AI model, Pathways Language Model 2 (PaLM 2). This investigation forms part of wider efforts by the DPC and its EU/EEA peer regulators to regulate the processing of personal data in the development of AI models and systems.
The Risks Associated with Generative AI
Generative AI tools are infamous for producing plausible-sounding falsehoods, which creates a significant legal risk for their makers. The ability of these tools to serve up personal information on demand exacerbates this risk. As a result, regulators are scrutinizing the types of information used as AI training fodder and how it was obtained.
Regulatory Pressure Mounts
This investigation is not an isolated incident. A number of LLM developers have already faced questions and GDPR enforcement related to privacy compliance. For example, OpenAI, the maker of GPT and ChatGPT, has attracted GDPR complaints over its use of people’s data for AI training. Similarly, Meta, which develops the Llama AI model, has paused plans to train AI on European users’ data in response to regulatory pressure.
The Importance of Data Protection Impact Assessments
A DPIA is crucial for ensuring that individuals’ fundamental rights and freedoms are adequately considered and protected when the processing of personal data is likely to result in a high risk. In this case, Google’s use of vast amounts of data to train its generative AI models raises significant concerns about the protection of EU citizens’ personal information.
Google’s Response
In response to the investigation, Google stated that it takes seriously its obligations under the GDPR and will work constructively with the DPC to answer its questions. However, the company did not engage with questions about the sources of data used to train its generative AI tools.
The Broader Implications
This investigation highlights the growing regulatory scrutiny surrounding the use of personal data for AI training. As regulators continue to grapple with the complex issues surrounding AI and data protection, one thing is clear: companies must prioritize transparency and accountability in their use of personal data or risk facing significant consequences.
Timeline of Key Events
- The Irish Data Protection Commission (DPC) opens an investigation into whether Google has complied with the EU’s data protection laws in relation to the use of people’s information for training generative AI.
- Regulators begin scrutinizing the types of information used as AI training fodder and how it was obtained.
- OpenAI, the maker of GPT and ChatGPT, attracts GDPR complaints over its use of people’s data for AI training.
- Meta pauses plans to train AI using European users’ data due to regulatory pressure.
Key Players
- Irish Data Protection Commission (DPC)
- OpenAI
- Meta