
Europe’s privacy watchdog probes Google over data used for AI training


Google’s booth at the Integrated Systems Europe conference on January 31, 2023, in Barcelona, Spain. Credit: Getty Images | Cesc Maymo

Google is under investigation by Europe’s privacy watchdog over its processing of personal data in the development of one of its artificial intelligence models, as regulators ramp up their scrutiny of Big Tech’s AI ambitions.

Ireland’s Data Protection Commission, which is responsible for enforcing the EU’s General Data Protection Regulation, said it had launched a statutory inquiry into the tech giant’s Pathways Language Model 2, or PaLM 2.

PaLM 2 was launched in May 2023 and predates Google’s latest Gemini models, which power its AI products. Gemini, launched in December of the same year, is now the core model behind Google’s text- and image-generation offerings.

The inquiry will assess whether the company has breached its obligations under GDPR on the processing of the personal data of citizens of the EU and European Economic Area.

Under the framework, companies must conduct a data protection impact assessment before processing such information when the way it is used is likely to pose a high risk to the rights and freedoms of individuals.

This applied in particular to new technologies and was “of crucial importance in ensuring that the fundamental rights and freedoms of individuals are adequately considered and protected,” the regulator said in a statement.

Whether Google carried out such an assessment is being examined in the investigation.

A Google spokesperson said: “We take seriously our obligations under the GDPR and will work constructively with the DPC to answer their questions.”

This is the latest in a series of actions by the DPC against the Big Tech groups that are building large language models.

In June, Meta paused its plans to train its model Llama on public content shared by adults on Facebook and Instagram across Europe, following discussions with the Irish regulator. Meta subsequently limited the availability of some of its AI products to users in the region.

A month later, X users discovered that they were being “opted in” to having their posts on the site used to train systems at Elon Musk’s xAI startup.

The platform suspended its processing of several weeks’ worth of European user data that had been harvested to train its Grok AI model, following legal proceedings by the DPC. That was the first time that the regulator had used its powers to take such action against a tech firm.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
