Italy puts ChatGPT on notice for alleged privacy missteps (yes, again)



Summary

Italy’s data protection authority, Garante, has accused OpenAI of violating European privacy laws with ChatGPT and has given the company 30 days to respond.

The accusation follows a months-long investigation that began after Italy temporarily blocked access to the chatbot over privacy concerns. OpenAI announced measures to address those concerns, but the latest decision suggests they did not satisfy the authority.

"Following the temporary ban on processing imposed on OpenAI by the Garante on 30 March of last year, and based on the outcome of its fact-finding activity, the Italian DPA concluded that the available evidence pointed to the existence of breaches of the provisions contained in the EU GDPR."

— Garante

It is unclear which specific provisions were allegedly violated, but potential penalties include a fine of up to €20 million or up to 4% of OpenAI's global annual revenue, whichever is higher, as well as possible changes to ChatGPT's basic functionality. ChatGPT may be GDPR-compliant today, but it was not at launch, when OpenAI was illegally collecting data from users and using that data to train its models.

OpenAI struggles with European privacy rights

ChatGPT has drawn scrutiny from several authorities over its compliance with the General Data Protection Regulation (GDPR), one of the most stringent data protection regulations in the world.


The GDPR gives European citizens certain rights as “data subjects”, including the right to be informed about how their data is collected and used, and the right to have that data deleted, even if it is publicly available.

In addition to Italy, authorities in France, Ireland, and Germany are investigating how OpenAI collects and uses data. OpenAI has engaged with privacy regulators to answer their questions and receive feedback but does not disclose the datasets it uses to train its AI models.

OpenAI has also published a statement on its website describing plans to apply better filters to private data, to stop using private data to train its AI models, and – "where feasible" – to delete such data from its systems at a user's request.
