Italy’s data protection authority, the Garante, has imposed a €15m fine on OpenAI, the U.S.-based artificial intelligence company, for mishandling personal data through its ChatGPT service in breach of legal standards.
According to The Associated Press, the authority’s investigation revealed that OpenAI did not have a sufficient legal basis for processing user data and breached transparency principles required by EU privacy laws.
The Garante highlighted that OpenAI processed extensive personal data to train the ChatGPT algorithm without properly informing users, violating their privacy rights. OpenAI described the fine as “disproportionate” and announced plans to appeal, but the watchdog stands by its decision, citing a prior temporary suspension of ChatGPT in Italy in 2023, which was lifted only after OpenAI made compliance adjustments.
Additionally, the investigation brought to light the lack of an effective age verification mechanism to prevent children under 13 from accessing potentially inappropriate content generated by ChatGPT. In response, the Garante has ordered OpenAI to run a six-month educational campaign across Italian media platforms to raise public awareness of ChatGPT’s data collection practices.
This case forms part of broader regulatory efforts across the globe, particularly in the U.S. and Europe, where authorities are intensively scrutinizing AI technologies. The ongoing debates and regulatory measures, including the EU’s comprehensive AI Act, aim to mitigate risks associated with AI systems and ensure they operate within the bounds of user privacy and data protection laws.
Copyright © 2018 RegTech Analyst