Italy’s data protection authority, Garante, raised concerns that OpenAI’s ChatGPT may violate the European Union’s General Data Protection Regulation (GDPR).
This stems from an investigation initiated last year following a temporary ban on ChatGPT in Italy. The probe highlighted potential problems with the handling of personal data as well as with age verification processes.
The Italian Data Protection Authority’s investigation was motivated by several factors, including incidents in which ChatGPT messages and payment details were inadvertently exposed, and the absence of age verification to keep minors off the service.
Additionally, the authority questioned the legality of OpenAI’s data collection practices used to train ChatGPT and raised the alarm about the AI’s occasional generation of inaccurate information about individuals.
AI companies have been under intense scrutiny for their data scraping practices, with numerous copyright lawsuits currently in progress.
There are also several examples of AI “hallucinations” libelously smearing individuals, sometimes falsely implicating them in crimes such as embezzlement or sexual harassment. Libel lawsuits have been brought against AI developers, but none has yet been resolved.
In response to these allegations, OpenAI has asserted its commitment to compliance with GDPR and other privacy laws, emphasizing its dedication to data protection and privacy.
OpenAI’s statement further highlighted its efforts to minimize the inclusion of personal data in its training processes and its system’s design to reject requests for private or sensitive information, stating, “We believe our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy.”
The global scrutiny of AI investments
Meanwhile, the US Federal Trade Commission (FTC), under Chair Lina Khan, is investigating the relationships between leading AI startups, such as OpenAI, and tech giants like Microsoft, Amazon, and Google.
This week, several civil rights groups and organizations, including the Mozilla Foundation, questioned the Microsoft–OpenAI relationship and urged the European Commission to investigate the partnership for potential breaches of antitrust regulations.
The FTC’s investigations similarly aim to assess whether these partnerships grant undue influence or privileged access to the larger firms, undermining fair competition.
As these technologies become increasingly embedded in society, the need for a regulatory framework that balances innovation with ethical considerations, privacy protection, and market fairness is clear. Such a framework, however, has yet to materialize.
The outcomes of these investigations and regulatory efforts will likely set important precedents for the governance of AI technologies in the future, shaping the trajectory of AI development and its integration into the global digital economy.