
ChatGPT Privacy Complaint: Defamatory Hallucinations

March 20, 2025

OpenAI Faces New European Privacy Challenge

OpenAI is facing another privacy complaint in Europe, this one concerning its widely used AI chatbot's propensity for generating fabricated information about real people.

False Information and its Implications

The privacy advocacy organization Noyb is backing a Norwegian individual who discovered ChatGPT had falsely reported that he was convicted of murdering two of his children and of attempting to murder a third. This discovery has understandably caused him significant distress.

Previous complaints regarding ChatGPT centered on inaccuracies like incorrect birthdates or biographical details. A key issue is the lack of a mechanism for individuals to correct false information generated about them by the AI.

GDPR and the Right to Rectification

While OpenAI typically blocks responses to prompts that elicit such errors, the European Union’s General Data Protection Regulation (GDPR) grants European citizens rights regarding their personal data. This includes the right to have inaccurate personal data rectified.

The GDPR also mandates that data controllers ensure the accuracy of personal data they process. This is a central point Noyb is emphasizing in its latest complaint.

Noyb's Stance on Data Accuracy

“The GDPR is unequivocal: personal data must be accurate,” stated Joakim Söderberg, a data protection lawyer at Noyb. “Users have the right to correction when inaccuracies exist. A simple disclaimer acknowledging potential errors is insufficient. Disseminating false information, even with a disclaimer stating its potential inaccuracy, is unacceptable.”

Potential Penalties and Enforcement

Violations of the GDPR can result in substantial penalties, reaching up to 4% of a company’s global annual revenue.

Enforcement actions could also necessitate changes to AI products. For instance, Italy’s data protection authority temporarily blocked access to ChatGPT in spring 2023, prompting OpenAI to revise its user disclosures. The authority later fined OpenAI €15 million for processing personal data without a lawful basis.

A Cautious Approach from Regulators

Since then, European privacy watchdogs have generally adopted a more measured approach to generative AI, seeking to determine the best way to apply the GDPR to these emerging technologies.

Ireland’s Data Protection Commission (DPC), which is handling a previous Noyb complaint, cautioned against hasty bans on generative AI tools, advocating instead for a period of assessment to clarify how the law applies.

Ongoing Investigations

A privacy complaint against ChatGPT, filed with Poland’s data protection authority in September 2023, remains unresolved, indicating a deliberate pace of investigation.

A Call for Action

Noyb’s current complaint aims to refocus attention on the risks associated with AI systems prone to “hallucinations” and the need for robust regulatory oversight.

A Disturbing Case of AI-Generated Defamation

A recent incident has drawn renewed scrutiny to OpenAI, the maker of ChatGPT, through a complaint submitted by the nonprofit organization Noyb. The complaint concerns the AI’s fabrication of damaging and untrue information about a named individual.

False Accusations and the Source of the Complaint

The issue arose when ChatGPT, in response to a query about “Arve Hjalmar Holmen,” falsely asserted that he had been convicted of child murder and sentenced to 21 years imprisonment for the deaths of his two sons. Noyb shared a screenshot of this interaction with TechCrunch, highlighting the severity of the AI’s “hallucination.”

Despite the entirely fabricated nature of the claim, the AI’s response contained some factual elements. It correctly noted that Holmen has three children and accurately identified their genders and his hometown. This combination of truth and falsehood makes the incident particularly unsettling.

Investigation and Potential Causes

Noyb representatives stated they conducted thorough research to rule out any possibility of mistaken identity. They examined newspaper archives but found no explanation for why the AI generated such a horrific and untrue narrative.

Experts suggest that the underlying mechanism of large language models – predicting the next word based on vast datasets – may be a contributing factor. The training data could contain numerous stories of filicide, influencing the AI’s response to a query about a specific individual.

Legal Implications Under GDPR

Noyb argues that such outputs are not only unacceptable but also unlawful under EU data protection regulations, specifically the GDPR. While OpenAI displays a disclaimer acknowledging potential errors, Noyb contends this does not absolve the company of its responsibility to prevent the generation of egregious falsehoods.

OpenAI responded to the complaint through its European PR firm, Headland Consultancy. A spokesperson stated the company is actively researching ways to improve accuracy and reduce hallucinations, and noted the version of ChatGPT in question has been updated with online search capabilities to enhance accuracy.

A Pattern of Fabricated Information

This GDPR complaint is not an isolated incident. Noyb points to other cases where ChatGPT fabricated legally compromising information, including false accusations against an Australian mayor and a German journalist. This suggests a systemic issue with the AI tool.

Recent Improvements and Remaining Concerns

Following an update to the AI model, ChatGPT reportedly stopped generating the false information about Holmen. Noyb attributes this change to the tool’s new ability to search the internet for information when asked about individuals, rather than relying solely on its internal dataset.

However, both Noyb and Holmen remain concerned that the incorrect and defamatory information may still be retained within the AI model’s memory.

The Importance of Legal Compliance

“Adding a disclaimer that you do not comply with the law does not make the law go away,” emphasized Kleanthi Sardeli, a data protection lawyer at Noyb. She further stated that AI companies cannot simply hide false information from users while continuing to process it internally.

Sardeli added, “AI companies should stop acting as if the GDPR does not apply to them, when it clearly does. If hallucinations are not stopped, people can easily suffer reputational damage.”

Complaint Filed with Norwegian Authorities

Noyb has filed the complaint against OpenAI with the Norwegian data protection authority, hoping for an investigation despite the company’s U.S. headquarters and the potential jurisdictional complexities.

Previous Complaint and Ongoing Delays

A previous GDPR complaint filed by Noyb against OpenAI in Austria was referred to Ireland’s Data Protection Commission (DPC) after OpenAI designated its Irish division as the provider of ChatGPT to European users.

As of the latest update, this complaint remains under investigation by the DPC, with no clear timeline for a conclusion.

Risteard Byrne, assistant principal officer for communications at the DPC, confirmed the formal handling of the complaint began in September 2024 and is still ongoing.

This article was updated to include OpenAI’s official statement.

#ChatGPT #OpenAI #AI #privacy #defamation #hallucinations