Anthropic users face a new choice – opt out or share your chats for AI training

Anthropic's Data Usage Policy Changes and User Choice
Anthropic is making significant changes to how it handles user data. Everyone who uses Claude must now decide by September 28th whether their conversations can be used to train the company's AI models.
Details of the Policy Shift
Previously, Anthropic did not use consumer chat data for model training. Now the company wants to train its AI systems on user conversations and coding sessions, and it will extend data retention to five years for anyone who does not actively opt out.
That is a major reversal. Until now, prompts and conversation outputs from Anthropic's consumer products were automatically deleted within 30 days, unless legal or policy requirements dictated otherwise or the input was flagged as violating the company's guidelines.
Policy Application
The new policies apply to users of Claude Free, Pro, and Max, including those who use Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access are unaffected, mirroring OpenAI's approach of shielding its enterprise customers from data-training policies.
Rationale Behind the Changes
Anthropic frames the changes as a matter of user choice, saying that users who do not opt out will "help us improve model safety," making its detection of harmful content more accurate, and will help future Claude models improve at skills like coding, analysis, and reasoning.
In other words, the company is asking users to help it build better products. But the motivations likely run deeper than altruism.
The Importance of Data in AI Development
Like every major large language model company, Anthropic needs data to train its AI models. Access to millions of Claude interactions provides exactly the kind of real-world content that can sharpen its competitive position against rivals such as OpenAI and Google.
Broader Industry Trends and Legal Scrutiny
Beyond competitive pressure, the changes also reflect broader industry shifts in data policy, as companies like Anthropic and OpenAI face mounting scrutiny over their data retention practices. OpenAI, for instance, is currently contesting a court order that forces it to retain all consumer ChatGPT conversations indefinitely, including deleted chats, stemming from a lawsuit filed by The New York Times and other publishers.
OpenAI’s COO, Brad Lightcap, described this order as “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.” The order impacts ChatGPT Free, Plus, Pro, and Team users, while enterprise customers and those with Zero Data Retention agreements are exempt.
User Confusion and Policy Transparency
A major concern is the confusion these shifting usage policies create; many users remain unaware of the changes.
Given how quickly the technology is moving, policy updates are inevitable. But many of these changes are sweeping and are mentioned only in passing amid other company news.
Design and User Experience
The design of these policies often contributes to that unawareness. Many ChatGPT users keep clicking "delete" toggles that do not actually delete their data. Anthropic's rollout of its new policy follows a familiar pattern.
New users will choose their preference during signup. Existing users, however, see a pop-up with "Updates to Consumer Terms and Policies" in large text and a prominent black "Accept" button; below it, in smaller print, sits a toggle for training permissions, switched to "On" by default.
As noted by The Verge, this design raises concerns that users may quickly accept the terms without realizing they are consenting to data sharing.
The Importance of Meaningful Consent
The stakes for user awareness are exceptionally high. Privacy advocates have consistently warned that the complexity of AI makes genuine user consent exceedingly difficult to achieve. The Federal Trade Commission, under the Biden administration, cautioned AI companies against surreptitiously altering terms of service or burying disclosures in complex language.
Whether the FTC is still overseeing these practices today, now that it operates with a reduced commission, remains an open question, one that has been put directly to the agency.