Sam Altman says that bots are making social media feel ‘fake’

The Challenge of Authenticity in the Age of AI
Sam Altman, a prominent X user and a shareholder in Reddit, recently voiced a concern in a post on X: the proliferation of bots has blurred the line between genuine human expression and automated content on social media platforms.
The Spark of Realization
This observation stemmed from his reading of the r/Claudecode subreddit, where discussions centered on OpenAI Codex, the software programming service OpenAI launched in May as a competitor to Anthropic’s Claude Code.
The subreddit had become saturated with posts from users claiming to have switched from Claude Code to Codex. This trend prompted a Reddit user to jokingly question whether one could adopt Codex without announcing the change on the platform.
Questioning the Source
Altman found himself questioning the authenticity of these posts. He admitted, “I assume it’s all fake/bots, even though in this case I know codex growth is really strong and the trend here is real.”
He then detailed his reasoning, identifying several contributing factors. These included people adopting language patterns characteristic of Large Language Models (LLMs), the tendency for online communities to exhibit correlated behavior, and the influence of hype cycles and engagement-driven monetization strategies.
The Paradox of Mimicry
Altman pointed out a seeming paradox: humans are beginning to emulate the speech patterns of LLMs, even though LLMs were originally designed to mimic human communication, right down to hallmarks like the em dash.
Furthermore, OpenAI’s models were trained on data from platforms like Reddit, where Altman previously served on the board and where he holds a significant stake.
The Dynamics of Online Communities
He acknowledged that online fandoms, particularly those with highly active users, often exhibit unusual behaviors. Groups can easily devolve into negativity when overwhelmed by individuals expressing frustrations.
Altman also critiqued the incentives that encourage engagement on social media platforms, as both sites and creators benefit financially from increased activity.
Concerns About "Astroturfing"
Interestingly, Altman suggested that the pro-OpenAI sentiment on the subreddit might be artificially inflated, citing concerns about “astroturfing.” This refers to the practice of using fake accounts or paid posters to create a false impression of public support.
OpenAI's Own Experiences with Criticism
This concern arises despite OpenAI’s own experience with negative feedback. Following the release of GPT-5, OpenAI subreddits were flooded with critical posts, which were prominently upvoted.
Users voiced complaints about GPT’s personality and its tendency to consume credits without completing tasks. Altman addressed these issues in a Reddit AMA session, promising improvements.
A Growing Sense of Inauthenticity
The GPT subreddit has struggled to regain its previous positive atmosphere, with users continuing to express dissatisfaction with the changes introduced in GPT-5. This led Altman to conclude, “The net effect is somehow AI twitter/AI Reddit feels very fake in a way it really didn’t a year or two ago.”
The Broader Implications
If this perception is accurate, the question arises: who is responsible? The advancements in LLMs have created a situation where these models pose a threat not only to social media but also to fields like education, journalism, and the legal system.
While the exact number of bot-generated posts is unknown, data suggests it is substantial. Imperva reported that over half of all internet traffic in 2024 originated from non-human sources, largely driven by LLMs.
Even X’s own bot, Grok, estimates that hundreds of millions of bots are active on the platform.
Potential Motives and Future Prospects
Some speculate that Altman’s concerns are a prelude to marketing OpenAI’s rumored social media platform, which The Verge reported in April was in early development.
However, regardless of his motives, the question remains: could OpenAI actually build a social network free of bots? And ironically, even a platform populated exclusively by bots would likely exhibit the same patterns of groupthink and echo chambers, as research at the University of Amsterdam has demonstrated.