Sam Altman Says Bots Are Making Social Media Feel Fake

Ankit Agarwal, Marketing Head

September 9, 2025 · 3 min read

OpenAI CEO and Reddit shareholder Sam Altman recently voiced his frustration about the growing influence of bots on social media platforms. In a post on X (formerly Twitter), he admitted that social platforms today feel “fake” because it is nearly impossible to tell whether posts are created by humans or generated by AI-driven bots.

Altman’s Realization on Reddit

Altman’s comments stemmed from his experience browsing the r/Claudecode subreddit, where many posts praised OpenAI’s new coding assistant, Codex. The subreddit had become so saturated with “I switched to Codex” posts that he began to question whether the accounts behind them were genuine or automated.

He explained, “I assume it’s all fake or bots, even though I know Codex growth is very strong and the trend here is real.”

Why Social Media Feels “Fake”

Altman broke down his reasoning in real time, pointing to several factors:

  • Real users unintentionally mimicking “LLM-speak” after exposure to AI-written content.

  • Social media behavior clustering, where the Extremely Online crowd tends to sound alike.

  • The cyclical hype patterns of online communities (“it’s so over / we’re so back”).

  • Engagement-first algorithms and monetization pressure encouraging inauthentic interaction.

  • Past cases of astroturfing, where companies secretly pay for posts to boost visibility.

  • The undeniable presence of bots alongside real users.

He argued that a mix of these dynamics makes platforms like Reddit and X appear more artificial than ever before.

A Problem Largely of AI’s Own Making

Ironically, Altman acknowledged that humans now sound more like large language models (LLMs)—systems that were themselves trained to mimic patterns of human speech. OpenAI’s models, for example, learned heavily from platforms like Reddit, where Altman was once a board member and is still a major shareholder.

At the same time, fan communities around AI tools have become polarized. Once-enthusiastic subreddits such as r/ChatGPT turned hostile after the rocky launch of GPT-5, with users complaining about degraded output, higher costs, and unfinished tasks. Negative posts were amplified more than positive feedback, further fueling suspicion about authenticity.

The Growing Bot Problem Online

The scale of the issue extends far beyond OpenAI or Reddit. According to cybersecurity firm Imperva, more than 50% of all internet traffic in 2024 was generated by bots, many of them powered by modern AI technologies. X’s own AI chatbot, Grok, has estimated that hundreds of millions of bots are likely active on the platform.

These numbers point to a reality where much of social media interaction may be synthetic, whether created by automated scripts, commercial marketing campaigns, or LLM-powered accounts.

Is Altman Hinting at a New Social Network?

Some analysts believe Altman’s remarks could be tied to OpenAI’s rumored project to build its own social media platform. Reports earlier this year suggested OpenAI was exploring a competitor to X and Facebook. Critics have speculated that Altman may be setting the stage by undermining trust in existing platforms.

Whether or not this is true, it raises a question: could any new platform—especially one built by an AI-first company—actually remain bot-free?

Even Bots Create Echo Chambers

Interestingly, research has shown that even bot-only platforms are not immune to human-like behavior. A University of Amsterdam experiment in 2025 demonstrated that social networks composed entirely of bots quickly formed cliques, reinforced biases, and built toxic echo chambers among themselves.

This suggests that the issue is not just humans versus bots, but the underlying structures of online engagement itself.

The Takeaway

Altman’s comments highlight a growing crisis of authenticity in the digital age. As bots and LLM-driven accounts flood platforms, human users risk being drowned out—or worse, mistaken for AI themselves. With over half of internet traffic already automated, the future of social media may hinge less on who participates and more on whether authenticity can still be preserved.

Ankit Agarwal, Marketing Head

Ankit Agarwal is the Marketing Head at LogicBalls, an innovative AI-driven content generation platform. With deep expertise in on-page and off-page SEO, he specializes in crafting strategies that drive organic traffic and boost search engine rankings. Ankit is also a thought leader in AI for writing, leveraging cutting-edge technology to optimize content creation and marketing efficiency. His passion lies in merging AI with SEO to help brands scale their digital presence effortlessly.
