Sam Altman, OpenAI's CEO and a Reddit shareholder, has expressed concerns about the authenticity of social media posts. In a series of tweets, Altman suggested that the growing presence of bots has made it difficult to tell whether content is written by humans or generated by AI, even in subreddits dedicated to AI tools like OpenAI Codex [1].
Altman's observations are rooted in his experience with the r/Claudecode subreddit, a community built around Anthropic's Claude Code tool, which has been flooded with posts from users claiming to have switched to OpenAI Codex. The trend led Altman to question the authenticity of these posts and to suggest that many of them might be generated by bots [1].
He attributes this phenomenon to several factors, including the rise of large language models (LLMs) and the optimization pressure exerted by social media platforms. Altman notes that humans are increasingly adopting the language patterns of LLMs, making it difficult to differentiate between human and AI-generated content [1].
Moreover, Altman points out that social media platforms' incentive to maximize engagement can encourage this kind of bot-driven activity, further exacerbating the problem. He also suggests that hype cycles and the behavior of online communities contribute to the spread of bot-generated content [1].
The issue of bot-generated content is not limited to social media. Data security company Imperva reported that more than half of all internet traffic in 2024 was non-human, largely driven by LLMs. The finding underscores how prevalent bots have become and the challenges they pose across sectors such as education, journalism, and the courts [1].
While Altman's concerns are valid, some have suggested that his comments may be part of a broader marketing strategy for OpenAI's rumored social media platform. However, regardless of his motivations, the issue of bot-generated content on social media is a significant one that requires attention from both tech companies and policymakers [1].
References:
[1] Sam Altman says that bots are making social media feel fake. TechCrunch, September 8, 2025. https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-are-making-social-media-feel-fake/