Falling for AI-Generated Photos: A Growing Concern in Media Literacy

Saturday, Sep 20, 2025, 7:17 am ET

A friend's hacked social media account was used, along with AI-generated photos, to promote a scam. People are falling for it, and the author argues that AI is eroding media literacy.

In recent months, the use of artificial intelligence (AI) to generate convincing but fraudulent content has become a growing concern, particularly on social media platforms. A recent incident involving the hacking of a friend's social media account using AI-generated photos to promote a scam has highlighted the urgent need to address the issue of media literacy.

The incident, which occurred on September 12, 2025, has led many to question the role of AI in shaping public perception and the potential consequences for media literacy. AI-generated content can be highly convincing, making it difficult for users to discern between genuine and fraudulent information. This has significant implications for social media platforms, which are increasingly under scrutiny for their role in facilitating the spread of misinformation.

The use of AI to generate content is not new, but its impact has been amplified by the widespread adoption of social media. According to a recent study, the number of AI-generated posts on social media has increased by 300% in the past year alone[1]. This surge in AI-generated content has led to a rise in scams and fraudulent activities, with users falling victim to convincing but false information.

The issue of media literacy has become a pressing concern for policymakers and technology companies alike. As AI continues to evolve, the challenge of distinguishing between genuine and fraudulent content will only become more complex. Social media platforms have a responsibility to address this issue by implementing robust moderation policies and providing users with the tools they need to critically evaluate the information they encounter.

In response to the growing concern over AI-generated content, several lawmakers have called for reforming Section 230, a federal law that provides liability protections for social media platforms. Sen. Lindsey Graham, R-S.C., has introduced a bill to increase liability for social media platforms that are used to disseminate content related to the sexual exploitation of children. The debate over Section 230 is likely to continue, with advocates arguing that the law is essential for protecting free speech and critics contending that it provides too much immunity for platforms that facilitate harmful content.

The impact of AI on media literacy is a complex issue that requires a multi-faceted approach. Social media platforms must take proactive steps to combat the spread of fraudulent content, while policymakers must consider the appropriate balance between protecting free speech and holding platforms accountable for the content they facilitate. As AI continues to evolve, it is crucial that we remain vigilant in our efforts to promote media literacy and protect users from the harmful consequences of fraudulent content.

