Tea App Data Breach Exposes 72,000 Users Via AI-Generated Code Security Lapse

Coin World · Friday, Jul 25, 2025 6:32 pm ET

Summary

- Tea app's data breach exposed 72,000 users' IDs, selfies, and private messages via unsecured Firebase database.

- AI-generated "vibe coding" and default public access settings caused security failures, contradicting app's safety claims.

- Leaked data spreads on BitTorrent despite deletion attempts, risking identity theft and harassment for women-focused platform.

- Experts warn 48% of AI code has vulnerabilities, highlighting systemic risks as 25% of startups use such code for core features.

- Incident raises legal compliance questions and underscores dangers of prioritizing speed over security in app development.

A viral women-only dating safety app called Tea has suffered a catastrophic data breach, exposing over 72,000 user records, including government-issued IDs, selfies, and private messages. The breach, uncovered by users on 4chan, revealed that the app’s backend database was left unsecured, lacking passwords, encryption, or authentication. The leaked data—spanning 59.3 GB—includes 13,000 verification selfies and IDs, tens of thousands of user-generated images, and messages from as recently as 2024 and 2025 [1]. These details contradict Tea’s previous claims that the breach involved only “old data,” underscoring a critical failure in its security infrastructure.

The app, which rose to No. 1 on the App Store with 4 million users, was marketed as a safe space for women to discuss romantic relationships. However, the breach has exposed sensitive information that could enable identity theft or harassment. Verification documents, such as government IDs, are now publicly available on decentralized networks like BitTorrent, where automated scripts continue to spread the data even after the initial 4chan thread was deleted. Users’ private direct messages, a core feature of the app, were also compromised.

Tea’s security lapse has been attributed to “vibe coding,” a practice where developers rely on AI tools like ChatGPT to generate code without rigorous security reviews. The original hacker noted that Tea’s Firebase bucket was configured by default to be publicly accessible, lacking authentication [1]. This approach, while expedient, has left the app vulnerable to exploitation. Researchers from Georgetown University warn that 48% of AI-generated code contains exploitable flaws, yet 25% of Y Combinator startups use such code for core features [1]. Cybersecurity experts like Santiago Valdarrama have criticized the trend, emphasizing that AI-generated code often lacks the safeguards needed to prevent breaches [1].
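The hacker’s description is consistent with a Firebase Storage ruleset left in a wide-open state. The following is an illustrative sketch only — a hypothetical example of the class of misconfiguration described, not Tea’s actual configuration — contrasting a publicly accessible ruleset with one that requires an authenticated user:

```
// ILLUSTRATIVE ONLY — hypothetical rules showing the kind of
// misconfiguration described in the report, not Tea's actual rules.

// Wide-open: any unauthenticated client can read and write every object.
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if true;   // no authentication check at all
    }
  }
}

// Minimally hardened: access requires a signed-in Firebase user.
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if request.auth != null;
    }
  }
}
```

Even the hardened variant above is a floor, not a recommendation: production rules for sensitive data such as ID documents would typically scope access per user (e.g., matching the path against `request.auth.uid`) rather than granting every signed-in user access to all objects.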

The incident has sparked public backlash and raised questions about Tea’s compliance with data protection laws. The company has not disclosed a detailed timeline of the breach or mitigation steps, further fueling skepticism. Critics highlight the irony of an app designed to protect women from “dangerous men” failing to secure its own data. The breach also amplifies concerns about the growing reliance on generative AI in software development, with examples like SaaStr’s 2025 incident—where an AI agent deleted a company’s production database—demonstrating systemic risks [1].

For affected users, the exposure of personal identifiers and selfies poses significant risks, including targeted scams and social engineering attacks. Experts recommend monitoring accounts for suspicious activity and enrolling in credit monitoring services. The breach serves as a cautionary tale for apps targeting niche audiences, illustrating that even non-technical branding (e.g., “women-only safety”) cannot offset foundational security failures.

Source: [1] [Tea App Data Breach Exposes 72,000 User IDs](https://ktla.com/news/technology/ap-technology/ap-tea-an-app-for-women-to-safely-talk-about-men-they-date-has-been-breached-user-ids-exposed/) [2] [Tea App Leak Discussion Thread](https://boards.4chan.org/g/thread/106022356/tea-tea-app-leak-general-2) [3] [Decrypt’s Analysis of Tea’s Security Fail](https://decrypt.co/331961/tea-app-claimed-protect-women-exposes-72000-ids-epic-security-fail)


