AInvest Newsletter
Daily stocks & crypto headlines, free to your inbox
A viral women-only dating safety app called Tea has suffered a catastrophic data breach, exposing over 72,000 user records, including government-issued IDs, selfies, and private messages. The breach, uncovered by users on 4chan, revealed that the app’s backend database was left unsecured, with no passwords, encryption, or authentication. The leaked data, spanning 59.3 GB, includes 13,000 verification selfies and IDs, tens of thousands of user-generated images, and messages sent as recently as 2024 and 2025 [1]. These details contradict Tea’s earlier claim that the breach involved only “old data,” underscoring a critical failure in its security infrastructure.
The app, which rose to No. 1 on the App Store and amassed 4 million users, was marketed as a safe space for women to discuss romantic relationships. However, the breach has exposed sensitive information that could enable identity theft or harassment. Verification documents, such as government IDs, are now circulating on decentralized networks like BitTorrent, where automated scripts continue to spread the data even after the initial 4chan thread was deleted. Users’ private direct messages, a core feature of the app, were also compromised.
Tea’s security lapse has been attributed to “vibe coding,” a practice where developers rely on AI tools like ChatGPT to generate code without rigorous security reviews. The original hacker noted that Tea’s Firebase bucket was configured by default to be publicly accessible, lacking authentication [1]. This approach, while expedient, has left the app vulnerable to exploitation. Researchers from Georgetown University warn that 48% of AI-generated code contains exploitable flaws, yet 25% of Y Combinator startups use such code for core features [1]. Cybersecurity experts like Santiago Valdarrama have criticized the trend, emphasizing that AI-generated code often lacks the safeguards needed to prevent breaches [1].
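A publicly readable Firebase Storage bucket of the kind described above can be confirmed with a single unauthenticated request to Firebase's REST list endpoint. The sketch below illustrates the general technique; the bucket name is a placeholder, and nothing here reflects Tea's actual project configuration, which has not been published.

```python
# Minimal sketch: test whether a Firebase Storage bucket answers an
# object-listing request with no credentials at all. The bucket name
# used in the usage example is hypothetical.
import json
import urllib.error
import urllib.request


def list_url(bucket: str) -> str:
    """Build the public Firebase Storage object-list endpoint for a bucket."""
    return f"https://firebasestorage.googleapis.com/v0/b/{bucket}/o"


def is_publicly_listable(bucket: str) -> bool:
    """Return True if the bucket enumerates its objects without auth."""
    try:
        with urllib.request.urlopen(list_url(bucket), timeout=10) as resp:
            # A bucket with security rules disabled returns a JSON
            # payload whose "items" key lists every stored object.
            return "items" in json.load(resp)
    except urllib.error.URLError:
        # 401/403 (rules enforced) or network failure: treat as not listable.
        return False
```

With rules left open, the same endpoint hands back object names that can then be downloaded one by one, which is consistent with how the leaked files were reportedly enumerated and mirrored.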
The incident has sparked public backlash and raised questions about Tea’s compliance with data protection laws. The company has not disclosed a detailed timeline of the breach or mitigation steps, further fueling skepticism. Critics highlight the irony of an app designed to protect women from “dangerous men” failing to secure its own data. The breach also amplifies concerns about the growing reliance on generative AI in software development, with examples like SaaStr’s 2025 incident—where an AI agent deleted a company’s production database—demonstrating systemic risks [1].
For affected users, the exposure of personal identifiers and selfies poses significant risks, including targeted scams and social engineering attacks. Experts recommend monitoring accounts for suspicious activity and enrolling in credit monitoring services. The breach serves as a cautionary tale for apps targeting niche audiences, illustrating that even non-technical branding (e.g., “women-only safety”) cannot offset foundational security failures.
Source: [1] [Tea App Data Breach Exposes 72,000 User IDs](https://ktla.com/news/technology/ap-technology/ap-tea-an-app-for-women-to-safely-talk-about-men-they-date-has-been-breached-user-ids-exposed/) [2] [Tea App Leak Discussion Thread](https://boards.4chan.org/g/thread/106022356/tea-tea-app-leak-general-2) [3] [Decrypt’s Analysis of Tea’s Security Fail](https://decrypt.co/331961/tea-app-claimed-protect-women-exposes-72000-ids-epic-security-fail)