A Love Letter Gone Wrong: MyLovely.ai Breach Exposes More Than Just Digital Hearts

In what must surely qualify as the least surprising cybersecurity development of 2026, MyLovely.ai—an AI “girlfriend” platform specializing in not-safe-for-work content—has suffered a data breach that somehow managed to expose 113,000 explicit user prompts. About 70,000 of those prompts were helpfully tied to specific user IDs, turning what should have been private bedroom fantasies into searchable database entries for cybercriminals. Who could have foreseen that putting one’s deepest sexual prompts into a cloud-based AI service might end badly?

The breach, which security researchers say affected over 106,000 registered users, spilled a 2.1 GB JSON database onto a popular cybercrime forum like a dropped tray of intimate photographs. The haul included the usual suspects—email addresses, social media handles, subscription tiers—but also the coup de grâce: direct links to generated explicit images and the exact text prompts users submitted to create them. Yes, every carefully worded request for pixelated companionship is now potentially searchable, sortable, and most importantly, linkable to real-world identities.
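How trivial is “searchable, sortable, and linkable” in practice? Here is a minimal sketch, assuming the dump is in JSON-lines format and using invented field names (user_id, email, prompt, and the rest are illustrative guesses, not the actual leaked schema):

```python
import json

# Hypothetical record, illustrating the categories of data reported in the
# breach (email, social handles, subscription tier, prompt text, image links).
# The key names below are assumptions, not the actual leaked schema.
sample_record = {
    "user_id": 48213,
    "email": "user@example.com",
    "discord": "user#1234",
    "x_handle": "@user",
    "tier": "premium",
    "prompt": "[explicit prompt text]",
    "image_url": "https://cdn.example.invalid/gen/48213/0001.png",
}

def prompts_for(dump_path: str, email: str) -> list[str]:
    """Scan a JSON-lines dump and return every prompt tied to one address."""
    hits = []
    with open(dump_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("email") == email:
                hits.append(record.get("prompt", ""))
    return hits
```

No exotic tooling, no zero-days required: just a text file and a loop, which is rather the point.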

For those keeping score at home, this is merely the latest installment in what appears to be a burgeoning genre of “AI girlfriend service gets hacked, users’ fantasies become public record.” Malwarebytes noted this is “yet another data breach affecting an AI girlfriend service,” which raises the question of whether these platforms are being built with security in mind or simply duct-taped together rapidly enough to capitalize on market demand before the inevitable collapse.

The implications are predictably grim. With email addresses, Discord usernames, and X handles bundled alongside explicit content and the prompts used to generate it, affected users face the exciting prospect of sextortion schemes that threaten to expose not just that someone visited the site, but precisely what they asked for—an altogether more mortifying proposition. Cybercriminals now possess the raw materials to correlate specific sexual interests and AI-generated content with identifiable email accounts and social media profiles.

Perhaps the most inexplicable detail? The dataset included “content moderation reports”—suggesting not only were users’ activities logged, but someone was actively reviewing them. Nothing says “intimate digital companion” quite like knowing your AI girlfriend is forwarding your conversation logs to a moderation queue.

The advice from security professionals is weary and familiar: don’t trust platforms promising privacy “just because they say so,” avoid using real email addresses or social credentials for sensitive services, and remember that anything uploaded online carries the risk of becoming public. It’s sensible guidance that will undoubtedly be ignored by the next wave of users flocking to the next poorly-secured AI intimacy platform.

Because if there’s one thing history has taught us, it’s that the promise of algorithmic companionship will always outweigh the risk of one’s explicit prompt history appearing on a Russian hacker forum. Love, after all, is blind—though in this case, perhaps it should have been paying a bit more attention to encryption protocols.