Signal
September 11, 2025

Massive AI App Data Breach: 116 GB of User Information Left Exposed

A significant security misconfiguration at Vyro AI has exposed 116 GB of sensitive user data from popular generative AI apps, highlighting growing concerns about data protection in the rapidly expanding AI industry.

What Happened?

The AI app developer behind ImagineArt, Chatly, and Chatbotx accidentally left an Elasticsearch server completely unprotected, allowing real-time access to user logs from both production and development environments. Cybernews researchers discovered the breach, which included up to a week's worth of sensitive information.
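An unauthenticated Elasticsearch node typically answers its root endpoint with cluster metadata instead of a 401 challenge, which is how researchers spot exposures like this one. The sketch below is a simplified illustration (not taken from the article) of how a defender might classify such a response; the field names follow the standard JSON that Elasticsearch returns from its root endpoint.

```python
import json

def looks_unauthenticated(status_code: int, body: str) -> bool:
    """Heuristic: a 200 response whose JSON carries cluster metadata
    suggests the node is answering queries without authentication."""
    if status_code != 200:
        return False
    try:
        info = json.loads(body)
    except json.JSONDecodeError:
        return False
    # Elasticsearch's root endpoint normally includes these keys.
    return "cluster_name" in info and "version" in info

# A secured node challenges the client before revealing anything:
print(looks_unauthenticated(401, '{"error": "security_exception"}'))  # False
# An open node hands its metadata to anyone who asks:
print(looks_unauthenticated(
    200, '{"cluster_name": "prod-logs", "version": {"number": "8.5.0"}}'))  # True
```

A real scan would issue the HTTP request as well, but the classification step above is the heart of it: an open node volunteers information that a secured one withholds.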

What Data Was Exposed?

The misconfigured database contained:

  • AI prompts and conversations from users
  • Bearer authentication tokens that could enable account takeovers
  • User agent information revealing device and browser details

Security experts warn that this combination of data could let attackers monitor user behavior, steal personal information, and hijack accounts to fraudulently purchase AI tokens.
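Bearer tokens are dangerous to leak precisely because possession is the entire credential: any client that presents the token in the Authorization header is treated as the account owner. A minimal server-side check (a hypothetical illustration, not Vyro AI's actual code) makes that clear:

```python
def authenticate(headers: dict, valid_tokens: set) -> bool:
    """RFC 6750-style bearer check: whoever holds the token *is* the user.

    Nothing here verifies who is presenting the token, which is why a
    token copied out of exposed logs enables immediate account takeover.
    """
    auth = headers.get("Authorization", "")
    scheme, _, token = auth.partition(" ")
    return scheme == "Bearer" and token in valid_tokens

valid = {"tok_abc123"}  # hypothetical token store
# The legitimate user and an attacker with the leaked token send
# byte-for-byte identical requests, so the server cannot tell them apart:
print(authenticate({"Authorization": "Bearer tok_abc123"}, valid))  # True
print(authenticate({"Authorization": "Bearer wrong-token"}, valid))  # False
```

This is why rotating or revoking tokens, not just changing passwords, is the key remediation step after a log exposure.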

Part of a Larger Pattern

This incident reflects broader security challenges in the AI industry. Recent problems include ChatGPT and Grok accidentally revealing user conversations in Google search results, and Expedia's AI chatbot providing dangerous instructions for creating weapons.

What This Means for Users

If you use AI-powered apps, regularly check your account activity and consider changing passwords on affected platforms. This breach demonstrates why it's crucial to review privacy settings and limit the sensitive information you share with AI services.

The incident serves as a stark reminder that as AI adoption accelerates, companies must prioritize robust security measures to protect user data from increasingly sophisticated threats.

🔗 Read the full article on SC World