September 11, 2025

Major AI App Developer Exposes 116 GB of User Data Through Server Misconfiguration

A massive data exposure has hit the AI app industry: Vyro AI, developer of popular Android and iOS applications, accidentally exposed 116 GB of sensitive user logs through an unprotected Elasticsearch server. The exposure affected users of the ImagineArt, Chatly, and Chatbotx apps and highlights critical security gaps in AI platforms.
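In practice, an "unprotected" Elasticsearch server is one that answers standard REST queries without any authentication. The short check below is only a generic illustration of that failure mode, not a detail from the report; the host name and port are placeholders.

    # Rough check: does an Elasticsearch endpoint answer without credentials?
    # HOST is a hypothetical placeholder, not an address from the incident.
    import requests

    HOST = "http://example-cluster.internal:9200"

    try:
        # The root endpoint and /_cat/indices are standard Elasticsearch REST paths.
        resp = requests.get(f"{HOST}/_cat/indices?v", timeout=5)
        if resp.status_code == 200:
            print("Cluster lists its indices anonymously -- effectively unprotected:")
            print(resp.text)
        elif resp.status_code in (401, 403):
            print("Cluster requires authentication (expected for a hardened setup).")
        else:
            print(f"Unexpected response: {resp.status_code}")
    except requests.RequestException as exc:
        print(f"Endpoint not reachable: {exc}")

A cluster that returns its index listing to an anonymous request, as in the first branch above, is readable by anyone who finds its address.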

What Data Was Exposed?

The misconfigured database contained up to a week's worth of sensitive information from both production and development environments, including:

  • AI prompts and conversations - Complete chat histories and generated content requests
  • Bearer authentication tokens - Credentials that could enable account takeovers (see the log-redaction sketch after this list)
  • User agent information - Device and browser details for behavior tracking
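One common way bearer tokens end up in logs like these is that the full HTTP Authorization header is written out with every request. The following is a minimal defensive sketch assuming a Python logging pipeline; the logger names are illustrative and do not describe Vyro AI's actual stack.

    # Minimal sketch: scrub bearer tokens from log messages before they are persisted.
    # The setup below is hypothetical, not Vyro AI's pipeline.
    import logging
    import re

    BEARER_RE = re.compile(r"Bearer\s+[A-Za-z0-9\-._~+/]+=*", re.IGNORECASE)

    class RedactBearerTokens(logging.Filter):
        def filter(self, record: logging.LogRecord) -> bool:
            # Replace any bearer token in the formatted message with a placeholder.
            record.msg = BEARER_RE.sub("Bearer [REDACTED]", record.getMessage())
            record.args = ()  # message is already fully formatted
            return True

    logger = logging.getLogger("request-log")
    logger.addHandler(logging.StreamHandler())
    logger.addFilter(RedactBearerTokens())

    # The raw token never reaches the log sink in clear text.
    logger.warning("Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.payload.signature")

Redacting at the logging layer means a later database misconfiguration exposes far less: the chat content may still leak, but the credentials needed for account takeover do not.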

Security Risks for Users

According to Cybernews researchers who discovered the breach, the exposed data creates multiple attack vectors. Cybercriminals could exploit this information for user behavior monitoring, data theft, and account hijacking.

"Takeovers may result in access to full chat history, access to generated images, or could be abused to illegitimately purchase AI tokens, which could later be used for malicious purposes," the researchers warned.

Growing Pattern of AI Security Failures

This incident reflects broader security challenges in the AI industry. Recent discoveries showed that ChatGPT and Grok conversations were inadvertently surfacing in Google search results, while Expedia's AI chatbot was caught providing dangerous information about creating Molotov cocktails.

The breach underscores the urgent need for better AI safeguards and proper security configurations as these platforms handle increasingly sensitive user data.

Read the full article on SC World