Skip to main content
Mallory
Mallory

AI Chatbot Data Exposure and Institutional Restrictions Driven by Privacy and Security Risk

data exposureinstitutional restrictionschatbotsprivacysecurity rulesdatabase exposureprivate messagesai toolsit outagesios appsmisconfiguration
Updated February 19, 2026 at 07:03 PM2 sources
AI Chatbot Data Exposure and Institutional Restrictions Driven by Privacy and Security Risk

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

A misconfiguration in Firebase exposed nearly 300 million private messages from roughly 25 million users of the AI chatbot app Chat & Ask AI, after the app’s Firebase Security Rules were left publicly accessible. Reporting indicates the exposed data included full chat histories, bot names, and highly sensitive user prompts (including self-harm and potentially unlawful activity discussions); the issue was reported to developer Codeway by a researcher who also claimed to have identified similar inadvertent exposure across 103 other iOS apps, underscoring how common cloud-database misconfigurations remain as AI features are embedded into consumer applications.

Separately, the European Parliament restricted lawmakers’ use of built-in AI tools on work devices, citing cybersecurity and privacy concerns about uploading confidential correspondence to external cloud services and uncertainty over how uploaded data may be stored, reused for model improvement, or accessed under non-EU legal authorities. In healthcare, ECRI Institute researchers warned that AI chatbots represent a leading 2026 health technology hazard due to safety, security, and privacy risks—particularly because many tools are not validated for clinical use—while also highlighting that IT outages (including those caused by cyberattacks) and legacy medical device issues remain major operational and patient-safety threats.

Related Stories

AI Chatbot Security Risks: Prompt Injection Data Exfiltration and Privacy Trade-offs in New Consumer Tiers

AI Chatbot Security Risks: Prompt Injection Data Exfiltration and Privacy Trade-offs in New Consumer Tiers

Researchers disclosed an **indirect prompt injection** technique against **Google Gemini** that used a malicious **Google Calendar invite** to bypass guardrails and exfiltrate private meeting details. By embedding a hidden natural-language payload in an event description, an attacker could cause Gemini—when later asked an innocuous scheduling question—to summarize a user’s private meetings and write that summary into a newly created calendar event; in many enterprise configurations, that new event could be visible to the attacker, enabling data theft without additional user interaction. The issue was reported as remediated after responsible disclosure, underscoring how AI assistants integrated with enterprise SaaS can create new cross-application data-extraction paths. Separately, OpenAI product rollouts raised enterprise data-handling concerns tied to consumer usage. **ChatGPT Go** (a low-cost tier) was described as introducing an **ad-supported** model that could increase exposure of conversation data and usage patterns to advertising ecosystems, amplifying “shadow AI” risk when employees use personal accounts for work. **ChatGPT Health** was positioned as a dedicated health experience with added protections (e.g., encryption/isolation and claims that user data is not used to train foundation models), but reporting highlighted unresolved questions around safety, privacy, and how sensitive health information is protected in practice—areas that may require additional governance and controls if employees adopt these tools outside approved enterprise channels.

1 months ago

Privacy and Security Risks of AI Chatbots and Companion Apps

AI-powered chatbots and companion applications are raising significant privacy and security concerns as their adoption grows, particularly in sensitive contexts such as romantic or adult interactions. Legal experts highlight that recent litigation is testing how federal and state wiretapping and eavesdropping statutes apply to AI chatbots, with uncertainty over whether insurance policies will cover privacy-related claims. The legal landscape is evolving as courts distinguish between data collected by AI chatbots and traditional analytics tools, and organizations face new challenges in defending against claims of unauthorized interception of communications. At the same time, the proliferation of AI companion apps and the introduction of adult-oriented features by major platforms like OpenAI's ChatGPT have led to increased requirements for age and identity verification. This has resulted in the collection and storage of sensitive personal information, such as government-issued IDs, which has already been targeted in several high-profile data breaches. Research indicates that a significant portion of users, including minors, are sharing personal information with these bots, and recent incidents have exposed hundreds of thousands of users' data due to misconfigured systems. These developments underscore the urgent need for robust privacy protections and security controls in the rapidly expanding AI chatbot ecosystem.

3 months ago
AI Chatbots in Healthcare Raise Security and Governance Concerns

AI Chatbots in Healthcare Raise Security and Governance Concerns

The deployment of AI-powered chatbots in healthcare is raising significant concerns among governance analysts and security experts. With the recent launch of ChatGPT Health by OpenAI, users can now connect medical records and wellness apps to receive personalized health guidance, a service reportedly used by over 230 million people weekly. Google has also entered the space through a partnership with health data platform b.well, indicating a trend toward broader adoption of AI-driven health advice. Experts warn that while some AI errors are obvious, others—such as plausible but potentially dangerous recommendations—may go undetected, especially for vulnerable populations. The lack of regulatory oversight and the inherent limitations of large language models, which generate authoritative-sounding responses without true understanding or uncertainty calibration, amplify these risks. Security professionals highlight the concept of "verification asymmetry," where users may be unable to distinguish between accurate and harmful advice generated by AI chatbots. This asymmetry, combined with the probabilistic nature of AI models, means that failures can be subtle and difficult to detect, potentially leading to adverse health outcomes. The rapid integration of AI into healthcare underscores the urgent need for robust governance, transparency, and safety mechanisms to mitigate risks associated with automated medical guidance and the handling of sensitive health data.

2 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.