Mallory

Privacy and Security Risks of AI Chatbots and Companion Apps

Tags: chatbots, companion apps, privacy claims, AI ecosystem, data breaches, AI features, privacy, sensitive data, adult interactions, personal information, security, user data, sensitive contexts, unauthorized interception, identity verification
Updated November 18, 2025 at 11:00 AM · 3 sources

AI-powered chatbots and companion applications are raising significant privacy and security concerns as their adoption grows, particularly in sensitive contexts such as romantic or adult interactions. Legal experts note that recent litigation is testing how federal and state wiretapping and eavesdropping statutes apply to AI chatbots, and that it remains uncertain whether insurance policies will cover the resulting privacy claims. The legal landscape is still evolving as courts distinguish between data collected by AI chatbots and data collected by traditional analytics tools, and organizations face new challenges in defending against claims of unauthorized interception of communications.

At the same time, the proliferation of AI companion apps and the introduction of adult-oriented features by major platforms such as OpenAI's ChatGPT have led to stricter age and identity verification requirements. Platforms are consequently collecting and storing sensitive personal information, such as government-issued IDs, and that data has already been targeted in several high-profile breaches. Research indicates that a significant share of users, including minors, share personal information with these bots, and recent incidents have exposed hundreds of thousands of users' data through misconfigured systems. These developments underscore the urgent need for robust privacy protections and security controls in the rapidly expanding AI chatbot ecosystem.
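
Much of the breach exposure described above comes from retaining verification documents after the check completes. Below is a minimal data-minimization sketch, with a hypothetical `check_id_document` standing in for whatever third-party verifier a platform uses: persist only the verification outcome and a salted digest for audit, and let the raw ID leave memory immediately.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class VerificationRecord:
    """What gets persisted: the outcome and a salted digest, never the raw ID."""
    user_id: str
    is_verified: bool
    doc_digest: str  # salted SHA-256 of the document; not reversible

def check_id_document(image_bytes: bytes) -> bool:
    """Stand-in for a third-party age/identity check (hypothetical, not a real API)."""
    raise NotImplementedError

def verify_and_discard(user_id: str, image_bytes: bytes, salt: bytes) -> VerificationRecord:
    """Run the check, keep only the result; the document itself is never stored."""
    is_verified = check_id_document(image_bytes)
    doc_digest = hashlib.sha256(salt + image_bytes).hexdigest()
    # image_bytes goes out of scope here and is never written to disk, so a
    # later breach of the user database can expose only a boolean and a hash.
    return VerificationRecord(user_id, is_verified, doc_digest)
```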


Related Stories

AI Chatbots in Healthcare Raise Security and Governance Concerns

The deployment of AI-powered chatbots in healthcare is raising significant concerns among governance analysts and security experts. With the recent launch of ChatGPT Health by OpenAI, users can now connect medical records and wellness apps to receive personalized health guidance, a service reportedly used by over 230 million people weekly. Google has also entered the space through a partnership with health data platform b.well, indicating a trend toward broader adoption of AI-driven health advice. Experts warn that while some AI errors are obvious, others—such as plausible but potentially dangerous recommendations—may go undetected, especially for vulnerable populations. The lack of regulatory oversight and the inherent limitations of large language models, which generate authoritative-sounding responses without true understanding or uncertainty calibration, amplify these risks. Security professionals highlight the concept of "verification asymmetry," where users may be unable to distinguish between accurate and harmful advice generated by AI chatbots. This asymmetry, combined with the probabilistic nature of AI models, means that failures can be subtle and difficult to detect, potentially leading to adverse health outcomes. The rapid integration of AI into healthcare underscores the urgent need for robust governance, transparency, and safety mechanisms to mitigate risks associated with automated medical guidance and the handling of sensitive health data.

2 months ago
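
One partial mitigation discussed for the verification asymmetry described above is to make a health assistant abstain when it cannot ground an answer, rather than emit an authoritative-sounding guess. The sketch below is purely illustrative and does not reflect how any named product works: `ask_model` and its confidence signal are assumptions, and poor calibration, as the story notes, is precisely what limits this approach.

```python
def ask_model(question: str) -> tuple[str, float]:
    """Hypothetical LLM call returning (answer, self-reported confidence in [0, 1])."""
    raise NotImplementedError

def guarded_health_answer(question: str, threshold: float = 0.8) -> str:
    """Abstain instead of answering when the model's confidence is low."""
    answer, confidence = ask_model(question)
    if confidence < threshold:
        # Abstention trades coverage for safety; it cannot catch the worst
        # case the article describes, a wrong answer delivered confidently.
        return "I can't answer that reliably. Please consult a clinician."
    return answer + "\n\n(Not medical advice; verify with a professional.)"
```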
AI Chatbot Data Exposure and Institutional Restrictions Driven by Privacy and Security Risk

A misconfiguration in *Firebase* exposed nearly **300 million** private messages from roughly **25 million** users of the AI chatbot app **Chat & Ask AI**, after the app's Firebase `Security Rules` were misconfigured to allow unauthenticated public access to its database. Reporting indicates the exposed data included full chat histories, bot names, and highly sensitive user prompts (including discussions of self-harm and potentially unlawful activity); the issue was reported to developer **Codeway** by a researcher who also claimed to have identified similar inadvertent exposure across **103** other iOS apps, underscoring how common cloud-database misconfigurations remain as AI features are embedded into consumer applications. Separately, the **European Parliament** restricted lawmakers' use of built-in AI tools on work devices, citing cybersecurity and privacy concerns about uploading confidential correspondence to external cloud services and uncertainty over how uploaded data may be stored, reused for model improvement, or accessed under non-EU legal authorities. In healthcare, ECRI Institute researchers warned that **AI chatbots** represent a leading 2026 health technology hazard due to safety, security, and privacy risks, particularly because many tools are not validated for clinical use, while also highlighting that IT outages (including those caused by cyberattacks) and legacy medical device issues remain major operational and patient-safety threats.

3 weeks ago
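
The misconfiguration class behind the Chat & Ask AI exposure is well known: a Firebase Realtime Database whose Security Rules grant public read access will serve its contents over the plain REST API to anyone. Below is a minimal sketch of the kind of check researchers run; the hostname is a placeholder, and such probes should only be run against systems you are authorized to test.

```python
# Checks whether a Firebase Realtime Database allows unauthenticated reads,
# the misconfiguration class behind the Chat & Ask AI exposure.
import urllib.error
import urllib.request

def is_publicly_readable(db_host: str) -> bool:
    # The RTDB REST API serves the root node at /.json; a database whose
    # Security Rules grant ".read": true answers with data, while a
    # locked-down one returns a 401 "Permission denied" error.
    url = f"https://{db_host}/.json?shallow=true"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False

# A safe default ruleset denies all client access:
# { "rules": { ".read": false, ".write": false } }
if __name__ == "__main__":
    print(is_publicly_readable("example-project.firebaseio.com"))  # placeholder host
```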

Privacy Concerns Over AI Training Data and Chatbot Adoption Risks

The rapid adoption of generative AI chatbots such as ChatGPT is transforming both consumer and enterprise environments, with significant growth in usage and market value. These chatbots are being used for applications ranging from customer service to code generation and mental health support. Their increasing prevalence, however, raises concerns about hallucinations and dangerous suggestions, and underscores the need for robust guardrails for safe deployment and use. Simultaneously, privacy concerns have emerged over how major technology companies, such as Google, may use personal data to train AI models. Google recently denied allegations that it analyzes private Gmail content to train its Gemini AI model, following a class action lawsuit and public confusion over changes to Gmail's smart features settings. The company clarified that while smart features have existed for years, Gmail content is not used for AI model training, and that any changes to its terms or policies would be communicated transparently. These developments highlight the ongoing tension between AI innovation, user privacy, and the need for clear communication about data usage.

3 months ago
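
The robust guardrails the story above calls for usually include an output filter between the model and the user. The sketch below is an assumption-laden illustration: the keyword patterns and the hypothetical `generate_reply` stand in for the trained safety classifiers production systems actually use, but it shows where such a gate sits in the request path.

```python
import re

# Toy risk screen; real deployments use trained classifiers, not keyword lists.
RISK_PATTERNS = [
    re.compile(r"\b(dosage|dose)\b.*\b(mg|milligrams)\b", re.IGNORECASE),
    re.compile(r"\bhow to (make|build)\b.*\b(weapon|explosive)\b", re.IGNORECASE),
]

def generate_reply(prompt: str) -> str:
    """Hypothetical call into an LLM backend."""
    raise NotImplementedError

def guarded_reply(prompt: str) -> str:
    reply = generate_reply(prompt)
    if any(p.search(reply) for p in RISK_PATTERNS):
        # Route to a safe fallback instead of returning the raw model output.
        return "This request needs review by a human specialist."
    return reply
```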
