Skip to main content
Mallory
Mallory

AI Assistants Expand Personalization and Data Access, Raising Privacy and Integrity Risks

data accesspersonalizationprivacyai integritytrust degradationai-generated contentdata minimizationsafety reviewtemporary chatcontent trapsgmailgoogle photosai mode
Updated January 26, 2026 at 11:08 AM3 sources
AI Assistants Expand Personalization and Data Access, Raising Privacy and Integrity Risks

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

Google is rolling out AI Mode personalization that can connect Google Search to Gmail and Google Photos for opt-in users, aiming to deliver more tailored results based on personal context. The feature is positioned as “secure” and is initially available via Labs for Google AI Pro and AI Ultra subscribers (with limited account eligibility), with Google stating the system processes data for specific prompts and does not directly train on a user’s inbox or photo library; the change nonetheless increases the amount of sensitive personal data that can be accessed during AI-assisted search workflows.

OpenAI is testing an upgrade to ChatGPT Temporary Chat that keeps the session from being saved to history or used for model improvement, while still allowing personalization signals (e.g., memory/style preferences) to apply—alongside a stated retention window where OpenAI may keep a copy for up to 30 days for safety. Separately, researchers and commentators warned about an “Ouroboros effect” where ChatGPT may cite AI-generated repositories such as xAI’s Grokipedia, increasing the risk of misinformation loops and “content traps” if AI systems do not rigorously vet sources, potentially degrading trust and decision-making even without direct training on the cited content.

Related Entities

Organizations

Affected Products

Related Stories

AI Chatbot Security Risks: Prompt Injection Data Exfiltration and Privacy Trade-offs in New Consumer Tiers

AI Chatbot Security Risks: Prompt Injection Data Exfiltration and Privacy Trade-offs in New Consumer Tiers

Researchers disclosed an **indirect prompt injection** technique against **Google Gemini** that used a malicious **Google Calendar invite** to bypass guardrails and exfiltrate private meeting details. By embedding a hidden natural-language payload in an event description, an attacker could cause Gemini—when later asked an innocuous scheduling question—to summarize a user’s private meetings and write that summary into a newly created calendar event; in many enterprise configurations, that new event could be visible to the attacker, enabling data theft without additional user interaction. The issue was reported as remediated after responsible disclosure, underscoring how AI assistants integrated with enterprise SaaS can create new cross-application data-extraction paths. Separately, OpenAI product rollouts raised enterprise data-handling concerns tied to consumer usage. **ChatGPT Go** (a low-cost tier) was described as introducing an **ad-supported** model that could increase exposure of conversation data and usage patterns to advertising ecosystems, amplifying “shadow AI” risk when employees use personal accounts for work. **ChatGPT Health** was positioned as a dedicated health experience with added protections (e.g., encryption/isolation and claims that user data is not used to train foundation models), but reporting highlighted unresolved questions around safety, privacy, and how sensitive health information is protected in practice—areas that may require additional governance and controls if employees adopt these tools outside approved enterprise channels.

1 months ago

Privacy Concerns Over AI Training Data and Chatbot Adoption Risks

The rapid adoption of generative AI chatbots, such as ChatGPT, is transforming both consumer and enterprise environments, with significant growth in usage and market value. These chatbots are being used for a wide range of applications, from customer service to code generation and mental health support. However, their increasing prevalence raises concerns about risks such as hallucinations, dangerous suggestions, and the need for robust guardrails to ensure safe deployment and use. Simultaneously, privacy concerns have emerged regarding how major technology companies, like Google, may use personal data to train AI models. Google recently denied allegations that it analyzes private Gmail content to train its Gemini AI model, following a class action lawsuit and public confusion over changes in Gmail's smart features settings. The company clarified that while smart features have existed for years, Gmail content is not used for AI model training, and any changes to terms or policies would be communicated transparently. These developments highlight the ongoing tension between AI innovation, user privacy, and the need for clear communication about data usage.

3 months ago
OpenAI Adds ChatGPT Lockdown Mode and Elevated Risk Labels to Reduce Prompt-Injection Exfiltration

OpenAI Adds ChatGPT Lockdown Mode and Elevated Risk Labels to Reduce Prompt-Injection Exfiltration

OpenAI introduced **Lockdown Mode** and **Elevated Risk** labels in *ChatGPT* to reduce exposure to **prompt injection** and related data-exfiltration risks when AI features interact with external systems. Lockdown Mode is positioned as an optional, advanced setting for higher-risk users and environments (notably *ChatGPT Enterprise*, *Edu*, *for Healthcare*, and *for Teachers*) that restricts tool access and limits how ChatGPT can reach outside systems; reported controls include disabling or constraining capabilities attackers could abuse via conversations or connected apps, and limiting browsing so that no live network requests leave OpenAI-controlled infrastructure (with browsing constrained to cached content). Admins can enable the setting via workspace controls and apply additional restrictions through dedicated roles, while Elevated Risk labels provide in-product warnings and guidance for features that increase risk when connecting to apps or the web, including across *ChatGPT*, *ChatGPT Atlas*, and *Codex*. Separate research highlighted how AI assistants with web-browsing/URL-fetching features can be abused as stealthy **command-and-control (C2) relays**, demonstrating a technique against **Microsoft Copilot** and **xAI Grok** that tunnels operator commands and victim data through legitimate AI web interfaces and can work without an API key or registered account. In parallel, the **European Parliament** reportedly disabled built-in AI tools on lawmakers’ work devices due to cybersecurity and privacy concerns about uploading sensitive correspondence to third-party cloud AI providers and uncertainty about what data is shared and retained. Other referenced material focused on general productivity customization of ChatGPT via “Custom Instructions,” rather than a specific security event or disclosure.

3 weeks ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.