Google Chrome Expands Gemini and On-Device AI Features, Including New Controls for Scam Detection Models
Google is testing deeper Gemini integration in Chrome via a new internal feature called “Skills,” which appears to let users define named, instruction-based automations that Gemini can execute inside the browser. The feature is surfaced through a new chrome://skills page and aligns with Google’s stated direction of turning Gemini into a more agent-like assistant capable of acting across tabs and, over time, integrating more tightly with Google services.
Separately, Google has added user controls for the on-device GenAI model that powers Chrome’s Enhanced Protection (Safe Browsing) capabilities, which were previously upgraded with AI for “real-time” detection of dangerous sites, downloads, and potentially malicious extensions. In Chrome Canary, users can disable On-device GenAI under Chrome → Settings → System; disabling the setting also allows the local model to be deleted. Google has indicated the local model may support additional security and browser features beyond scam detection as it rolls out more broadly.
Related Stories
Google Chrome Gemini AI Agent Enhanced to Counter Prompt Injection Attacks
Google has acknowledged the significant risk of prompt injection attacks targeting its Gemini-powered Chrome browsing agent, which can be manipulated to perform unauthorized actions such as initiating financial transactions or exfiltrating sensitive data. In response, Google has introduced a second AI model, termed the 'user alignment critic,' designed to independently vet the agent's proposed actions before execution. This model operates in isolation from untrusted web content, providing an additional layer of defense against both goal hijacking and data leakage. The move comes as prompt injection has been identified as a leading vulnerability in AI systems, with industry bodies like OWASP and the UK's National Cyber Security Centre highlighting its prevalence and difficulty to mitigate due to the structural limitations of large language models. The Gemini-powered browsing agent, currently in preview, is capable of navigating websites, clicking buttons, and filling forms while users are logged into sensitive accounts, increasing the potential impact of successful attacks. Security experts and analysts have emphasized the need for robust safeguards, as malicious instructions can be hidden in web pages, iframes, or user-generated content. Google's dual-model approach aims to address these concerns by ensuring that any action not aligned with the user's intent is blocked, thereby reducing the risk of exploitation through prompt injection. The development reflects a broader industry trend of reassessing the security of AI-driven browsers and the need for advanced countermeasures to protect users and organizations from emerging threats.
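Google has not published implementation details for the “user alignment critic.” As a rough illustration of the pattern described above, the following is a minimal Python sketch: a second check that sees only the user’s stated goal and the agent’s proposed action, never the untrusted page content where injected instructions could hide. All names (`ProposedAction`, `critic_allows`, `SENSITIVE_KINDS`) are hypothetical, and the keyword check stands in for what would actually be a separate model judging intent alignment.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An action the browsing agent wants to perform."""
    kind: str         # e.g. "click", "fill_form", "submit_payment"
    target: str       # element or URL the action touches
    detail: str = ""  # free-text payload, e.g. form contents

# Action kinds a hypothetical critic treats as high risk unless they
# are clearly implied by the user's own instruction.
SENSITIVE_KINDS = {"submit_payment", "send_message", "download"}

def critic_allows(user_goal: str, action: ProposedAction) -> bool:
    """Stand-in for the second model: judges the proposed action
    against the user's stated goal only. Crucially, it is never
    shown the web page content that may carry injected prompts."""
    if action.kind in SENSITIVE_KINDS:
        # A real critic would be an LLM scoring intent alignment;
        # this sketch falls back to a crude keyword check.
        return action.kind.split("_")[0] in user_goal.lower()
    return True

def execute_if_aligned(user_goal: str, action: ProposedAction) -> str:
    """Gate every agent action through the critic before execution."""
    if not critic_allows(user_goal, action):
        return f"blocked: {action.kind} not aligned with user goal"
    return f"executed: {action.kind} on {action.target}"
```

The key design point, as reported, is isolation: a page cannot talk the critic into approving an action, because the critic never reads the page. For example, an injected instruction that makes the agent propose `submit_payment` while the user asked only to compare prices would be blocked at this gate.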
3 months ago
Chrome Gemini Live Panel Hijacking via Malicious Extensions (CVE-2026-0628)
Palo Alto Networks Unit 42 disclosed a **high-severity Google Chrome vulnerability** in the new **Gemini Live in Chrome** side panel, tracked as **CVE-2026-0628**, that could have allowed **malicious browser extensions with only basic permissions** to hijack the Gemini panel and effectively “tap into” the browser environment. The reported impact included **privilege escalation** enabling access to sensitive resources such as the victim’s **camera and microphone**, the ability to **take screenshots of any website**, and access to **local files and directories**. Unit 42 reported responsible disclosure to Google and stated that Google shipped a fix in **early January** ahead of public disclosure. Dark Reading coverage echoed Unit 42’s findings, emphasizing that the flaw highlights emerging risks in **agentic/AI-enabled browsers** where AI side panels run with elevated capabilities, and that enterprise environments face amplified exposure if users install untrusted extensions. Separate reporting described unrelated supply-chain activity affecting developer and browser extensions: Socket reported suspicious, non-repository code added to **Aqua Trivy’s VS Code extension** on **OpenVSX** (versions `1.8.12`/`1.8.13`) that attempted to invoke local AI coding assistants and exfiltrate/report data, while Rescana detailed a **QuickLens Chrome extension** takeover used for credential/crypto theft and a **ClickFix** social-engineering technique; these are distinct incidents from CVE-2026-0628 but reinforce the broader risk of extension ecosystems.
1 week ago
Adversaries Leverage Gemini AI for Self-Modifying Malware and Data Processing Agents
Google's Threat Intelligence Group (GTIG) has identified a significant evolution in cybercriminal and nation-state tactics, with adversaries now leveraging Gemini AI to develop advanced malware and data processing agents. Notably, groups such as APT42 have experimented with Gemini to create a 'Thinking Robot' malware module capable of rewriting its own code during execution to evade detection, as well as AI agents that process and analyze sensitive personal data for surveillance and intelligence gathering. These developments mark a shift from previous uses of AI for productivity, such as phishing and translation, to direct integration of AI into malware operations. The experimental PromptFlux malware dropper exemplifies this trend, utilizing Gemini to dynamically generate obfuscated VBScript variants and periodically update its code to bypass antivirus defenses. PromptFlux attempts persistence via Startup folder entries and spreads through removable drives and network shares, while its 'Thinking Robot' module queries Gemini for new evasion techniques. Although PromptFlux is still in early development and not yet capable of causing significant harm, Google has proactively disabled its access to the Gemini API. Other AI-powered malware, such as FruitShell, has also been observed, indicating a broader move toward AI-driven, self-modifying threats in the wild.
4 months ago