Mobile App and Malware Risks: AI Chat Data Exposure and New Android/iOS Spyware
The Chat & Ask AI mobile app reportedly exposed a large volume of private user conversations through an insecurely configured Google Firebase backend: the configuration effectively let anyone present themselves as an “authenticated” user and read the data. Reporting indicates the exposed dataset included roughly 300 million messages tied to more than 25 million users, with logs containing full chat histories, timestamps, user-defined AI companion names, and model/configuration details (e.g., use of ChatGPT, Claude, or Gemini via the wrapper app). Sampled content included highly sensitive and potentially harmful queries (e.g., self-harm, illegal drug manufacturing, and hacking), creating significant privacy and safety risk even though the underlying third-party AI model providers were not reported as compromised.
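The reporting does not reproduce the app's actual Firebase configuration, but the failure mode described — any client being accepted as "authenticated" — typically stems from permissive Realtime Database security rules. A minimal illustrative contrast (hypothetical rules, not the app's real ones):

```json
// INSECURE (illustrative): every request, even unauthenticated, can read and
// write the entire database — the class of misconfiguration described above.
{
  "rules": {
    ".read": true,
    ".write": true
  }
}

// SAFER (illustrative): each signed-in user may only access their own subtree.
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

Note that even rules requiring `auth != null` can be effectively open when the app allows anonymous sign-in, since any visitor can then obtain an "authenticated" identity.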
Separately, researchers described two emerging mobile malware threats: ZeroDayRAT, a commercially advertised spyware platform sold via Telegram that claims broad Android/iOS support and provides operators with extensive device telemetry and hands-on capabilities (e.g., notification/SMS capture, OTP interception for 2FA bypass, keylogging, camera/microphone activation, GPS tracking, and crypto theft modules); and GhostChat, an Android malware family distributed as trojanized APKs mimicking popular chat apps (including WhatsApp) that injects into app processes to intercept messages, steal credentials, and exfiltrate contacts/media. An unrelated item reported that AI.com’s Super Bowl-driven traffic surge caused a service outage attributed to Google rate limiting, but it did not describe a security incident or compromise.
Related Stories

Mobile malware and phishing campaigns abuse AI branding and Android tooling to steal credentials and surveil victims
Multiple mobile-focused threats were reported, spanning **Android banking malware**, **iOS credential-harvesting via App Store listings**, and **Android espionage via trojanized crisis apps**. A new Android banking trojan marketed as **Mirax Bot** was advertised on underground forums as a **Malware-as-a-Service (MaaS)** offering, with claimed capabilities including **700+ app injects**, **Hidden VNC (HVNC)** for stealthy remote control, and features positioned for **account takeover (ATO)** and large-scale financial fraud; researchers noted the feature list is based on seller claims and not yet independently verified. Separately, researchers described **PromptSpy**, characterized as an Android threat that uses **generative-AI techniques** to improve phishing and fraud by generating more convincing social-engineering content and automating deceptive interactions on-device. In parallel, a phishing operation targeted iPhone users by impersonating **ChatGPT** and **Google Gemini** in emails that directed victims to **fraudulent iOS apps hosted on Apple’s App Store**; the apps (including *GeminiAI Advertising* `id6759005662` and *Ads GPT* `id6759514534`) presented a fake **Facebook login** flow to harvest credentials. Another campaign, **RedAlert**, weaponized a trojanized version of Israel’s “Red Alert” emergency app distributed as `RedAlert.apk` via **SMS phishing (smishing)**, pushing victims to sideload the APK; analysis reported that the app mimicked the legitimate interface while requesting high-risk permissions (e.g., **SMS**, contacts, precise **GPS**) consistent with covert surveillance and data theft. A separate Kaspersky post focused on consumer guidance for disabling AI assistants and broader privacy concerns, and did not materially add incident-specific threat intelligence to the mobile malware/phishing reporting.
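The analysis lists the high-risk permissions the trojanized RedAlert app requested but not its manifest; a hypothetical `AndroidManifest.xml` fragment matching that description might look like the following (the package name and exact permission set are illustrative assumptions, not recovered artifacts):

```xml
<!-- Illustrative only: the kind of permission set reported for the trojanized
     RedAlert.apk. A legitimate alert app has no need for SMS or call-log access. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.redalert.fake">  <!-- hypothetical package name -->
    <uses-permission android:name="android.permission.READ_SMS"/>
    <uses-permission android:name="android.permission.RECEIVE_SMS"/>
    <uses-permission android:name="android.permission.READ_CONTACTS"/>
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
    <uses-permission android:name="android.permission.READ_CALL_LOG"/>
</manifest>
```

Reviewing a sideloaded APK's requested permissions against its claimed purpose is one of the few checks available to end users, since smishing-delivered APKs bypass Google Play's review entirely.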
1 week ago
AI Chatbot Data Exposure and Institutional Restrictions Driven by Privacy and Security Risk
A misconfiguration in *Firebase* exposed nearly **300 million** private messages from roughly **25 million** users of the AI chatbot app **Chat & Ask AI**, after the app’s Firebase security rules were left so permissive that the database was publicly accessible. Reporting indicates the exposed data included full chat histories, bot names, and highly sensitive user prompts (including discussions of self-harm and potentially unlawful activity); the issue was reported to developer **Codeway** by a researcher who also claimed to have identified similar inadvertent exposure across **103** other iOS apps, underscoring how common cloud-database misconfigurations remain as AI features are embedded into consumer applications. Separately, the **European Parliament** restricted lawmakers’ use of built-in AI tools on work devices, citing cybersecurity and privacy concerns about uploading confidential correspondence to external cloud services and uncertainty over how uploaded data may be stored, reused for model improvement, or accessed under non-EU legal authorities. In healthcare, ECRI Institute researchers warned that **AI chatbots** represent a leading 2026 health technology hazard due to safety, security, and privacy risks—particularly because many tools are not validated for clinical use—while also highlighting that IT outages (including those caused by cyberattacks) and legacy medical device issues remain major operational and patient-safety threats.
3 weeks ago
Android Malware and Spyware Campaigns Using Trusted Platforms and Social Engineering Lures
Two separate Android-focused threat operations were reported, both relying on social engineering to drive manual installation of malicious apps. Bitdefender documented a campaign that abuses **Hugging Face** as a trusted hosting/CDN distribution point for an Android credential-stealing payload targeting popular financial and payment services. Victims are lured into installing a dropper app named **TrustBastion** via scareware-style ads; after installation it displays a fake Google Play “mandatory update” flow, then contacts infrastructure associated with `trustbastion[.]com`, which redirects to a Hugging Face dataset repository hosting the final APK. The actor used **server-side polymorphism** to generate new payload variants roughly every 15 minutes, resulting in thousands of variants and rapid repository churn (reported as >6,000 commits over ~29 days); after takedown, the operation reportedly resurfaced under a new name (“**Premium Club**”) with refreshed branding. ESET separately identified an Android spyware campaign tracked as **GhostChat** that uses **romance-scam** tactics to target individuals in Pakistan. The malicious app is disguised as a chat/dating service but primarily functions as a surveillance tool; it presents “locked” female profiles with passcodes (hardcoded in the app) to create a sense of exclusivity, then routes victims into WhatsApp chats tied to Pakistani numbers likely controlled by the operator. The app was distributed via unofficial sources (not Google Play) and is blocked by Google Play Protect by default; ESET also linked the same actor to a broader surveillance effort including a **ClickFix** compromise chain and a WhatsApp device-linking attack, using websites impersonating Pakistani government organizations as lures.
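The "server-side polymorphism" noted above is worth unpacking: the server mutates each served payload (for example by repacking it or appending junk bytes) so every download has a unique file hash even though the malicious logic is identical, which defeats blocklists keyed on full-file hashes. A minimal sketch of the idea, with a stand-in byte string rather than a real APK:

```python
import hashlib
import os

# Stand-in for the invariant malicious logic inside each served APK.
CORE_PAYLOAD = b"...identical malicious logic..."

def serve_polymorphic_variant(core: bytes) -> bytes:
    """Simulate a server emitting a fresh variant per request by
    appending a random tail, which changes the whole-file hash."""
    return core + b"\x00JUNK" + os.urandom(16)

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

v1 = serve_polymorphic_variant(CORE_PAYLOAD)
v2 = serve_polymorphic_variant(CORE_PAYLOAD)

# Whole-file hashes differ, so a hash blocklist built from v1 misses v2...
assert sha256(v1) != sha256(v2)
# ...while hashing only the invariant core still matches both downloads,
# which is why defenders favor structural/behavioral signatures over file hashes.
assert sha256(v1[: len(CORE_PAYLOAD)]) == sha256(v2[: len(CORE_PAYLOAD)])
```

This also explains the reported repository churn: a new variant roughly every 15 minutes over ~29 days is consistent with the thousands of commits observed.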
1 month ago