Mallory

Microsoft Warns Threat Actors Are Using Generative AI to Scale Cyberattacks and North Korean Fake Worker Schemes

generative ai, malware development, fake workers, microsoft threat intelligence, social engineering, deepfakes, job scams, phishing, north korea, media manipulation, remote work, persona generation, impersonation, upwork, faceswap
Updated March 7, 2026 at 05:07 PM · 2 sources
Microsoft Threat Intelligence reported that threat actors are increasingly using generative AI as a “force multiplier” across the cyberattack lifecycle—speeding up reconnaissance, phishing and social engineering, infrastructure setup, malware development/debugging, and post-compromise tasks such as summarizing stolen data and assisting with scripting. The report emphasizes that most observed malicious AI use today centers on language models for producing text, code, and media, reducing technical friction while human operators retain control over targeting and execution.

Microsoft highlighted North Korean activity as a prominent example, stating that groups it tracks as Jasper Sleet (Storm-0287), Coral Sleet (Storm-1877), and Sapphire Sleet are using AI to scale “fake remote worker” operations by rapidly generating realistic personas (names, resumes, communications) tailored to specific job markets and roles. Reported tactics include researching job postings (e.g., on Upwork) to align fabricated profiles with in-demand skills; generating multilingual lures that mimic internal corporate communications; and using AI-enabled media manipulation such as Faceswap to insert operatives’ faces into stolen identity documents. Microsoft also noted AI-driven impersonation and real-time voice modulation used to improve social engineering and sustain access.
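Defenders triaging inbound remote-hire applications often start with cheap identity-consistency checks before deeper vetting. The sketch below is illustrative only — the flag names and free-mail list are assumptions, not anything from Microsoft's report. It flags applicants whose claimed name never appears in their mailbox name, or who apply from free-mail domains:

```python
import re

# Illustrative free-mail list; a real control would use a maintained feed.
FREEMAIL = {"gmail.com", "outlook.com", "proton.me", "yahoo.com"}

def persona_risk_flags(display_name: str, email: str) -> list[str]:
    """Return simple triage flags for a remote-hire applicant identity.

    Rough heuristics only; real vetting combines many more signals
    (document forensics, liveness checks, employment verification).
    """
    flags = []
    local, _, domain = email.lower().partition("@")
    if domain in FREEMAIL:
        flags.append("freemail-domain")
    # Does any substantial token of the claimed name appear in the mailbox?
    name_tokens = re.findall(r"[a-z]+", display_name.lower())
    if name_tokens and not any(t in local for t in name_tokens if len(t) > 2):
        flags.append("name-mailbox-mismatch")
    return flags
```

Either flag alone is common for legitimate applicants; the point is cheap prioritization of which identities get a closer look, not a verdict.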

Related Stories

North Korean Threat Actors Use AI-Enabled IT Worker Scams and Target Crypto Firms

Microsoft-linked reporting says **North Korean threat actors** are using **AI** to scale and refine long-running “fake IT worker” schemes, where operatives pose as legitimate remote hires to obtain *authorized* access inside victim organizations. The activity is attributed to DPRK-linked clusters **Jasper Sleet** and **Coral Sleet**, with AI used to improve identity fabrication and maintenance (including face/voice manipulation) and to sustain day-to-day communications that keep fraudulent personas credible, enabling “sustained, large-scale misuse of legitimate access.”

Separately, reporting on suspected DPRK-linked intrusions describes a coordinated campaign against **cryptocurrency organizations** spanning staking platforms, exchange software providers, and exchanges, with theft of **source code, private keys, and cloud secrets**. Investigators described two primary access paths: exploitation of `CVE-2025-55182` in the *React2Shell* framework (including mass scanning and WAF-bypass techniques), and the use of **pre-obtained valid AWS access tokens** to move directly into cloud enumeration. Researchers also recovered artifacts from attacker infrastructure (e.g., shell history, archived code, and tool configurations) that provided visibility into post-compromise activity and C2 setup.

1 week ago
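The reported use of valid AWS tokens to move straight into cloud enumeration suggests one defensive angle: hunting CloudTrail for enumeration-style API calls originating outside known egress ranges. A minimal sketch, assuming simplified event dicts and a hypothetical trusted range — real detections would use full CloudTrail records and your own network inventory:

```python
from ipaddress import ip_address, ip_network

# Hypothetical corporate egress range; real values come from network inventory.
TRUSTED_NETS = [ip_network("203.0.113.0/24")]

# API-name prefixes typical of discovery/enumeration activity.
ENUM_PREFIXES = ("List", "Describe", "GetSecretValue")

def flag_token_misuse(events: list[dict]) -> list[str]:
    """Flag CloudTrail-style records that look like cloud enumeration
    from outside trusted ranges — a rough hunting sketch for stolen but
    still-valid access tokens, not a production detection."""
    hits = []
    for ev in events:
        off_net = not any(ip_address(ev["sourceIPAddress"]) in net
                          for net in TRUSTED_NETS)
        if off_net and ev["eventName"].startswith(ENUM_PREFIXES):
            hits.append(ev["eventName"])
    return hits
```

In practice this kind of rule is noisy on its own (VPNs, remote staff) and would be combined with token age, user-agent, and velocity signals.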
AI-Enabled Phishing and Malware Delivery Trends

Security researchers and industry commentary describe a broader rise in **AI-assisted cybercrime**, with attackers using generative AI to improve phishing lures, clone legitimate login pages, and scale social-engineering operations. Reporting highlights that phishing remains a leading initial access vector, while **phishing-as-a-service** and AI-generated content are making campaigns more convincing and easier to produce at volume. IBM similarly warns that AI is acting as a force multiplier for attackers, lowering the cost of malware development and enabling more disposable, harder-to-attribute malicious tooling. Kaspersky documented active campaigns in which threat actors used **Google Search ads** and fake documentation pages to distribute the **AMOS** infostealer on macOS and **Amatera** on Windows, disguising the malware as popular AI tools including **OpenClaw**, **Claude Code**, and **Doubao**. ZDNET's related coverage focuses on business and product-security shortcomings around the Moltbook and OpenClaw acquisitions rather than a specific threat campaign, making it adjacent to, but not part of, the same security event.

4 days ago
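Campaigns that disguise malware as popular tools typically lean on lookalike domains in ads and fake documentation pages. One common triage heuristic — sketched here with an illustrative brand list, not any vendor's actual detection — is flagging domains within a small edit distance of protected names:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Illustrative names to protect; not taken from the reporting.
BRANDS = ["claude.ai", "openai.com"]

def lookalike(domain: str, max_dist: int = 2) -> bool:
    """True if `domain` is suspiciously close to (but not exactly) a brand."""
    return any(0 < edit_distance(domain, b) <= max_dist for b in BRANDS)
```

Edit distance catches simple typosquats (`c1aude.ai`); homoglyph and combosquat detection (`claude-download.ai`) needs additional, purpose-built checks.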

Surge in AI-Driven Cybercrime and Fraud Tactics

Cybercriminals are increasingly leveraging generative AI and large language models (LLMs) to enhance the sophistication, scale, and impact of their attacks. Reports highlight a dramatic rise in advanced phishing, digital fraud, and malware development, with AI enabling attackers to automate social engineering, generate convincing fake identities, and bypass traditional security controls. The use of AI has led to a significant increase in phishing email volume and a 180% surge in advanced fraud attacks, as criminals deploy autonomous bots and deepfake technologies to evade detection and inflict greater damage. Security researchers have observed malware authors integrating LLMs directly into their tools, allowing malicious code to rewrite itself or generate new commands at runtime, further complicating detection efforts. These developments mark a shift from low-effort, opportunistic attacks to highly engineered campaigns that require more resources to execute but yield far greater impact. The rapid adoption of AI by threat actors underscores the urgent need for organizations to reassess their defenses and adapt to the evolving threat landscape.

3 months ago
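Payloads that rewrite themselves at runtime are hard to signature, so defenders often fall back on coarse statistical signals. A classic example is Shannon byte entropy — near 8 bits/byte for packed or encrypted blobs, much lower for plain code or text. A minimal sketch; the 7.2-bit cutoff is an arbitrary illustration:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ~8.0 for uniform random data, far lower for text."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    """Arbitrary illustrative cutoff; real triage combines many signals."""
    return shannon_entropy(data) > threshold
```

High entropy alone proves nothing (compressed images score high too); it is one cheap feature among many in static triage pipelines.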

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.