This Week's Pattern: Forgotten Access.
ShinyHunters hit Zara this week using an API key Inditex had issued to a vendor — and never revoked. The vendor relationship ended 11 months ago. The credential didn't. 197,000 customer records walked out the front door. Compromise the vendor, ride the trust nobody remembered to cut, walk out with the data.
Two weeks ago this same gang took 275 million records out of Instructure Canvas the same way. Different victim. Same playbook.
That pattern repeats across this week's incidents. A vulnerability class named TrustFall turns out to affect every AI coding assistant that auto-executes code — Claude Code, Cursor CLI, Gemini CLI, GitHub Copilot CLI. A malicious npm hook quietly siphons OAuth tokens through your Claude Code MCP integrations. NVIDIA's own AI agent sandbox is shown leaking data through legitimate-looking model calls. The trust boundary you drew at your firewall, your SSO, your sandbox, your CI runner — attackers are no longer trying to break those boundaries. They're walking around them through whichever vendor, agent, or assistant you forgot to govern.
Here's what happened, why it matters, and what RuntimeAI enforces against each class.
Major Breach
1 — Zara: 197,000 Customers Leaked via Anodot 3rd-Party SaaS Token Compromise
Inditex — parent of Zara — disclosed unauthorized access to a database held by a former technology provider. The compromised provider was Anodot, an analytics platform whose authentication tokens were still valid in Inditex's environment after the relationship ended. ShinyHunters used those stale tokens to exfiltrate 197,400 customer records — email addresses, order IDs, product SKUs, geographic locations, purchase history, and customer support tickets. The data was listed on the gang's dark web leak portal with an April 21 deadline. Vimeo, Rockstar Games, and McGraw Hill were hit by the same campaign.
This is the same ShinyHunters extortion arc we covered two weeks ago when they took 275M Canvas records — same gang, same broad pattern, different attack surface. Canvas was a SaaS compromise. Zara is a vendor compromise — a stale token from a relationship that ended, never rotated, still trusted. The thing every CISO will recognize: nobody knew the token was still valid until 197K customer records were on a leak site.
The data set on its own looks moderate — no passwords, no card numbers, no addresses. But email + purchase history + geography is the exact bait kit for highly targeted spear-phishing. Recipients will receive "your order #1234567 shipped" messages with matching SKUs and city names. The actual loss isn't this breach; it's the next round of phishing seeded from this breach.
Zero Trust · Defence in Depth
When the vendor you trusted three years ago still has live tokens against your data, the question is what those tokens can reach. RuntimeAI assumes vendor tokens leak — and constrains what they can do:
- Discovery — Non-Human Identity inventory: RuntimeAI's NHI Security continuously inventories every machine token, API key, service principal, and OAuth grant in your environment — including dormant ones from offboarded vendors. The Anodot integration would have shown up in the inventory with "last used 11 months ago, scope = full read on customer DB" — and an automatic decommission policy would have killed it long before ShinyHunters did.
- Behavioural enforcement — KYA scope policies: Know Your Agent (KYA) issues every machine identity a scoped credential bound to the originating workload. A vendor-side token attempting full-table reads from outside its declared scope is rejected at the data plane — before the first 1,000 records are exfiltrated.
- Flow / egress control — bulk-read circuit breaker: The Flow Enforcer baselines normal read rates per identity. A previously dormant token suddenly pulling 197K records triggers the kill switch — the session is quarantined within seconds and the responsible identity is reported to the SOC.
- Immutable audit trail — PQ-signed forensics: Every action the Anodot token took is recorded in the Audit Black Box with quantum-resistant signatures. When regulators ask "exactly which records left and when," the answer is cryptographically provable in court.
The vendor relationship ended. The trust didn't. RuntimeAI catches the forgotten access before it becomes the next leak.
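The product interfaces aren't shown in this digest, but the discovery control above reduces to an invariant you can sketch in a few lines: every machine credential carries an owner, a scope, and a last-used timestamp, and anything dormant past a threshold, or tied to an offboarded vendor, gets flagged for revocation. A minimal illustration, with a hypothetical in-memory inventory standing in for a real NHI catalog:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory records; a real NHI catalog would be populated
# from cloud IAM, SaaS admin APIs, and secrets managers.
TOKENS = [
    {"id": "anodot-prod-read", "owner": "vendor:anodot", "scope": "customer_db:read",
     "last_used": datetime(2024, 5, 2, tzinfo=timezone.utc)},
    {"id": "ci-deploy", "owner": "team:platform", "scope": "k8s:deploy",
     "last_used": datetime(2025, 3, 28, tzinfo=timezone.utc)},
]

STALE_AFTER = timedelta(days=90)          # decommission policy threshold (illustrative)
OFFBOARDED_VENDORS = {"vendor:anodot"}    # relationships that have ended

def stale_or_orphaned(tokens, now=None):
    """Yield credentials that should be revoked: unused past the threshold,
    or owned by a vendor that is no longer under contract."""
    now = now or datetime.now(timezone.utc)
    for t in tokens:
        dormant = now - t["last_used"] > STALE_AFTER
        orphaned = t["owner"] in OFFBOARDED_VENDORS
        if dormant or orphaned:
            yield t["id"], ("orphaned" if orphaned else "dormant"), t["scope"]

for token_id, reason, scope in stale_or_orphaned(TOKENS):
    print(f"REVOKE {token_id}: {reason}, scope={scope}")
```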
AI Runtime Exploits
2 — TrustFall: Code Execution Risk in Claude Code, Cursor, Gemini CLI, Copilot CLI
Researchers demonstrated that malicious repositories can trigger code execution inside Claude Code, Cursor CLI, Gemini CLI, and GitHub Copilot CLI with minimal or no user interaction — the result of permissive, convention-based trust assumptions shared by every major AI coding assistant. The vulnerability class was named TrustFall. An attacker who controls a repo a developer clones, or a dependency a developer's repo pulls, can execute arbitrary code inside the developer's IDE-adjacent AI agent.
This is the AI-coding equivalent of a CVE that affects every shell you use. The "AI coder reads my repo" trust assumption is the same one Linux made about .bashrc twenty-five years ago — and the same one IDEs eventually learned to gate. AI coding tools are reproducing those mistakes at speed, because their value comes from operating without friction. TrustFall puts that friction-free model directly in conflict with code-execution safety.
If you have any Claude Code, Cursor, Gemini CLI, or Copilot CLI user in your engineering org — and you probably have hundreds — every clone they perform is now a potential RCE vector with a developer-laptop blast radius (and from there, into your source code, CI tokens, and cloud credentials).
How RuntimeAI Stops This
The TrustFall class is not patchable in a single CVE — the convention assumptions are baked into how AI coders work. The defence is to assume the agent will execute hostile input, and contain what it can reach:
- Discovery — every AI assistant inventoried: RuntimeAI's NHI Security and AI agent inventory catalog every AI coding tool installed in the org, with version, scope, and per-developer count. A TrustFall-affected version can be flagged at fleet level the day the disclosure lands — without waiting for endpoint MDM to catch up.
- Behavioural enforcement — agent scope policies: Claude Code, Cursor, Gemini, and Copilot CLI are wrapped by Flow Enforcer policies that constrain what shells, networks, and credential stores they can reach. Even a fully compromised AI coder can't read ~/.aws/credentials, can't call internal APIs without a KYA token, and can't egress to attacker infrastructure.
- Flow / egress control — outbound destination policy: Egress from any AI coding tool is gated to approved destinations (the model provider, the allowlisted package registry). Outbound calls to a TrustFall command-and-control server are blocked at the workload boundary, not the perimeter.
- Immutable audit trail — every agent action logged: The Audit Black Box records every shell command, file access, and outbound call performed by every AI coder per developer. When TrustFall lands in your environment, you have the forensic record to answer "what did the assistant touch" within minutes, not weeks.
You don't have to choose between AI-coder productivity and AI-coder safety. You enforce a runtime perimeter around the agent itself.
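To make the containment idea concrete: whatever the assistant tries to read or reach is checked against a denylist of credential paths and an allowlist of outbound hosts before the action runs. A simplified sketch; the specific paths and hosts are illustrative assumptions, not RuntimeAI's actual policy defaults:

```python
from urllib.parse import urlparse

# Paths an AI coding assistant should never touch, and the only hosts it
# may reach (model provider, package registry). Both lists are assumptions
# chosen for illustration.
BLOCKED_PATH_FRAGMENTS = [".aws/credentials", ".ssh/", ".npmrc", ".config/gcloud"]
ALLOWED_HOSTS = {"api.anthropic.com", "registry.npmjs.org"}

def allow_file_access(path: str) -> bool:
    """Deny reads of known credential stores from the agent's process."""
    return not any(fragment in path for fragment in BLOCKED_PATH_FRAGMENTS)

def allow_egress(url: str) -> bool:
    """Permit outbound calls only to explicitly approved destinations."""
    return urlparse(url).hostname in ALLOWED_HOSTS

assert allow_file_access("/repo/src/main.py")
assert not allow_file_access("/home/dev/.aws/credentials")
assert allow_egress("https://registry.npmjs.org/react")
assert not allow_egress("https://trustfall-c2.example.net/beacon")  # hypothetical C2 host
```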
3 — The Gentlemen RaaS Gang Doxxed via Their Own Leaked Data
The Gentlemen — a Ransomware-as-a-Service operator — had their own affiliate dashboard misconfigured and indexed by a search engine. Researchers obtained operator identities, payout flows, victim lists, and chat logs. The same group that built a business around leaking other people's data found itself on the wrong end of an exposure.
This is the cybercrime equivalent of an HR-systems vendor getting their own HR system breached. The lesson isn't schadenfreude (well, not only schadenfreude). The lesson is that the people running the most disciplined offensive operations also get their cloud-misconfig fundamentals wrong. If The Gentlemen can leak via a public dashboard, every enterprise can.
Why RuntimeAI Customers Are Protected
The Gentlemen story is a reminder that misconfigured admin surfaces are the most consistent source of breach — even for sophisticated operators. RuntimeAI assumes you will misconfigure something:
- Discovery — public-exposure scanner: Cloud Security continuously scans your perimeter for accidentally public dashboards, S3 buckets, object stores, and admin endpoints. A Gentlemen-style "admin panel indexed by Google" is found by the first hourly scan, not by the first attacker.
- Behavioural enforcement — anomalous access patterns: Even if the dashboard is briefly public, KYA + Flow Enforcer require valid agent identity for any state-changing request. An anonymous browser hitting the admin URL gets a 401, not the dashboard.
- Flow / egress control — credential-store gating: Sensitive admin paths (operator lists, payout flows, chat archives) sit behind PII Shield + tokenization. Even if the page renders, the data inside is opaque without the KYA-authorized session.
- Immutable audit trail — exposure-time evidence: The Audit Black Box records the moment the misconfig happened, who deployed it, and every external access attempt while it was live. Faster remediation, cleaner post-mortem.
You don't have to be perfect. You have to make the inevitable misconfiguration small and brief.
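The exposure scanner described above amounts to repeatedly asking, from outside your own perimeter, "does this admin surface answer without credentials?" A minimal sketch using only the Python standard library; the endpoints are hypothetical:

```python
import urllib.request
import urllib.error

# Hypothetical admin surfaces to probe from outside the perimeter.
CANDIDATE_URLS = [
    "https://dashboard.example.internal/admin",
    "https://affiliates.example.internal/panel",
]

def is_publicly_readable(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL serves content to an unauthenticated client.
    Anything other than an auth challenge or a connection failure is exposure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        return err.code not in (401, 403)   # 401/403 means auth is being enforced
    except (urllib.error.URLError, OSError):
        return False                        # unreachable from the outside

for url in CANDIDATE_URLS:
    if is_publicly_readable(url):
        print(f"EXPOSED: {url} answers without authentication")
```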
Agent Identity & Supply Chain
4 — Claude Code MCP Attack: Persistent OAuth Token Theft via npm Hooks
Researchers demonstrated a Claude Code attack that steals OAuth tokens through malicious MCP integrations and npm install hooks. The attack chain: developer installs an npm package; the package's postinstall hook adds a malicious MCP server registration to Claude Code's config; subsequent Claude Code sessions silently exfiltrate the developer's OAuth tokens via the rogue MCP server. The compromise persists across sessions and survives Claude Code restarts.
MCP is the integration protocol making AI assistants powerful — and the same protocol turning every AI assistant into a high-trust orchestration layer for whatever you let it connect to. When the connection itself is compromised, the AI's authority becomes the attacker's authority.
This is structurally identical to story #1, the Zara breach: forgotten access gets exploited. There, the forgotten access was a former vendor's API token. Here, it's an MCP server the developer added once and forgot. Same primitive, different layer.
Zero Trust, Layer by Layer
MCP token theft requires four collaborating controls — discovery, identity, scope, and audit. Miss any one and the attack succeeds:
- Discovery — every MCP server inventoried: RuntimeAI's MCP Gateway catalogs every MCP server registered across the organization, with publisher, version, declared capabilities, and per-developer instance count. A previously unknown MCP server appearing in someone's config triggers an alert before the first session uses it.
- Behavioural enforcement — Bot-CA-issued MCP identities: MCP servers must present a Bot-CA-issued certificate to be accepted by Claude Code through the RuntimeAI MCP Gateway. A rogue MCP server installed by an npm hook has no Bot-CA cert — Claude Code refuses to route requests to it.
- Flow / egress control — OAuth scope minimization: KYA's scoped credentials cap what any MCP server can read or do, regardless of the OAuth token presented. Even if the attacker captures the token, the scope binding limits the blast radius to the original declared purpose.
- Immutable audit trail — MCP-level call recording: Every MCP call is logged with the calling agent, the target server, the parameters, and the response. A rogue server's exfiltration traffic shows up as a unique call pattern — recoverable from the audit log even if the malicious server is later removed.
The npm supply chain is the developer's surface area. The MCP gateway is the AI's surface area. Both need a verified identity at the boundary.
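At its core, the MCP discovery control is a diff: the set of MCP servers an assistant's config actually registers versus the set someone approved. A simplified sketch; the config path, config key, and approved list are assumptions for illustration:

```python
import json
from pathlib import Path

# Assumed location and key for an MCP server registration file; adjust to
# wherever your AI assistant actually stores its MCP config.
MCP_CONFIG = Path.home() / ".claude" / "mcp_servers.json"

# Servers that went through review (publisher verified, capabilities declared).
APPROVED_SERVERS = {"github", "jira", "internal-docs"}

def unapproved_mcp_servers(config_path: Path) -> set[str]:
    """Return MCP server names present in the config but never approved,
    e.g. one silently added by a malicious npm postinstall hook."""
    if not config_path.exists():
        return set()
    config = json.loads(config_path.read_text())
    registered = set(config.get("mcpServers", {}))
    return registered - APPROVED_SERVERS

for name in sorted(unapproved_mcp_servers(MCP_CONFIG)):
    print(f"ALERT: unreviewed MCP server registered: {name}")
```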
Major Breach
5 — OpenLoop Health: 716,000 Patient Records Breached
OpenLoop Health — a telehealth services provider — disclosed a data breach affecting 716,000 patient records. The breach exposed combinations of patient names, dates of birth, addresses, contact information, and clinical-encounter data. OpenLoop has not publicly described the attack vector, but the breach notification confirms unauthorized access to backend systems holding HIPAA-protected data.
Telehealth is where AI agents are scaling fastest in clinical workflows — symptom triage, scheduling, prescription routing — and the AI's data plane is the same backend storing PHI. A breach at the platform layer doesn't just leak records; it potentially exposes the queries, embeddings, and conversation transcripts that AI triage agents use to make decisions.
Zero Trust · Defence in Depth
For healthcare specifically, the regulatory consequence is as damaging as the breach itself. RuntimeAI controls reduce both:
- Discovery — PHI surface map: PII Shield + Memory Vault inventory every data store holding patient identifiers, encounter notes, or model embeddings derived from PHI. The OpenLoop attacker scenario can't pivot from "got into backend" to "knows where the PHI sits" — that surface is mapped and tokenized.
- Behavioural enforcement — PHI access via KYA scoped credentials: Every database read against a PHI table requires a KYA-issued scoped credential bound to the calling workload (triage agent, scheduling service, clinician portal). Bulk-export queries fail the scope check and never execute.
- Flow / egress control — egress destination policy: Moving 716K records requires substantial egress. RuntimeAI's Cloud Security egress policies require approved destinations for any volume movement and rate-limit unbounded exports — the data never reaches attacker infrastructure even if the read succeeds.
- Immutable audit trail — HIPAA-grade evidence: Audit Black Box produces tamper-evident, PQ-signed records of every PHI access. HIPAA breach notification timelines, regulator response, and patient-disclosure scope all benefit from forensic-grade audit data.
Telehealth is going to keep getting breached. RuntimeAI ensures the breach scope is bounded — not unbounded.
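The scoped-credential and rate-gate controls above combine into a single check at the data access layer: is the table inside the caller's declared scope, and does the requested volume fit its per-session quota? A hypothetical gate with made-up scopes:

```python
from dataclasses import dataclass, field

@dataclass
class AgentScope:
    """Hypothetical KYA-style scope: which PHI tables an agent may read
    and how many rows it may pull in one session."""
    allowed_tables: set
    max_rows_per_session: int
    rows_read: int = field(default=0)

def authorize_read(scope: AgentScope, table: str, requested_rows: int) -> bool:
    """Allow the read only if the table is in scope and the quota holds."""
    if table not in scope.allowed_tables:
        return False
    if scope.rows_read + requested_rows > scope.max_rows_per_session:
        return False
    scope.rows_read += requested_rows
    return True

triage_agent = AgentScope(allowed_tables={"encounters"}, max_rows_per_session=50)

assert authorize_read(triage_agent, "encounters", 5)            # normal triage lookup
assert not authorize_read(triage_agent, "patients", 1)          # out of declared scope
assert not authorize_read(triage_agent, "encounters", 716_000)  # bulk export blocked
```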
AI Runtime Exploits
6 — NVIDIA NemoClaw: AI Sandbox Exfiltration Research
Researchers demonstrated that attackers can steal data from AI agents running inside NVIDIA NemoClaw sandbox environments. The technique abuses legitimate-looking model calls and tool invocations to encode and exfiltrate sensitive context data — without triggering the sandbox's perimeter alerts. The class of bug is structural to current AI agent sandboxes that allow outbound model API calls.
"Sandbox" is the security industry's go-to abstraction when it doesn't know how else to contain code. For AI agents, the abstraction is leaky by design — agents need outbound model API access to function. Whatever they can encode into a prompt, they can exfiltrate, because the prompt itself is the exfiltration channel.
NemoClaw is a particularly visible case because NVIDIA explicitly positions it as a contained runtime. The research shows that the boundary you call a sandbox is, from an exfiltration standpoint, indistinguishable from "the agent has internet."
How RuntimeAI Stops This
Sandbox-based containment is insufficient on its own. RuntimeAI replaces "boundary-based trust" with "behaviour-based enforcement":
- Discovery — sandbox inventory + leak-class map: Every AI sandbox in the environment (NemoClaw, vendor-specific, custom) is registered with its declared egress policy and observed outbound destinations. Anomalies (an agent suddenly calling new endpoints) are flagged.
- Behavioural enforcement — prompt-level egress policy: Flow Enforcer + PII Shield inspect outbound model API calls for sensitive payloads — tokenize PII, redact credentials, block when policy-violating. Encoded exfiltration in prompts is broken at the data layer before the prompt leaves the workload.
- Flow / egress control — model-provider allowlists per agent: Each AI agent's KYA identity is bound to specific model endpoints. An agent attempting to call a model provider outside its allowlist is rejected — even via a NemoClaw-style sandbox bypass.
- Immutable audit trail — every model call logged: Every prompt, every completion, every tool call is recorded. NemoClaw-class exfiltration shows up as the unique pattern it is, and is detectable post-hoc even if it bypassed runtime controls.
Sandboxes try to draw a line. RuntimeAI assumes the line will leak and governs the contents of what crosses it.
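Mechanically, prompt-level egress inspection is a content scan applied to the outbound model call rather than to a network packet. A deliberately small sketch, with two example patterns standing in for a real credential and PII detection library:

```python
import re

# Two illustrative detectors: an AWS access key ID and an email address.
# A production scanner would carry far more patterns plus entropy and
# context checks.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def inspect_outbound_prompt(prompt: str):
    """Return (sanitized_prompt, findings). Callers can block on findings
    or forward the redacted prompt, per policy."""
    findings = []
    sanitized = prompt
    for label, pattern in PATTERNS.items():
        if pattern.search(sanitized):
            findings.append(label)
            sanitized = pattern.sub(f"[{label.upper()} REDACTED]", sanitized)
    return sanitized, findings

prompt = "Summarize this config: key=AKIAABCDEFGHIJKLMNOP, owner=alice@example.com"
clean, hits = inspect_outbound_prompt(prompt)
print(hits)    # ['aws_access_key', 'email']
print(clean)   # credentials and PII replaced before the call leaves the workload
```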
Ransomware
7 — Foxconn Confirms Cyberattack: Nitrogen Ransomware Gang Claims Responsibility
Foxconn — the world's largest electronics manufacturer — confirmed it was the victim of a cyberattack claimed by the Nitrogen ransomware gang. Foxconn has not publicly disclosed scope, dwell time, or whether production systems were encrypted, but the confirmation alone is significant: Foxconn manufactures for Apple, Dell, HP, Sony, and most of the major consumer electronics OEMs. A Foxconn outage cascades into supply chains globally.
Manufacturing is where ransomware hurts most because the downtime cost dwarfs the ransom. A line stoppage at Foxconn is paid for in delayed iPhone shipments and missed quarterly numbers across half the consumer electronics industry. The gang knows it; the victims know it; the negotiation reflects it.
Zero Trust · Defence in Depth
Ransomware enters through the same vectors year after year — RDP, phishing, exposed services — and persists because the OT/IT boundary is porous. RuntimeAI's manufacturing-relevant controls:
- Discovery — OT/IT boundary inventory: Cloud Security + NHI Security map every service identity bridging IT and OT networks. Stale credentials, dual-homed accounts, and exposed engineering laptops are flagged before they become Nitrogen's pivot point.
- Behavioural enforcement — lateral-movement detection: Flow Enforcer baselines what every workload talks to. A Nitrogen-style "compromise IT laptop → pivot to OT engineering server → encrypt manufacturing PLCs" hop violates the baseline at every step and triggers an automated kill switch on the first deviation, as sketched below.
- Flow / egress control — OT network egress blocked: RuntimeAI's Flow Enforcer enforces hard egress policy on OT-adjacent networks — manufacturing endpoints cannot reach attacker C2 even if compromised. Ransomware that can't phone home can't accept its encryption key.
- Immutable audit trail — supply chain evidence: Audit Black Box produces tamper-proof records of every action taken during the incident. For manufacturing partners, regulators, and insurance, the difference between "we estimate the breach" and "we prove the breach" is measured in tens of millions of dollars.
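The lateral-movement control rests on one primitive: a learned baseline of which workload talks to which, with anything off-baseline treated as a first-deviation trigger. A toy sketch with hypothetical workload names:

```python
# Baseline of observed, approved workload-to-workload flows (src, dst).
# In practice this is learned from traffic over time, not hand-written.
BASELINE_FLOWS = {
    ("it-laptop-042", "code.corp.example"),
    ("mes-controller", "plc-line-7"),
}

def check_flow(src: str, dst: str, baseline=BASELINE_FLOWS) -> str:
    """Return 'allow' for a baselined flow, 'kill' for a first deviation.
    A real enforcer would quarantine the source and page the SOC."""
    return "allow" if (src, dst) in baseline else "kill"

# The ransomware pivot path violates the baseline at the first hop.
print(check_flow("it-laptop-042", "code.corp.example"))   # allow
print(check_flow("it-laptop-042", "ot-engineering-01"))   # kill
print(check_flow("ot-engineering-01", "plc-line-7"))      # kill
```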
AI Agent Governance
8 — Your AI Agents Are Already Inside the Perimeter
Analysts confirmed what identity-security teams have quietly feared: AI agents are being deployed inside enterprises faster than the organizations can govern them. Inaugural surveys of identity-security leaders found that shadow AI agents — autonomous workflows running on developer credentials, with persistent OAuth grants, calling internal APIs without governance — are now the fastest-growing category of non-human identity. The acceleration is driven by AI assistants and IDE plugins that quietly accumulate permissions and tools without explicit approval.
This is the foundational thesis of RuntimeAI — and it is no longer hypothesis. The perimeter cannot be the trust boundary for AI agents because the perimeter cannot see them. AI agents impersonate their humans, inherit their access, and make decisions at machine speed. By the time IAM or DLP sees a problem, the agent has already executed thousands of actions across dozens of systems.
Why RuntimeAI Customers Are Protected
The governance gap exists because traditional security controls were built for humans-using-tools, not autonomous-agents-using-everything. RuntimeAI was built for the second world:
- Discovery — every agent inventoried, every action visible: NHI Security + KYA continuously inventory every AI agent in the environment — by team, by tool, by access scope. Shadow AI agents are not invisible; they're the items at the top of the "no governance assigned" queue.
- Behavioural enforcement — agent identity = scoped credential: Every action a RuntimeAI-governed agent takes is bound to a KYA-issued credential with declared scope. Agents cannot inherit broad human permissions; they get the narrow scope the workload requires.
- Flow / egress control — agent-to-resource policy: Flow Enforcer governs which agent can talk to which resource, with which payload. The "agent calls 200 internal APIs at machine speed" scenario is either pre-authorized (and bounded) or rejected.
- Immutable audit trail — every agent action attestable: Audit Black Box logs every decision, prompt, completion, and side effect — by agent, by session, by human owner. When something goes wrong, the post-mortem is "we know exactly what the agent did, and on whose authority."
Identity for humans is solved. Identity for AI agents is the next decade of security. RuntimeAI ships the controls today.
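Stripped of product names, agent governance starts with two facts per agent: who registered it, and what scope and lifetime its credential was issued with. A hypothetical registration and authorization check:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentCredential:
    """Hypothetical scoped, expiring credential bound to one agent and one owner."""
    agent_id: str
    human_owner: str
    scopes: frozenset
    expires_at: datetime

def issue(agent_id: str, human_owner: str, scopes: set, ttl_hours: int = 8) -> AgentCredential:
    """Issue a narrow, short-lived credential; no inheritance of the owner's access."""
    return AgentCredential(agent_id, human_owner, frozenset(scopes),
                           datetime.now(timezone.utc) + timedelta(hours=ttl_hours))

def authorize(cred: AgentCredential, requested_scope: str) -> bool:
    """An action is allowed only inside the declared scope and before expiry."""
    return requested_scope in cred.scopes and datetime.now(timezone.utc) < cred.expires_at

cred = issue("support-triage-bot", "alice", {"tickets:read", "tickets:comment"})
assert authorize(cred, "tickets:read")
assert not authorize(cred, "customers:export")   # outside the declared scope
```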
AI Supply Chain
9 — Fake "OpenAI Privacy Filter" Repo Hits #1 on Hugging Face, 244K Downloads
A malicious repository hit Hugging Face's trending list by impersonating an OpenAI open-weight "Privacy Filter" model. The repo was downloaded 244,000 times before takedown. Embedded in the model card and loader code: scripts that exfiltrated environment variables, AWS credentials, and OpenAI API keys to the attacker's infrastructure. The attack vector is the standard one for the ML supply chain — typo-squatted name, branded README, plausible model description.
This is npm package squatting, evolved for the AI era. The trust signal — "this is on Hugging Face's trending list, with OpenAI in the name" — is exactly the heuristic developers use under deadline pressure. The targeted credentials are exactly the ones AI engineers leak fastest.
Zero Trust, Layer by Layer
ML supply chain attacks compound: poisoned weights, poisoned loaders, poisoned credentials. The defence is the same shape as language-runtime supply chain defence, with AI-specific extensions:
- Discovery — model provenance map: RuntimeAI's AI Inventory tracks every model loaded in the environment, with its source, hash, publisher signature, and the workload that pulled it. A previously unseen Hugging Face model appearing in production is flagged before inference begins.
- Behavioural enforcement — model loader sandboxing: Model loading is performed in a Flow-Enforcer-policed sandbox without access to the host's credential stores, environment variables, or cloud metadata services. A malicious loader has nothing to exfiltrate from inside the sandbox.
- Flow / egress control — model-source allowlists: Hugging Face model pulls require an explicit allowlist policy per repository — not per platform. The "trending model with OpenAI in the name" doesn't pass the allowlist check until a human reviews it.
- Immutable audit trail — credential-access alerts: Any access to credential stores from a model loader, model server, or inference workload is logged and alerted on. The attacker's exfiltration is detected within seconds even if the policy gate failed.
The model is now a software dependency. Treat it like one — with provenance, sandboxing, and policy.
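The allowlist control above is the cheapest of the four to reproduce yourself: refuse to load any model whose repository and artifact hashes aren't pinned by a reviewer. A sketch with hypothetical pins; it deliberately avoids any specific Hugging Face client call:

```python
import hashlib
from pathlib import Path

# Per-repository allowlist with pinned artifact hashes, reviewed by a human.
# Both the repository name and the hash below are placeholders.
MODEL_ALLOWLIST = {
    "org-we-trust/sentiment-small": {
        "model.safetensors": "9b2c1e0f" + "0" * 56,   # placeholder sha256
    },
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_load(repo_id: str, local_dir: Path) -> bool:
    """Allow loading only if the repo is allowlisted and every pinned file matches."""
    pins = MODEL_ALLOWLIST.get(repo_id)
    if pins is None:
        return False   # "trending with OpenAI in the name" fails right here
    return all(
        (local_dir / filename).exists() and sha256_of(local_dir / filename) == expected
        for filename, expected in pins.items()
    )

print(may_load("openai-privacy-filter/official", Path("/models/suspicious")))  # False
```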
Financial Services
10 — Banks Face Growing AI Risk at the Database Layer
Researchers warn that banks are overlooking AI risks at the database layer specifically. As AI assistants and agentic workflows are wired into customer service, fraud detection, and compliance reporting, they are issued database access — often with permissions that exceed any individual human's. The risk model: a compromised AI agent doesn't need to escalate privilege; it already has it.
Banks are unusual in that their controls are mature against human threats but immature against autonomous-agent threats. Decades of investment have made privilege management for humans rigorous. A few years of AI investment have given many agents the same database connections as an entire team of analysts. The blast radius of one compromised AI agent in a major bank is the blast radius of fifty senior analysts.
How RuntimeAI Stops This
The database layer is where regulated FinServ data lives and where AI agents inevitably connect. Four control layers, applied at exactly that boundary:
- Discovery — AI-to-database identity map: NHI Security catalogs every AI agent with database access, the scope of that access, and which workloads call it. A compromised agent's potential blast radius is visible before compromise.
- Behavioural enforcement — scoped DB credentials per agent: KYA issues per-agent scoped database credentials that exceed neither the agent's declared purpose nor a reasonable per-session quota. A "show me one customer's history" agent can't pull a million records under any compromise scenario.
- Flow / egress control — row-level + rate-limited queries: Flow Enforcer + PII Shield enforce row-level filters and query-rate limits at the database proxy layer. Aggregation queries beyond declared scope fail the policy gate; bulk exports fail the rate gate.
- Immutable audit trail — every query attributed: Audit Black Box records every SQL statement, parameter, and result-set hash, attributed to the requesting agent and its human authorizer. For regulators asking "who queried what about whom," the answer is cryptographically provable.
The database layer is the most consequential trust boundary for AI in regulated industries. RuntimeAI enforces at exactly that boundary.
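At the database boundary, those four layers collapse into a proxy check: attribute the query to an agent, verify the table is in that agent's scope, and refuse unbounded reads. A simplified, illustrative gate; a production proxy would parse SQL properly rather than string-match:

```python
import re

# Hypothetical per-agent scopes: which tables may be queried, and whether
# unbounded (no LIMIT) reads are ever acceptable for that agent.
AGENT_SCOPES = {
    "fraud-review-agent": {"tables": {"transactions"}, "allow_unbounded": False},
}

AUDIT_LOG = []   # stand-in for the tamper-evident audit trail

def gate_query(agent_id: str, sql: str) -> bool:
    """Allow the query only if the agent is known, the table is in scope,
    and the read is bounded. Every decision is recorded for attribution."""
    scope = AGENT_SCOPES.get(agent_id)
    table_match = re.search(r"\bfrom\s+(\w+)", sql, re.IGNORECASE)
    allowed = (
        scope is not None
        and table_match is not None
        and table_match.group(1) in scope["tables"]
        and (scope["allow_unbounded"] or "limit" in sql.lower())
    )
    AUDIT_LOG.append({"agent": agent_id, "sql": sql, "allowed": allowed})
    return allowed

assert gate_query("fraud-review-agent", "SELECT * FROM transactions LIMIT 100")
assert not gate_query("fraud-review-agent", "SELECT * FROM customers LIMIT 10")
assert not gate_query("fraud-review-agent", "SELECT * FROM transactions")  # unbounded read
```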
Updates from Last Week
Instructure Canvas / ShinyHunters: Instructure reached a ransom agreement with ShinyHunters to stop the 3.65TB Canvas data leak. The US government has now sought formal testimony from Instructure on the scope of the disruption and breach. Subsequent reporting indicates ShinyHunters has claimed a second attack against Instructure. Our deep-dive on the original Canvas breach covers the structural lessons; this week's developments don't change them — but they do confirm the gang's strategy is to attack the same SaaS provider repeatedly, on the assumption that paying once signals willingness to pay again.
The throughline is forgotten access. A vendor API key Zara never revoked. An MCP server a developer installed once. A trending model on Hugging Face. A database credential a bank issued years ago. Each one was authorized — and never re-authorized.
RuntimeAI's approach is continuous, behaviour-based authorization. Every agent, every credential, every model, every database connection is inventoried, scoped, monitored, and revocable in real-time. The access you forgot is the access attackers find.
Get the Weekly Digest
Ten cybersecurity incidents per week, each with the RuntimeAI Take. No fluff, no vendor pitches in the analysis itself — just what happened, why, and what to enforce against next.