CVE-2026-28353 is a CVSS 10.0 supply chain compromise of the Trivy VS Code Extension on OpenVSX. A stolen GitHub PAT, extracted through a misconfigured pull_request_target workflow, was used to publish tampered extension versions that weaponized local AI coding agents for credential theft and data exfiltration via prompt injection.
CVE-2026-28353 | CVSS v4.0: 10.0 (Critical) | CWE-506: Embedded Malicious Code
Affected: Trivy VS Code Extension v1.8.12 and v1.8.13 (OpenVSX only) | GHSA: GHSA-8mr6-gf9x-j8qg
What Happened
On February 27, 2026, an attacker published a weaponized version of the Trivy VS Code Extension — Aqua Security’s popular vulnerability scanner — to the Open VSX Registry. The malicious release, version 1.8.12, was followed a day later by version 1.8.13 with a refined payload. Neither version appeared in the official GitHub repository or the Microsoft VS Code Marketplace. The attack was confined entirely to OpenVSX.
What makes this CVE unlike anything I have seen before is not the supply chain compromise itself — stolen tokens and tampered packages are a well-documented pattern. The novel element is the payload. Instead of a reverse shell, a cryptominer, or a traditional infostealer, the attacker embedded natural-language prompts designed to hijack locally installed AI coding agents. The malicious code instructed these agents to act as "forensic analysis agents," scanning the developer’s environment for credentials, tokens, and proprietary code, then exfiltrating everything through whatever channels the AI agent could access.
This is the first documented CVE where the primary attack mechanism is prompt injection against developer AI tools. The attacker treated AI coding assistants not as targets but as weapons.
Technical Root Cause
The compromise traces back to a misconfigured GitHub Actions workflow in the upstream Trivy repository. The workflow file — apidiff.yaml — used the pull_request_target trigger, which is one of the most dangerous patterns in GitHub Actions security.
The pull_request_target event runs in the context of the base repository, not the fork. This means it executes with the base repo’s secrets and write permissions, but it can be configured to check out code from the pull request — code that the attacker fully controls. It is the CI/CD equivalent of running untrusted input with elevated privileges.
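This dangerous combination is mechanically detectable. The sketch below is a heuristic, not a full YAML parser: it flags any workflow file that both mentions the pull_request_target trigger and checks out the pull request head. The regexes and function names are illustrative assumptions, not taken from the actual Trivy workflow.

```python
import re
from pathlib import Path

# Heuristic: a workflow is suspicious when it is triggered by
# pull_request_target AND checks out the attacker-controlled PR head.
# (Regex-based sketch; a real audit should parse the YAML properly.)
PR_TARGET = re.compile(r"\bpull_request_target\b")
HEAD_CHECKOUT = re.compile(
    r"ref:\s*\$\{\{\s*github\.event\.pull_request\.head\.(?:sha|ref)\s*\}\}"
)

def looks_vulnerable(workflow_text: str) -> bool:
    """True if both dangerous patterns appear in the workflow file."""
    return bool(PR_TARGET.search(workflow_text)) and \
           bool(HEAD_CHECKOUT.search(workflow_text))

def audit_workflows(repo_root: str) -> list[str]:
    """Return paths of workflow files matching the dangerous pattern."""
    workflows = Path(repo_root, ".github", "workflows")
    return [
        str(p) for p in workflows.glob("*.y*ml")
        if p.is_file() and looks_vulnerable(p.read_text(errors="ignore"))
    ]
```

Tools like zizmor or OpenSSF Scorecard perform a more thorough version of this check, but even a crude grep of this form would have surfaced the apidiff.yaml pattern months before exploitation.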
On February 27, 2026 at 00:18 UTC, an attacker created pull request #10252 against the Trivy repository and immediately closed it. The vulnerable workflow still triggered. Because the workflow had access to repository secrets — including a Personal Access Token with publishing permissions — the attacker extracted the PAT. By 12:01 UTC the same day, unauthorized API activity using the compromised token had begun. The vulnerable workflow had been present in the repository since at least October 2025 — over four months of exposure before it was exploited.
With the stolen PAT, the attacker authenticated to the OpenVSX Registry and published two tampered VSIX packages under Aqua Security’s legitimate namespace. The publisher account on OpenVSX was owned by a former Aqua Security employee — a common organizational blind spot where publishing credentials outlive the employment relationship. Security researchers flagged the anomaly shortly after publication, and coordinated with Aqua Security to remove the malicious versions and revoke the compromised publishing token within 36 hours.
Critically, the malicious code was absent from the public GitHub repository. Builds from source for all versions up to 1.8.11 matched clean artifacts. Only the published OpenVSX packages for 1.8.12 and 1.8.13 contained the injected payload — a discrepancy that was the first detection signal.
The root cause chain:
Misconfigured GitHub Actions workflow (pull_request_target)
↓
Attacker-controlled PR triggers workflow with base repo secrets
↓
PAT exfiltrated to attacker infrastructure
↓
Attacker publishes tampered extension to OpenVSX under legitimate namespace
↓
Developers install/update the extension → malicious code executes

The GitHub repository source code was never modified. The official VS Code Marketplace distribution was never affected. The malicious code existed only in the published OpenVSX artifacts — making it invisible to anyone reviewing the source repository.
Exploitation Scenario
The attack required zero user interaction beyond having the extension installed. When a developer opened any workspace in VS Code with the compromised Trivy extension active, the injected code in the extension’s workspace activation hook executed immediately — before any legitimate Trivy functionality loaded.
The payload spawned detached background processes — hidden from the developer’s view, with output suppressed — while the extension continued to function normally. It still scanned for vulnerabilities, still showed results, still looked legitimate.
These background processes targeted multiple popular AI coding agent CLIs. The key detail: each agent was invoked using its most permissive execution mode — the flags that disable human approval, skip permission checks, and grant unrestricted filesystem and network access. Every major AI coding CLI today ships with some variant of a "trust everything, ask nothing" mode intended for automation. The attacker used exactly those modes, turning each agent into an obedient executor of attacker-supplied instructions with no confirmation prompts, no sandboxing, and no guardrails.
The attack worked against whichever AI agents happened to be installed on the developer’s machine. If even one was present and configured, the payload executed.
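The detached-spawn technique itself is ordinary subprocess plumbing, which is part of why it evaded notice. A benign, POSIX-only sketch of the same pattern, useful to defenders writing detections; the function name is mine, not from the actual payload.

```python
import subprocess
import sys

def spawn_detached(cmd: list[str]) -> subprocess.Popen:
    """Spawn a child process with all output suppressed and its own
    session: the general pattern the payload used to keep the AI agent
    invocations out of the developer's view."""
    return subprocess.Popen(
        cmd,
        stdin=subprocess.DEVNULL,
        stdout=subprocess.DEVNULL,   # nothing surfaces in the UI
        stderr=subprocess.DEVNULL,
        start_new_session=True,      # setsid(): detach from the terminal
    )
```

A process spawned this way produces no visible output and is not tied to any interactive terminal — exactly the ancestry anomaly the IOC section below recommends alerting on.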
Version 1.8.12 Payload
The first payload was approximately 2,000 words of natural language framing the AI agent as an "advanced forensic analysis agent." It instructed the AI to collect:
- Credentials and authentication mechanisms (API keys, tokens, SSH keys)
- Environment variables and configuration files
- Source code and intellectual property
- Cloud provider credentials (AWS, GCP, Azure)
- Git configuration and repository access tokens
- System reconnaissance — running processes, network interfaces, active sessions
Exfiltration was instructed through "all reporting channels accessible on the target system" — meaning the AI would attempt to use email, Slack, ticketing systems, or any other communication tool it could invoke.
Version 1.8.13 Payload
Released the next day, this version used a more targeted approach. Instead of a sprawling forensic prompt, it instructed the AI to:
- Enumerate all installed tools, MCP servers, development programs, and active authentication sessions
- Assess filesystem contents for sensitive material
- Write findings to a file called REPORT.MD
- Check for authenticated GitHub CLI access
- Create a new repository named posture-report-trivy and push the report
This is a more surgical payload. Rather than relying on the AI to find exfiltration channels, it used the developer’s own GitHub account as the dead drop — creating a repo under the victim’s identity to store the stolen data.
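The predictable dead-drop name makes triage straightforward: check your account for that repository. A sketch that filters a list of repository names for the known name; using the gh CLI to obtain that list is one convenient option, assuming you are already authenticated with it.

```python
import json
import subprocess

DEAD_DROP = "posture-report-trivy"  # repo name used by the v1.8.13 payload

def suspicious_repos(repo_names: list[str]) -> list[str]:
    """Filter a list of repository names for the known dead-drop name."""
    return [name for name in repo_names if name == DEAD_DROP]

def list_my_repos() -> list[str]:
    """List your repository names via the gh CLI (requires gh auth login)."""
    out = subprocess.run(
        ["gh", "repo", "list", "--json", "name", "--limit", "1000"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [repo["name"] for repo in json.loads(out)]
```

An absence of the repo is not proof of safety — the v1.8.12 payload used other exfiltration channels entirely — but its presence is a definitive compromise indicator.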
Enterprise Impact
The attack window was narrow — approximately 36 hours between the first malicious publication (February 27, 12:04 UTC) and the takedown (February 28, 22:46 UTC). No confirmed successful exfiltration has been publicly reported, and no posture-report-trivy repositories were discovered on GitHub. However, the broader implications are severe:
| Impact Area | Risk |
|---|---|
| Developer credential exposure | API keys, cloud credentials, SSH keys, Git tokens accessible to AI agents running with full permissions |
| Source code exfiltration | AI agents with filesystem access can read and transmit entire codebases |
| Lateral movement via AI tools | AI agents with tool access (MCP servers, shell, Git) can perform authenticated actions on behalf of the developer |
| Supply chain amplification | Compromised developer credentials lead to compromised repositories, CI/CD pipelines, and production deployments |
| Detection difficulty | AI agent activity blends with legitimate developer usage — no malware signatures, no C2 callbacks, no traditional IOCs |
Traditional endpoint detection looks for known malware behaviors: process injection, suspicious network connections, registry modifications. An AI coding agent reading files, running Git commands, and making API calls looks identical to legitimate developer activity. The malicious behavior is encoded in natural language, not in executable code — making it invisible to static analysis, behavioral detection, and sandboxing.
The hackerbot-claw Campaign
The Trivy extension compromise was not an isolated incident. An autonomous AI agent operating under the handle "hackerbot-claw" targeted multiple repositories across several organizations during the same period, all exploiting pull_request_target misconfigurations. According to Aqua Security’s post-incident analysis, the attacker behind PR #10252 and hackerbot-claw are likely the same entity based on matching access logs, user agents, and behavioral patterns. The damage went far beyond the VS Code extension — on March 1, 2026, the attacker used the compromised PAT to delete GitHub releases spanning versions 0.27.0 through 0.69.1, renamed the repository, and pushed an empty replacement. The Trivy project lost over 32,000 stars in the process.
The campaign did not stop at the VS Code extension. In mid-March 2026, malicious Trivy CLI releases (notably v0.69.4) appeared through the same compromised infrastructure. The attacker also created a fake GitHub security advisory on the Trivy repository — later revised by Aqua Security — as part of a disinformation effort to confuse the incident response.
Detection and Mitigation
Immediate Actions for Affected Developers
If you had Trivy VS Code Extension v1.8.12 or v1.8.13 installed from OpenVSX:
- Uninstall the extension immediately
- Rotate all credentials accessible from your development environment — GitHub PATs, SSH keys, cloud provider credentials (AWS, Azure, GCP), environment variables, and API keys stored in dotfiles
- Review GitHub account activity for late February and early March 2026 — look for unexpected repository creation, pushes, or token generation
- Check AI agent logs for unexpected execution or unusually long prompts
- Search your filesystem for REPORT.MD files that you did not create
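The last check can be scripted. A minimal sketch that walks a directory tree for the drop file, matching the name case-insensitively since filesystem case handling varies; hits may include legitimately named files and need manual review.

```python
from pathlib import Path

def find_report_files(root: str) -> list[str]:
    """Recursively find files named REPORT.MD (any letter case)."""
    return sorted(
        str(p)
        for p in Path(root).rglob("*")
        if p.is_file() and p.name.upper() == "REPORT.MD"
    )
```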
IOCs and Detection Signals
- AI coding agent processes spawned as detached children of the VS Code extension host — not from an interactive terminal
- AI agent invocations using maximum-permission or no-approval flags that bypass human confirmation
- Unexpected GitHub CLI activity: repository creation, token queries, or pushes you did not initiate
- Unusually long prompts passed to AI agents (the payload was approximately 2,000 words)
- Files named REPORT.MD appearing in your workspace that you did not create
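The first signal — an AI CLI whose process ancestry leads back to an extension host rather than a shell — can be expressed as a walk over a process-table snapshot. The agent and VS Code process names below are illustrative assumptions; adjust them for the tools actually deployed in your fleet.

```python
# Hypothetical example process names; tune these for your environment.
AGENT_NAMES = {"claude", "aider", "gemini"}
VSCODE_NAMES = {"code", "extensionHost"}
SHELL_NAMES = {"sh", "bash", "zsh", "fish"}

def flag_agents(procs: list[tuple[int, int, str]]) -> list[int]:
    """Given (pid, ppid, name) tuples, return PIDs of AI agent
    processes whose ancestry contains a VS Code process but no
    interactive shell."""
    by_pid = {pid: (ppid, name) for pid, ppid, name in procs}
    flagged = []
    for pid, ppid, name in procs:
        if name not in AGENT_NAMES:
            continue
        cur, saw_vscode, saw_shell = ppid, False, False
        while cur in by_pid:                 # walk toward the root
            parent_ppid, parent_name = by_pid[cur]
            saw_vscode |= parent_name in VSCODE_NAMES
            saw_shell |= parent_name in SHELL_NAMES
            cur = parent_ppid
        if saw_vscode and not saw_shell:
            flagged.append(pid)
    return flagged
```

In production this logic would sit behind a live process enumerator (psutil, eBPF, or an EDR event stream) rather than a static snapshot, but the ancestry test is the same.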
Hardening GitHub Actions Workflows
- Ensure workflows triggered by pull_request_target never check out or execute code from the pull request head
- Use a two-stage workflow pattern: validate in a restricted environment first, then run trusted actions separately
- Pin all GitHub Actions to specific commit SHAs, not tags (tags can be moved)
- Use OpenSSF Scorecard or similar tools to audit workflow configurations
- Apply the principle of least privilege to all workflow secrets
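SHA pinning is also easy to audit mechanically. A rough sketch that flags uses: lines whose ref is not a full 40-character commit SHA; it deliberately skips local actions (paths with no @ ref) and is a heuristic, not a parser.

```python
import re

# Matches "uses: owner/action@ref". A ref that is not a 40-hex SHA
# (e.g. @v4 or @main) is mutable and can be repointed by an attacker.
USES_LINE = re.compile(r"^\s*-?\s*uses:\s*(\S+)@(\S+)", re.MULTILINE)
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list[str]:
    """Return 'action@ref' strings whose ref is not a full commit SHA."""
    return [
        f"{action}@{ref}"
        for action, ref in USES_LINE.findall(workflow_text)
        if not FULL_SHA.match(ref)
    ]
```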
Defending AI Coding Agents
- Permissive execution flags exist on every major AI coding CLI. They are documented, encouraged for automation workflows, and trivially abused. AI tool vendors need to reconsider whether "skip all safety checks" modes should exist at all, or whether they should require additional authentication before they can be enabled
- AI agents have no concept of prompt provenance. They cannot distinguish between a developer-typed prompt and one injected by a malicious extension. Until AI tools can verify the source and intent of prompts, they remain vulnerable to indirect prompt injection
- Process-level monitoring of AI agent invocations should be standard practice. Alert on AI CLI processes spawned by non-interactive parent processes
Vendor Remediation
Following the incident, Aqua Security rotated all compromised credentials, locked down GitHub Actions workflow configurations across their repositories, and made release artifacts immutable to prevent future tampering. As their maintainers noted in the public post-incident discussion, container image and package manager users were not affected — but developers who downloaded Trivy binaries directly from GitHub releases experienced degraded functionality until v0.69.2 was republished.
How SecureNexus SOVA Helps
Incidents like CVE-2026-28353 expose a fundamental visibility gap — most organizations have no unified view of what AI agents are running in their development environments, what access those agents have, or what software components are in play when a supply chain compromise occurs. SecureNexus SOVA helps eliminate these blind spots.
SOVA maintains a real-time AI Asset Register (AI-BOM) that tracks every AI agent deployed across the organization, including their access scopes, authentication tokens, and integration points. In a scenario like this one, SOVA would immediately surface which developers had AI coding agents installed with permissive execution flags — the exact population at risk from the weaponized Trivy extension.
On the software supply chain side, SOVA’s SBOM-based visibility continuously monitors installed extensions, packages, and dependencies across the development fleet. When a compromised component like the Trivy v1.8.12 extension is identified, SOVA maps every affected system and developer in minutes — not hours of manual triage.
SOVA’s correlation engine ties together the exposure chain: which systems had the malicious extension, which AI agents were present, what credentials were in scope, and what secrets need immediate rotation. This transforms incident response from reactive firefighting into structured, prioritized remediation.
Combined with continuous monitoring through the CTEM workflow, SOVA detects abnormal patterns — such as AI agents being spawned by extension activation hooks or unexpected repository creation — before exfiltration succeeds.
Learn more at securenexus.ai/products/sova
The Bigger Picture
This incident is a reminder that supply chain risk does not start at deployment — it starts at the developer’s keyboard. The attack surface now stretches from IDE extensions and AI coding agents on a developer’s laptop, through CI/CD pipelines and package registries, all the way to cloud infrastructure and production workloads. A single compromised token in a GitHub Actions workflow led to a weaponized VS Code extension that could have silently exfiltrated credentials across an entire engineering organization.
The tools developers trust the most — their editor, their AI assistant, their vulnerability scanner — are exactly the tools attackers are learning to weaponize. Supply chain security is no longer just about locking down your dependencies and container images. It is about maintaining visibility across every layer where code is written, built, tested, and shipped. If you cannot see what is running in your development environments today, you will not see the next compromise until it is too late.
About the Author
Yash Kumar is a Lead in Research & Innovation, focused on exploring emerging technologies and turning ideas into practical solutions. He works on driving experimentation, strategic insights, and new initiatives that help organizations stay ahead of industry trends.
