LiteLLM versions 1.82.7 and 1.82.8 on PyPI were compromised by the TeamPCP threat actor group. The malicious packages stole credentials, deployed persistence via systemd, and performed lateral movement in Kubernetes clusters. The attack propagated from the earlier Trivy supply chain compromise through LiteLLM’s own CI/CD pipeline.
LiteLLM v1.82.7 + v1.82.8 | PyPI Supply Chain Compromise | Credential Stealer + K8s Lateral Movement
Attack Window: March 24, 2026 | Status: Malicious versions removed
What Happened
On March 24, 2026, two malicious versions of LiteLLM — one of the most widely used open-source LLM proxy libraries — appeared on PyPI. Versions 1.82.7 and 1.82.8 were not built from the official GitHub repository. No corresponding tags or releases existed. They were uploaded directly to PyPI by an attacker who had compromised a maintainer’s publishing credentials.
LiteLLM is not a niche package. It is the standard interface layer that thousands of organizations use to route API calls across 100+ LLM providers — OpenAI, Anthropic, Azure, Bedrock, Groq, and more — through a single unified API. It sits in the middle of every request, which means it has access to every API key, every prompt, and every response flowing through the system. Compromising LiteLLM is not like compromising a utility library. It is compromising the control plane of an organization’s entire AI infrastructure.
The malicious versions were live for a short window before being detected and pulled. Given LiteLLM’s download volume, even that short window represents significant exposure — particularly for CI/CD pipelines and Docker builds that pull unpinned dependencies on every run.
The Connection to the Trivy Campaign
This was not a standalone attack. The compromise is widely attributed to the same threat actor group — TeamPCP — behind the Trivy VS Code extension and CLI attacks in late February and early March 2026 (CVE-2026-28353).
The attack chain connects through Trivy itself. LiteLLM’s CI/CD pipeline used Trivy for vulnerability scanning. The compromised Trivy infrastructure gave the attackers a foothold into LiteLLM’s build and publishing pipeline, ultimately leading to the theft of PyPI maintainer credentials. The attacker then bypassed the normal GitHub-based release workflow entirely and published directly to PyPI.
This is a textbook example of supply chain propagation — a compromised security tool becoming the vector for compromising the projects that depend on it. The irony is hard to miss: the vulnerability scanner became the vulnerability.
Two Versions, Two Attack Strategies
Version 1.82.7 — Import-Triggered Payload
The first malicious version embedded its payload directly in litellm/proxy/proxy_server.py. This code would execute when any application imported the proxy module — which is exactly what happens when you run the LiteLLM proxy server, its primary use case. This was effective but limited. Developers who installed LiteLLM as a library without starting the proxy would not trigger the payload.
Version 1.82.8 — The .pth File Escalation
The second version added a far more dangerous trigger: a file called litellm_init.pth placed directly in site-packages/.
This exploits a feature of Python’s site module that most developers are not aware of. When the Python interpreter starts — any Python script, any context, any virtual environment — it automatically scans site-packages/ for .pth files. Any line in a .pth file that begins with an import statement is executed as code, and arbitrary code can follow the import on the same line. No import litellm is needed. No user action is required. Simply having the package installed means the payload runs every time Python starts.
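The mechanism can be demonstrated harmlessly. The sketch below writes a benign .pth file into a temporary directory and processes it with `site.addsitedir()`, which applies the same logic the interpreter applies to site-packages/ at startup (file and variable names here are illustrative, not from the malware):

```python
import os
import site
import tempfile

# Any line in a .pth file that starts with "import" is exec()'d by
# Python's site module when the directory is processed.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_init.pth"), "w") as f:
    # A single line: the import is real, and arbitrary code follows it.
    # Here it just sets a flag instead of spawning a subprocess.
    f.write('import os; os.environ["PTH_RAN"] = "1"\n')

# addsitedir() processes .pth files the same way interpreter startup
# does for site-packages/ -- no package is ever imported by the user.
site.addsitedir(d)
print(os.environ.get("PTH_RAN"))  # "1": the code ran with no explicit import
```

This is why v1.82.8 was so much more dangerous than v1.82.7: the trigger moved from "runs when you import the proxy" to "runs whenever Python runs".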
The .pth file contained a one-liner that spawned a subprocess to decode and execute a double-base64-encoded payload. The file was 34,628 bytes and was recorded in the package’s own RECORD manifest — brazenly listed alongside legitimate package files.
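The decode chain itself is trivial. A benign sketch of the double-base64 pattern (the payload string here is illustrative, not the actual malware):

```python
import base64

# Build a double-base64-encoded "payload" the way the attacker would.
inner = base64.b64encode(b'print("payload runs")')
outer = base64.b64encode(inner)

# The .pth one-liner's effect is equivalent to decoding twice and
# executing the result in a subprocess; here we just decode.
code = base64.b64decode(base64.b64decode(outer))
print(code)  # b'print("payload runs")'
```

Double encoding adds no real obfuscation strength; it mainly defeats naive single-pass base64 scanners.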
| Version | Trigger Mechanism | Activation Requirement |
|---|---|---|
| 1.82.7 | Payload in litellm/proxy/proxy_server.py | Requires import litellm.proxy |
| 1.82.8 | litellm_init.pth in site-packages/ + payload in proxy_server.py | Any Python interpreter startup — no import needed |
What the Payload Does
Stage 1: Credential Harvesting
The script systematically collects sensitive data from the host system:
- SSH keys — private keys, authorized_keys, known_hosts, config
- Cloud credentials — AWS credentials and config, GCP application default credentials, Azure credential files, IMDS tokens
- Kubernetes secrets — kubeconfig, admin/kubelet/controller-manager/scheduler configs, service account tokens
- Git credentials — .gitconfig, .git-credentials
- Container configs — Docker config.json from multiple paths including Kaniko
- Package manager tokens — .npmrc, .vault-token, .netrc
- Database credentials — PostgreSQL, MySQL, MongoDB, Redis configs and history
- CI/CD secrets — terraform.tfvars, Jenkinsfile, GitLab CI, Travis CI, Drone CI, Ansible configs
- Shell history — bash, zsh, MySQL, PostgreSQL, Redis command history
- Crypto wallets — Bitcoin, Ethereum, Litecoin, Dogecoin, Zcash, Dash, Ripple, Monero, Cardano, Solana
- SSL/TLS private keys — /etc/ssl/private/, Let’s Encrypt certificates
- Environment variables — the full output of printenv, capturing every API key, secret, and token in the process environment
For a library that runs inside LLM infrastructure, the environment variables alone are devastating. LiteLLM users typically have API keys for OpenAI, Anthropic, Azure, AWS Bedrock, and other providers loaded as environment variables. A single compromised host could yield keys worth thousands of dollars in API credits — or, worse, access to enterprise AI deployments processing sensitive data.
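The scale of that exposure is easy to see. This minimal sketch shows the kind of environment sweep a harvester performs; the variable names are assumptions about a typical LiteLLM proxy host, not observed payload code:

```python
import os
import re

# Name patterns a credential harvester would target in the environment.
SENSITIVE = re.compile(r"(API_KEY|SECRET|TOKEN|PASSWORD|CREDENTIALS)", re.I)

def sensitive_env(env=None):
    """Return the names of environment variables matching secret-like patterns."""
    env = os.environ if env is None else env
    return sorted(k for k in env if SENSITIVE.search(k))

# A typical LiteLLM proxy environment (illustrative values).
demo = {
    "OPENAI_API_KEY": "sk-...",
    "ANTHROPIC_API_KEY": "sk-ant-...",
    "AWS_SECRET_ACCESS_KEY": "...",
    "HOME": "/root",
    "PATH": "/usr/bin",
}
print(sensitive_env(demo))
# ['ANTHROPIC_API_KEY', 'AWS_SECRET_ACCESS_KEY', 'OPENAI_API_KEY']
```

The payload did not need to be this selective: capturing the full output of printenv gets everything, matching or not.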
Stage 2: Encryption and Exfiltration
The harvested data is encrypted using a two-layer scheme: a random 32-byte AES-256-CBC session key encrypts the data, and the session key itself is encrypted with a hardcoded 4096-bit RSA public key. Both are packaged into a tar archive and exfiltrated via HTTPS POST to models.litellm.cloud.
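The structure of that hybrid scheme can be sketched with the standard library alone. The real payload used AES-256-CBC for the data and RSA-4096 to wrap the session key; neither primitive is in the stdlib, so a hash-derived keystream stands in for AES and the RSA wrap is a labeled placeholder:

```python
import hashlib
import io
import os
import tarfile

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    Stand-in for AES-256-CBC; symmetric, so it also decrypts."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

session_key = os.urandom(32)            # random 32-byte session key
stolen = b"OPENAI_API_KEY=sk-...\n"     # harvested data (illustrative)
ciphertext = keystream_xor(session_key, stolen)

# In the real attack the session key is encrypted with a hardcoded
# 4096-bit RSA public key; represented here by a placeholder blob.
wrapped_key = b"<RSA-4096-wrapped session key>"

# Both blobs are packed into a tar archive (cf. tpcp.tar.gz) for exfil.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, blob in [("key.bin", wrapped_key), ("data.bin", ciphertext)]:
        info = tarfile.TarInfo(name)
        info.size = len(blob)
        tar.addfile(info, io.BytesIO(blob))

# Only the holder of the RSA private key can unwrap the session key;
# with it, decryption is symmetric.
assert keystream_xor(session_key, ciphertext) == stolen
```

The design means defenders who capture the archive cannot recover the stolen data without the attacker's private key.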
The exfiltration domain is the tell. litellm.cloud — not litellm.ai, the project’s official domain — was registered on March 23, 2026 via Spaceship, Inc., just hours before the malicious packages appeared. The attacker even styled the subdomain as models.litellm.cloud to look like a legitimate API endpoint.
Stage 3: Persistence and Lateral Movement
The payload does not stop at data theft. It deploys a systemd service (sysmon.service running sysmon.py) that polls a command-and-control server for follow-on payloads. This gives the attacker persistent access to compromised hosts even after the malicious LiteLLM package is removed.
In Kubernetes environments — where LiteLLM is commonly deployed — the payload goes further. It uses stolen service account tokens to enumerate cluster nodes, deploy privileged pods, chroot into host filesystems, and install persistence across the entire cluster. A single compromised LiteLLM pod can escalate to full cluster compromise.
The payload also includes a kill switch: if the C2 server responds with a page containing "youtube.com", the malware terminates. This is likely an anti-analysis measure — if the attacker detects researcher traffic or sandboxed environments, the payload self-disables.
Who Was Affected — and Who Was Not
Potentially affected:
- Anyone who ran pip install litellm without a pinned version on March 24, 2026
- CI/CD pipelines that pull unpinned LiteLLM on every build
- Docker images built during the window with unpinned requirements
- Any system where a transitive dependency resolved to 1.82.7 or 1.82.8
Not affected:
- Users of the official LiteLLM Proxy Docker image (ghcr.io/berriai/litellm), because its requirements.txt pins every dependency to an exact version
- LiteLLM Cloud managed service users
- Anyone on version 1.82.6 or earlier who did not upgrade
- Installations from the official GitHub source
That Docker image users were unaffected is the crucial detail. Pinned dependencies in a requirements file, the most basic supply chain hygiene, made the difference between compromised and clean.
Detection and Response
If you may have been exposed, check immediately:
```shell
# Check installed version
pip show litellm

# Search for the malicious .pth file
find / -name "litellm_init.pth" 2>/dev/null

# Check for persistence
systemctl status sysmon.service 2>/dev/null
ls -la /etc/systemd/system/sysmon.service 2>/dev/null
```

If You Were Exposed
- Rotate ALL credentials immediately — every API key, SSH key, cloud credential, database password, and token that was present on the affected system
- Revoke and regenerate Kubernetes service account tokens
- Audit clusters for unauthorized pods, services, or deployments
- Check for the sysmon.service systemd unit and remove it
- Rebuild affected environments from clean images — do not trust a cleaned host
- Pin LiteLLM to 1.82.6 or the latest verified safe release
- Review CI/CD build logs for any builds that ran during the March 24 window
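Beyond the one-off shell checks, a sweep of every site-packages directory for auto-executing .pth files can be scripted. This sketch flags any .pth file containing executable import lines; note that some legitimate packages ship such files, so a hit warrants review, not automatic removal:

```python
import os
import site

def find_pth_files():
    """List .pth files across all site-packages directories, flagging
    any that contain executable 'import' lines (the auto-run vector)."""
    dirs = site.getsitepackages() + [site.getusersitepackages()]
    hits = []
    for d in dirs:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            if not name.endswith(".pth"):
                continue
            path = os.path.join(d, name)
            with open(path, errors="replace") as f:
                executable = any(
                    line.startswith(("import ", "import\t")) for line in f
                )
            hits.append((path, executable))
    return hits

for path, executable in find_pth_files():
    print("EXECUTES CODE" if executable else "paths only   ", path)
```

Running this as part of a security baseline makes a newly appearing executable .pth file, like litellm_init.pth, stand out immediately.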
Indicators of Compromise (IOCs)
| Indicator | Type | Description |
|---|---|---|
| litellm==1.82.7 | PyPI Package | Malicious version — payload in proxy_server.py |
| litellm==1.82.8 | PyPI Package | Malicious version — .pth file + proxy_server.py payload |
| litellm_init.pth (34,628 bytes) | File | Auto-executing .pth file in site-packages/ |
| SHA256: ceNa7wMJnNHy1kRnNCcwJaFjWX3pORLfMh7xGL8TUjg | Hash | litellm_init.pth file hash (from package RECORD) |
| models.litellm.cloud | Domain | Exfiltration endpoint (registered 2026-03-23) |
| litellm.cloud | Domain | Attacker-controlled domain (NOT official litellm.ai) |
| sysmon.service / sysmon.py | Systemd Unit | Persistence — C2 polling service |
| /etc/systemd/system/sysmon.service | File Path | Persistence artifact on compromised hosts |
| tpcp.tar.gz | File | Encrypted exfiltration archive (AES-256 + RSA-4096) |
| teampcp | Threat Actor | PyPI account used to publish malicious versions |
| krrishdholakia | PyPI Account | Compromised maintainer account |
Going Forward
- Pin exact dependency versions in all production requirements files
- Verify PyPI packages against GitHub releases before upgrading
- Use pip-audit or similar tools to scan for known malicious packages
- Monitor for .pth files in site-packages as part of your security baseline
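The first item on that list is also scriptable as a CI gate. This minimal sketch flags requirements lines that are not pinned with `==`, the exact gap that separated compromised installs from the safe Docker image (the regex is a simplification of full PEP 508 syntax):

```python
import re

# A line counts as pinned only if it uses an exact "==" version pin.
PIN = re.compile(r"^[A-Za-z0-9._-]+\s*==\s*[\w.]+")

def unpinned(requirements_text: str):
    """Return requirement lines that lack an exact version pin."""
    bad = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()   # strip comments and whitespace
        if not line or line.startswith("-"):  # skip blanks and pip options
            continue
        if not PIN.match(line):
            bad.append(line)
    return bad

reqs = "litellm==1.82.6\nrequests\nfastapi>=0.100\n"
print(unpinned(reqs))  # ['requests', 'fastapi>=0.100']
```

Failing the build when this returns a non-empty list turns "pin your dependencies" from a guideline into an enforced invariant.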
The Official Response
BerriAI moved quickly once the compromise was identified. Malicious versions 1.82.7 and 1.82.8 were removed from PyPI, all maintainer publishing credentials were rotated, and new publishing accounts were created. New releases were paused pending a full supply chain audit, and Google’s Mandiant team was engaged for forensic investigation.
The connection to the earlier Trivy compromise was acknowledged — the attack propagated through a compromised security scanning dependency in LiteLLM’s own CI/CD pipeline.
How SecureNexus SOVA Helps
The LiteLLM compromise is a textbook case for why software supply chain visibility matters — especially when AI infrastructure is involved.
SecureNexus SOVA provides SBOM-based dependency monitoring that tracks every package version across your development and production environments. When a compromised version like LiteLLM 1.82.7 is identified, SOVA maps every affected system, container, and pipeline in minutes — eliminating the manual scramble of "who installed what, when."
For organizations running LLM infrastructure, SOVA’s AI Asset Register (AI-BOM) tracks not just the AI models in use but the middleware, proxies, and gateways that connect them. LiteLLM sits at the center of that stack. SOVA ensures you know exactly where it is deployed, which version is running, and what credentials it has access to.
Learn more at securenexus.ai/products/sova
The Bigger Picture
This attack, combined with the Trivy compromise that preceded it, reveals a pattern that security teams need to internalize: supply chain attacks propagate. The Trivy VS Code extension was compromised through a stolen GitHub PAT. That same campaign led to compromised Trivy CLI releases. LiteLLM, which used Trivy in its CI/CD pipeline, inherited the compromise — and through it, the attacker reached PyPI and every system that pulled an unpinned LiteLLM install.
One stolen token. Three projects compromised. Thousands of downstream systems potentially affected.
The LLM infrastructure layer is a particularly high-value target. Libraries like LiteLLM hold the keys to every AI provider an organization uses. They process every prompt and every response. They run in environments loaded with API keys, cloud credentials, and service account tokens. Compromising the LLM gateway is compromising the entire AI stack.
The defense that worked here was also the simplest: pinned dependencies. Docker image users were safe because their requirements.txt specified exact versions. Everyone who ran pip install litellm without a version pin was exposed. In 2026, unpinned dependencies in production are not a convenience trade-off — they are an open door.
References & Resources
Official LiteLLM Security Update
https://github.com/BerriAI/litellm/issues/24518

Original Technical Analysis
https://github.com/BerriAI/litellm/issues/24512