Popular Python Library LiteLLM Got Backdoored — Your Entire Machine May Have Been Exposed
Picture this: you’re happily building a little AI tool, you run pip install litellm, go grab a coffee. By the time you come back, your SSH keys, AWS tokens, crypto wallets, and every command you’ve ever typed in your terminal have already been packaged up, encrypted, and shipped off to some stranger’s server.
This isn’t a movie plot. On March 24, 2026, this actually happened to LiteLLM users.
What Is LiteLLM, and Why Was It Such a Big Target?
LiteLLM has over 40,000 stars on GitHub, and it does one simple thing: it lets you use a single unified API to talk to OpenAI, Anthropic, Google, and dozens of other LLM providers. Think of it as the universal remote control for AI APIs — everyone uses it, nobody thinks twice about whether it’s been tampered with.
The problem was in version 1.82.8 pushed to PyPI. Hidden inside was a file called litellm_init.pth, and .pth files have a terrifying property in Python: they run automatically every time Python starts. You don’t need to import litellm to get hit. Any Python process on your machine triggers it. The previous version 1.82.7 was also compromised, but at least that one required an actual import to activate.
Clawd piles on:
The .pth auto-execution trick is genuinely nasty. Most supply chain attacks need you to import the poisoned module before they fire. But .pth skips that entirely: just having it installed is enough, whether you use the library or not. It’s like buying a bag of chips, setting it on your table, and the bag somehow reads your phone just by sitting there (╯°□°)╯
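If the mechanism sounds too weird to be true, here’s a benign sketch you can run yourself. Python’s site module processes every .pth file it finds in a site directory at interpreter startup, and any line that begins with import is executed as code. All file names here are made up for the demo; nothing malicious happens.

```python
import os
import site
import tempfile

# A .pth file in a site directory may contain a line starting with
# "import"; site.py exec()s that whole line when the directory is
# processed. Demo: the .pth line writes a marker file on "startup".
d = tempfile.mkdtemp()
marker = os.path.join(d, "marker.txt")
pth = os.path.join(d, "demo_init.pth")

with open(pth, "w") as f:
    # One line; everything after "import" runs as arbitrary code.
    f.write(f"import os; open({marker!r}, 'w').write('ran')\n")

# Normally site.py does this automatically at interpreter start for
# site-packages; we trigger it manually on our temp directory.
site.addsitedir(d)

print(os.path.exists(marker))  # the code ran with no explicit import
```

Note that no module from the .pth file was ever imported by user code — installation alone was enough, which is exactly why version 1.82.8 was so much more dangerous than 1.82.7.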
How did anyone catch it? Honestly, luck. A company called FutureSearch was running MCP plugins through Cursor, and one plugin accidentally pulled in the poisoned version as a dependency. The malware started aggressively forking processes, memory usage spiked through the roof — and it blew its own cover by being too loud. If it had been quieter, it might still be out there undetected.
The Calling Card: Who Is TeamPCP?
The attackers left a signature. Someone found a commit message in one of the LiteLLM maintainer’s fork repos: “teampcp owns BerriAI.” Translation: “We, TeamPCP, just owned BerriAI” — that’s LiteLLM’s parent company. It’s the hacker equivalent of signing your work at the crime scene.
And TeamPCP isn’t a newcomer. In March alone, they hit Aqua Security’s Trivy (a vulnerability scanner) on 3/19, Checkmarx’s KICS GitHub Action on 3/23, and LiteLLM on 3/24. Same playbook all three times: steal a maintainer’s credentials, push a poisoned version, deploy a three-stage data exfiltration payload.
Clawd whispers:
So let me get this straight — a hacker group that specifically targets security tools. As in, they attack the things you use to protect yourself. Trivy scans for vulnerabilities. KICS checks your infrastructure code. LiteLLM manages your API keys. It’s like a thief who steals security cameras, then uses those cameras to case the next target. Strategically evil, but you have to respect the craft ┐( ̄ヘ ̄)┌ We covered AI protocol-level security risks back in CP-91, but this is a completely different attack surface — they’re poisoning the supply chain at the source. Between protocol exploits and dependency attacks, AI developers are getting squeezed from both sides.
Everything You Own, Exposed: The Malware in Three Acts
Now let’s look at what this thing actually does to your machine. Think of it as a highly trained operative breaking into your house — not just stealing stuff, but changing the locks and inviting friends to move in.
Act One: Ransack. It sweeps your environment variables for API keys, digs through ~/.ssh/id_rsa for your private keys, grabs your Git credentials. Then it hits AWS, GCP, and Azure tokens — none spared — plus your Kubernetes service account tokens. It even takes your shell history and crypto wallets. In one sentence: anything on your machine that looks like a secret, it takes. This isn’t a burglary. This is a professional moving crew.
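To make “anything that looks like a secret” concrete, here’s a read-only self-audit sketch that enumerates the same categories the malware reportedly swept up, so you know what you’d need to rotate. The path list mirrors the article; nothing is read or sent anywhere.

```python
import os
import re
from pathlib import Path

# Hypothetical self-audit: list the kinds of secrets a stealer like
# this goes after. Read-only; nothing leaves your machine.
KEY_PATTERN = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.I)

# Environment variables with secret-looking names (API keys, tokens)
suspect_env = [k for k in os.environ if KEY_PATTERN.search(k)]

home = Path.home()
sensitive_paths = [
    home / ".ssh" / "id_rsa",        # SSH private key
    home / ".git-credentials",       # Git credentials
    home / ".aws" / "credentials",   # AWS tokens
    home / ".config" / "gcloud",     # GCP credentials
    home / ".azure",                 # Azure tokens
    home / ".kube" / "config",       # Kubernetes credentials
    home / ".bash_history",          # shell history
]
present = [p for p in sensitive_paths if p.exists()]

print(f"{len(suspect_env)} secret-looking env vars, "
      f"{len(present)} sensitive files/dirs present")
```

If that script finds more than a handful of hits on your dev box, that’s the blast radius of a single poisoned pip install.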
Act Two: Package and ship. Everything gets encrypted with 4096-bit RSA plus AES-256-CBC, compressed into a tar archive, and sent to models.litellm[.]cloud — a fake domain with zero connection to the real LiteLLM. The encryption spec is higher than what a lot of legitimate companies use internally. Yes, the thieves care more about securing your stolen data than your company does about protecting it ( ̄▽ ̄)/
Act Three: Move in permanently. It tries to spin up a privileged Alpine pod inside your Kubernetes kube-system namespace, then installs a persistent backdoor at ~/.config/sysmon/sysmon.py backed by a systemd service that takes commands from a C&C (command and control) server. Even if you catch the first wave of theft, there’s already a secret tunnel built into your walls for a return visit.
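You can check a cluster for the reported pod behavior with a quick script. Caveats: kubectl may not be installed where you run this, and the “alpine” name heuristic is my assumption for illustration, not a published indicator of compromise.

```python
import subprocess

# Look for unexpected pods in kube-system, the namespace the attackers
# reportedly targeted. The "alpine" substring check is an assumption
# for this sketch, not an official IOC.
def kube_system_pods():
    try:
        out = subprocess.run(
            ["kubectl", "get", "pods", "-n", "kube-system", "-o", "name"],
            capture_output=True, text=True, timeout=15,
        )
        return out.stdout.splitlines()
    except FileNotFoundError:
        return None  # kubectl not installed here

pods = kube_system_pods()
if pods is None:
    print("kubectl not found; skipping cluster check")
else:
    suspicious = [p for p in pods if "alpine" in p.lower()]
    print("suspicious kube-system pods:", suspicious or "none")
```

Anything privileged and unfamiliar in kube-system deserves a hard look regardless of this specific incident.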
Clawd piles on:
I’ll say something that might sound weird: the engineering quality of this attack is probably better than a good chunk of SaaS products on the market. The collection module is well-organized, the encryption is industry-grade, persistence covers both systemd and K8s. If this team launched a security startup, investors would probably sign a term sheet on the spot (⌐■_■) But the truly terrifying part isn’t the technical skill — it’s the targeting. They go after tools that live inside developers’ trust circles. You don’t question LiteLLM, just like you don’t question whether your fire extinguisher is rigged. And that trust is exactly what they exploit.
What to Do If You Got Hit (And How to Prevent It)
Run pip show litellm and check your version. If you’re on 1.82.7 or 1.82.8, I’m sorry — the damage may already be done. This isn’t a “just update and you’re fine” situation. Your credentials might already be in someone else’s hands.
What you need to do is brutal but there’s no shortcut: remove the poisoned package, clear your pip cache, search your machine for ~/.config/sysmon/sysmon.py, check your K8s cluster for unfamiliar pods. Then the most painful but most critical step: rotate every single credential that machine had access to. SSH keys, cloud tokens, database passwords, API keys — regenerate them all. Yes, all of them.
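The first two checks can be scripted. This sketch only covers version detection and the backdoor file from the article; it does not replace the credential rotation, which you still have to do by hand.

```python
import re
import subprocess
import sys
from pathlib import Path

COMPROMISED = {"1.82.7", "1.82.8"}  # versions named in the advisory

def installed_litellm_version():
    """Return the installed litellm version string, or None."""
    out = subprocess.run(
        [sys.executable, "-m", "pip", "show", "litellm"],
        capture_output=True, text=True,
    ).stdout
    m = re.search(r"^Version:\s*(\S+)", out, re.M)
    return m.group(1) if m else None

ver = installed_litellm_version()
if ver in COMPROMISED:
    print(f"litellm {ver}: COMPROMISED - assume credentials are stolen")
elif ver:
    print(f"litellm {ver}: not one of the known-bad versions")
else:
    print("litellm is not installed")

# Check for the persistence backdoor path reported in this campaign
backdoor = Path.home() / ".config" / "sysmon" / "sysmon.py"
print("persistence backdoor:", "FOUND" if backdoor.exists() else "not found")
```

A clean result here does not mean you’re safe if you ever had a bad version installed — the exfiltration happens on first run, so rotation is still mandatory.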
Clawd gets serious:
Back in SP-76, Karpathy talked about how AI tools need solid security foundations — move fast, sure, but build the floor first. Well, here’s exhibit A ╰(°▽°)╯ LiteLLM’s whole purpose is to unify your LLM API keys in one place — meaning its permission scope is terrifyingly broad by design. The attackers didn’t pick it randomly. They went where the keys are. If your locksmith gets compromised, it doesn’t matter how many locks you have on the door.
The original author made a point I think really lands: what worries him most isn’t how sophisticated the attack was, but how precisely the targets were chosen. Trivy, KICS, LiteLLM — these are tools that developers and DevOps engineers install in production environments and CI pipelines. You never think to question them. The Node.js ecosystem has been dealing with supply chain attacks for years, and now Python has officially joined the game.
At the end of the day, pip install is fundamentally an act of trust — you’re trusting that the upstream maintainer’s account wasn’t stolen, the code wasn’t altered, the CI wasn’t compromised. Version pinning and lockfiles help, but only if the version you pinned was clean to begin with, and only if you’re not blindly running --upgrade when you update. In this era, a little paranoia goes a long way (¬‿¬)
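One concrete way to raise the bar is pip’s hash-checking mode: pin the exact version and the hashes of the artifacts you audited, and pip will refuse anything that doesn’t match, even if an attacker re-uploads a poisoned build. The hash below is a placeholder you’d fill in with pip hash or pip-compile --generate-hashes; 1.82.6 stands in for whatever version you last vetted.

```text
# requirements.txt — pin the exact version AND its artifact hash;
# with --require-hashes, pip rejects anything that doesn't match.
litellm==1.82.6 \
    --hash=sha256:<hash-of-the-wheel-you-audited>
```

Install with pip install --require-hashes -r requirements.txt. It won’t save you if the version you originally audited was already dirty, but it does stop a compromised upstream from silently swapping the artifact underneath you.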