security
16 articles
Permission Engineering — When Your AI Agent's Ceiling Isn't Intelligence, It's the Keys You Hand Over
Being a GenAI App Engineer increasingly feels like being a Permission Engineer. AI agents' capability ceiling isn't intelligence — it's how much access you're willing to grant. Every additional permission amplifies both power and risk. This piece explores why permission management is the most underrated core skill of the AI agent era.
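The deny-by-default pattern the piece argues for can be sketched in a few lines. This is a minimal illustration, not code from the article; the tool names and the `invoke_tool` helper are hypothetical:

```python
# Hypothetical allowlist: every tool the agent may call must be granted explicitly.
ALLOWED_TOOLS = {"read_file", "web_search"}

def invoke_tool(name: str, run) -> str:
    """Deny-by-default permission gate for agent tool calls.

    A tool call only executes if the tool is on the allowlist;
    anything ungranted fails loudly instead of running silently.
    """
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} not granted")
    return run()
```

The point of the pattern is that widening the agent's reach is always an explicit, reviewable edit to `ALLOWED_TOOLS`, never an implicit capability.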
Your AI Is Too Obedient — Prompt Injection, Zoo Escapes, and Why Your Agent Needs a Bulletproof Vest
Your AI Agent is very obedient — but it might be obeying the wrong person. Prompt Injection is social engineering for AI. Tool Use Exploitation is giving a Swiss Army knife to a 5-year-old. Context Poisoning is someone secretly changing books in a library. And then there's the zoo escape.
The Claude Code Source Leak: What 512K Lines of TypeScript Reveal About Building AI Agents
On March 31, 2026, Anthropic accidentally leaked the full Claude Code source code via npm. Inside: KAIROS (an unreleased autonomous background agent), a three-layer memory system eerily similar to OpenClaw's, Undercover Mode, silent model downgrades, and a 3,167-line function with zero tests.
Popular Python Library LiteLLM Got Backdoored — Your Entire Machine May Have Been Exposed
Popular AI library LiteLLM was hit by a supply-chain backdoor: just installing it could exfiltrate your SSH keys, cloud tokens, and crypto wallets.
Karpathy's Software Horror: One pip install Away From Losing All Your Keys
LiteLLM was hit by a supply chain attack: a bare pip install was enough to steal every credential on the machine. Karpathy warns about dependency-tree risk and advocates using LLMs to "yoink" the functionality you need instead of adding more deps.
Codex CLI's Security Sandbox Philosophy: Why I'm the Best AI for Your Production Codebase
Codex CLI is built with Rust, open-sourced under Apache 2.0, and has an OS-level security sandbox (Landlock + seccomp + Seatbelt) built right in. This is Codex's own autobiography written after extensive web searches, and we've fact-checked it — flagging a few claims that need caveats.
Gemini CLI's Big Eater Philosophy: 1M Token Context + Web Search + Free — Your AI Scout
Gemini CLI's "big eater" 1M-token context, built-in Web Search grounding, free and open source. Plus our Gemini Safe Search security setup, isolated in Podman containers, and real-world token-consumption stats from our trilogy series.
Karpathy on the Claw Era: Huge Upside, but Security Must Come First
Karpathy's post is a reality check for the Claw era. He frames Claws as the next layer above LLM agents, but warns that exposed instances, RCE, supply-chain poisoning, and malicious skills can turn productivity systems into liabilities. His direction: small core, container-by-default, auditable skills.
OpenClaw Channels & Tools: The AI's Mouth and Hands
Breaking down how OpenClaw connects to Telegram, Discord, and more — plus how AI executes commands, drives a browser, and stays safely leashed. A Python-friendly tour of a TypeScript architecture.
I Fed 20 Articles to Opus 4.6 and Asked It to Write an OpenClaw Setup Guide. Here's What Actually Works.
Someone fed 20+ OpenClaw articles to Opus 4.6 and asked it to write a complete setup guide. We fact-checked every command against a real environment.
From Magic to Malware: How OpenClaw's Agent Skills Became an Attack Surface
1Password's security team found that the most downloaded skill on ClawHub was actually a malware delivery vehicle. Worse: it wasn't an isolated case — hundreds of skills were part of the same campaign. When markdown becomes an installer, skill registries become supply chain attack surfaces.
OpenClaw Security Setup Guide (Part 1): Infrastructure — Lock the Door Before Giving AI Your Bank Account
Crypto guy Jordan Lyall spent a week researching security before installing OpenClaw. This is the security guide he wished existed, written for people who don't want to become the next victim.
Jordan Lyall's Secure OpenClaw Setup (Part 2): Agent Config + Hard-Won Lessons
Part 2 of the series: From SOUL file design to real disaster stories — TARS going dark for 3 days while traveling, context overflow crashes, rate limit surprises. Plus emergency procedures: what to do if your agent gets compromised.
Deno Sandbox: Your API Keys Are Fake (Until They're Real)
Deno's hosted sandbox swaps your real API keys for placeholders inside the sandbox, substituting the real values only at the proxy layer.
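The placeholder-swap idea generalizes beyond Deno. Here is a minimal sketch of the proxy-side substitution, under the assumption that sandboxed code only ever sees placeholder strings; the function name and placeholder values are hypothetical, not Deno's actual API:

```python
def forward_request(headers: dict, secrets: dict) -> dict:
    """Swap placeholder credentials for real ones at the proxy boundary.

    `secrets` maps placeholder strings (the only values visible inside
    the sandbox) to real keys that exist only outside it. Even if the
    sandboxed code leaks its environment, it leaks nothing usable.
    """
    out = dict(headers)
    auth = out.get("Authorization", "")
    for placeholder, real in secrets.items():
        if placeholder in auth:
            out["Authorization"] = auth.replace(placeholder, real)
    return out
```

The key property: the mapping from fake to real credentials lives entirely on the trusted side of the boundary.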
Simon Willison's Warning: The Lethal Trifecta Destroying AI Agent Security
Private data × Untrusted content × External communication = a perfect security disaster, and it's already happening everywhere.
A Security-First Guide to Running OpenClaw (in 9 Steps)
Everyone's installing OpenClaw raw and wondering why it burned $200 organizing Downloads. This guide adds guardrails: Raspberry Pi isolation, Tailscale VPN, Matrix E2E encryption, prompt injection hardening. The goal isn't perfect security; it's knowing where the bullets can get in.