shroomdog-original
6 articles
AI Agent Memory Architecture: The One Thing Claude Code's Source Code Taught Me
Every new session, your AI agent forgets everything. Buried inside Claude Code's leaked source was a three-layer memory architecture and a design principle — 'Memory is hint, not truth' — that changes how you think about building agents. Here's the full breakdown.
5 Bad Design Patterns from the Claude Code Source Leak
The Claude Code source leak had everyone excited about KAIROS and model codenames. But the same codebase contained a 3,167-line function, zero tests, silent model downgrades, and regex-based emotion detection. These aren't just Anthropic's mistakes — they're AI-generated code's default failure modes.
Prompt Cache Economics — Why Your AI Bill Is Higher Than You Think
Prompt caching should save you 90% on token costs — but one obscure bug can silently make you pay 10x more. From DANGEROUS_uncachedSystemPromptSection to the cch=00000 billing trap hidden in Claude Code's DRM, here's why prompt engineers now need to be accountants too.
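The "save 90% vs. pay more" arithmetic in that teaser can be sketched in a few lines. This is a minimal illustration, assuming Anthropic's published cache multipliers (cache writes at roughly 1.25x the base input-token price, cache reads at roughly 0.1x) and an assumed base rate of $3 per million input tokens; the function name and numbers are illustrative, not from the article.

```python
# Illustrative prompt-cache cost math (assumed multipliers: cache
# write = 1.25x base input price, cache read = 0.1x base input price).
BASE = 3.00 / 1_000_000  # assumed: $3 per million input tokens

def session_cost(prefix_tokens: int, turns: int, cached: bool = True) -> float:
    """Cost of `turns` requests that each resend a `prefix_tokens` system prefix."""
    if not cached:
        # Without caching (or with caching silently broken), every
        # turn pays full price for the whole prefix.
        return turns * prefix_tokens * BASE
    write = prefix_tokens * BASE * 1.25               # first turn writes the cache
    reads = (turns - 1) * prefix_tokens * BASE * 0.1  # later turns read it cheaply
    return write + reads

# A 20K-token system prompt over a 50-turn session:
uncached = session_cost(20_000, 50, cached=False)
cached = session_cost(20_000, 50, cached=True)
print(f"uncached ${uncached:.2f} vs cached ${cached:.2f}")
```

Under these assumed rates the cached session costs roughly an eighth of the uncached one, which is why a bug that quietly disables caching shows up as a multi-x jump on the bill rather than a subtle drift.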
The AI Agent Initiative Problem — When Should an Agent Act on Its Own?
You spent months building a powerful AI agent. It just sits there waiting for you to say something. That's not a technical problem — it's a design philosophy problem. From KAIROS's Heartbeat Pattern to OpenClaw's background sessions, this is about when to let your agent decide to act on its own.
Undercover Mode Asked a Question Nobody Wants to Answer
Hidden inside Claude Code's leaked source was a ~90-line file called undercover.ts — designed to make AI commits look like human commits. This surfaces a question the industry hasn't agreed on: when AI writes your code, should anyone know?
Can AI Test Itself? — From Claude Code's Zero Tests to Self-Testing Agents
Claude Code: 512K lines of TypeScript, 64K lines of production code, zero tests. But the more interesting question isn't why Anthropic skipped tests — it's why they didn't use their own AI coding tool to write them. Static analysis, MITM proxies, cross-model testing, and the philosophical trap of asking the same brain to write the exam and grade it.