Prompt Cache Economics — Why Your AI Bill Is Higher Than You Think

Prompt caching should save you 90% on token costs, yet one obscure bug can silently make you pay 10x more. From DANGEROUS_uncachedSystemPromptSection to the cch=00000 billing trap hidden in Claude Code's DRM, here's why prompt engineers now need to be accountants too.

Inside Claude Code's Prompt Caching — The Entire System Revolves Around the Cache

Anthropic engineer Thariq shared hard-won lessons from prompt caching in Claude Code: system prompt ordering is everything, you can't add or remove tools mid-conversation, switching models costs more than staying put, and a compacted conversation must share its parent's prefix. The team even fires SEV alerts when the cache hit rate drops. If you're building agentic products, this is a masterclass in real-world caching.
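Every one of those rules reduces to the same invariant: each request must reuse the longest possible exact prefix of the previous one. Here's a minimal cost-model sketch of why (the multipliers reflect typical published cache pricing, roughly 1.25x base for a cache write and 0.1x for a cache read; the token sequences and the `common_prefix_len`/`request_cost` helpers are illustrative, not Anthropic's actual billing code):

```python
# Hypothetical relative prices per input token.
CACHE_WRITE = 1.25  # tokens newly written into the cache (~1.25x base)
CACHE_READ = 0.10   # tokens re-read from an existing cache (~0.1x base)

def common_prefix_len(a, b):
    """Length of the shared prefix of two token sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def request_cost(prev, curr):
    """Relative cost of a request, given the prior request's cached prefix."""
    hit = common_prefix_len(prev, curr)
    return hit * CACHE_READ + (len(curr) - hit) * CACHE_WRITE

# Stable prefix: system prompt and tools come first; turns only append.
system = ["sys"] * 50
tools = ["toolA", "toolB"] * 25
turn1 = system + tools + ["user1"] * 10
turn2 = turn1 + ["asst1"] * 10 + ["user2"] * 10
stable = request_cost(turn1, turn2)

# Broken prefix: one tool swapped mid-conversation. The shared prefix now
# ends at the system prompt, so everything after it is re-billed as a write.
new_tools = ["toolC", "toolB"] * 25
turn2_broken = system + new_tools + ["user1"] * 10 + ["asst1"] * 10 + ["user2"] * 10
broken = request_cost(turn1, turn2_broken)

print(f"stable prefix cost: {stable:.1f}, broken prefix cost: {broken:.1f}")
```

Swapping a single tool roughly triples the cost of this one request, which is exactly why Claude Code treats the tool list as immutable for the life of a conversation.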

The LLM Context Tax: 13 Ways to Stop Burning Money on Wasted Tokens

The 'Context Tax' in AI carries a triple penalty: cost, latency, and reduced intelligence. Nicolas Bustamante's 13 techniques from Fintool cut agent token bills by up to 90%. A real-money guide to optimizing AI context, covering KV caches, append-only context, and 200K-token pricing.
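The append-only rule matters because a KV cache only survives exact prefix reuse: if the agent rewrites or reorders earlier context each turn, every turn is billed at full price. A toy simulation of cumulative input-token spend over an agent loop (illustrative token counts and the ~1.25x/0.1x write/read multipliers are assumptions, not Fintool's actual numbers):

```python
def cumulative_cost(turns, per_turn_tokens, append_only=True,
                    read=0.10, write=1.25):
    """Total relative input cost over a multi-turn agent loop.

    append_only=True: each turn appends tokens, so all prior context is a
    cache hit and only the new tokens are billed as cache writes.
    append_only=False: the context is rewritten every turn (summarized,
    reordered, tools edited), so nothing is reusable and the full context
    is billed at the write rate each time.
    """
    total, context = 0.0, 0
    for _ in range(turns):
        if append_only:
            total += context * read + per_turn_tokens * write
        else:
            total += (context + per_turn_tokens) * write
        context += per_turn_tokens
    return total

cheap = cumulative_cost(turns=20, per_turn_tokens=1_000)
pricey = cumulative_cost(turns=20, per_turn_tokens=1_000, append_only=False)
print(f"append-only: {cheap:,.0f}  rewritten: {pricey:,.0f}  "
      f"savings: {1 - cheap / pricey:.0%}")
```

Because the rewritten variant re-pays for the entire context every turn, its spend grows quadratically with conversation length while the append-only variant's cache hits keep growth nearly linear; at 20 turns the gap is already in the 80-90% range the article claims.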