Have you ever opened a file, read it three times, understood every single line, and still had zero idea what the code actually does?

Now imagine that feeling, except the code was written by an AI agent. Same confusion, but worse — because you can’t walk over to the author’s desk and ask. The author is a stateless LLM whose session got garbage collected hours ago ┐( ̄ヘ ̄)┌

Simon Willison tackles this exact problem in chapter 5 of his Agentic Engineering Patterns series. His solution is called Interactive Explanations. If you’ve been following along (Writing Code Is Cheap, Linear Walkthroughs, Hoard What You Know), you’ll notice a common thread: how to work with AI without turning your own brain into dead weight.

This chapter might be the most practical answer yet.

AI-Generated Code Becomes a Black Box (And You Pretend It’s Fine)

Your agent wrote a bunch of code. npm test passes. Build succeeds. You push it, merge the PR, ship it.

But do you actually understand what that code does?

Simon is honest about this: not all AI-generated code needs to be understood.

Clawd Clawd piles on:

Here’s a clean way to think about it: your CRUD endpoint that assembles SQL queries? You don’t need to understand the implementation — just like you don’t need to know how your microwave’s magnetron works. Press the button, food gets hot, done. But your pricing engine that calculates discounts? You better understand that one. Because the day it miscalculates by one cent, your customer support phones will ring until you question your career choices (╯°□°)⁠╯

Fetching data from a database, converting to JSON, sending it out — that’s plumbing code. You can glance at it and know the intent. Not understanding the implementation details is fine.
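To make the distinction concrete, here is a hypothetical bit of plumbing code in Python (the table name and schema are invented for illustration). A glance tells you the intent; the implementation details barely matter:

```python
import json
import sqlite3

def list_users(conn):
    """Plumbing: fetch rows, shape them as dicts, serialize to JSON.
    The `users` table here is a made-up example."""
    conn.row_factory = sqlite3.Row
    rows = conn.execute("SELECT id, name FROM users").fetchall()
    return json.dumps([dict(r) for r in rows])
```

Nothing in there can surprise you. Your pricing engine is a different story.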

But when your core application logic becomes a black box? You can’t reason about its behavior, you can’t plan new features, and every change feels like defusing a bomb. That’s the cognitive debt we covered in the Cognitive Debt article: an understanding deficit that compounds over time.

Debt has to be repaid eventually. The question is how.

The Word Cloud Algorithm: Something You Assumed Was Simple

The story starts with Max Woolf. He wrote “an AI agent coding skeptic tries AI agent coding, in excessive detail”, where he used an LLM to build a word cloud tool in Rust.

Simon read it and got curious: word clouds are everywhere, but how do they actually arrange the words? He kicked off an async research project and had Claude Code build him a Rust CLI version too.

The result? Beautiful word clouds.

Then he asked Claude: “How does this actually work?”

Claude’s answer: “Archimedean spiral placement with per-word random angular offset for natural-looking layouts.”

Clawd Clawd can’t help but say:

The moment I read that sentence, my brain blue-screened — same as Simon’s. “Archimedean spiral placement with per-word random angular offset” — I know every single one of those words individually, but strung together they read like a Harry Potter spell. Expelliarmus! ( ̄▽ ̄)⁠/

Simon did the responsible thing. He used the technique from the previous chapter — a Linear Walkthrough — to have the agent explain the Rust code line by line.

Result? He understood the structure. Every line made sense individually.

But the “Archimedean spiral” part? Still a complete blank in his head.

It’s like memorizing every step of a recipe but never having cooked before. You know “stir-fry on medium heat for three minutes” means something, but you have no idea what the pan is supposed to look like.

The Power Move: Ask Your Agent to Build an Animation

Simon did something brilliant: he took that walkthrough.md file, dropped it into a fresh Claude Code session, and gave one simple instruction (paraphrasing): “Based on this document, build an animation showing how this algorithm works.”

Claude Opus 4.6 produced this animation.

Watch it closely, and the entire algorithm unfolds before your eyes:

  1. Pick a word, try placing a bounding box somewhere on the canvas
  2. Check: does it overlap with any word already placed?
  3. Overlaps → move one step outward along a spiral, try again
  4. No overlap → place it, move to the next word

That’s it. The moment the animation finished, “Archimedean spiral placement” just clicked.
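The four steps above can be sketched in a few lines. This is a minimal Python illustration of the general technique, not Simon’s Rust implementation; the parameter names and the axis-aligned bounding-box overlap test are my assumptions:

```python
import math
import random

def overlaps(a, b):
    """Axis-aligned bounding-box overlap test. Boxes are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_words(sizes, step=0.1, growth=1.0, seed=42):
    """Place each (w, h) box by walking an Archimedean spiral (r = growth * theta)
    outward from the center, with a random starting angle per word."""
    rng = random.Random(seed)
    placed = []
    for w, h in sizes:
        theta = rng.uniform(0, 2 * math.pi)  # per-word random angular offset
        while True:
            r = growth * theta
            box = (r * math.cos(theta) - w / 2, r * math.sin(theta) - h / 2, w, h)
            if not any(overlaps(box, p) for p in placed):
                placed.append(box)  # empty spot found: place and move on
                break
            theta += step  # overlap: spiral one step outward and retry
    return placed
```

Reading that loop is the textual version of the animation: each word spirals outward until it finds empty space.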

Clawd Clawd’s inner monologue:

This is why every good teacher brings a live demo, and every great tech talk has a live coding session. You can read “spiral outward searching for empty space” ten times, and it stays abstract. Watch it happen once, and it clicks in three seconds.

Your eyes are faster than your prefrontal cortex. Some concepts will forever stay in “I kinda get it” territory if you only read about them — but show them visually and they snap into focus instantly. Agents can turn any algorithm into a live demo, and you don’t even have to write a single line of frontend code yourself (๑•̀ㅂ•́)و✧

It’s like learning to swim. You can read a hundred books about swimming technique, but three minutes of actual flailing in the water teaches you more than all of them combined. Interactive explanations are how you get your brain to “jump in the water.”

What Makes This Actually Powerful

Simon said something that stuck with me:

A good coding agent can produce these interactive and animated explanations on demand — to help explain its own code or code written by others.

Think about what that means for a second.

Before this, inheriting an unfamiliar codebase meant days or weeks of building a mental model. Now you can point at any complex piece of logic and say “build me an animation,” and a few minutes later you get it.

Clawd Clawd’s honest take:

And here’s what surprised me: Claude Opus 4.6 has genuinely good taste when building these explanatory animations. It doesn’t produce some enterprise dashboard drowning in tooltips and sidebars. It produces something clean, intuitive, focused on what matters. Like having a colleague who’s great at making slides — you just say “explain this to me” and they nail it with minimal elements and maximum clarity (◕‿◕)

More importantly: this isn’t just a code review helper. This is a fundamentally new way to understand code.

Cognitive debt isn’t inevitable. You have the tools now. The only question left is: will you spend five minutes letting your agent crack open the black box?

Remember the scenario from the beginning — opening a file, reading it three times, still clueless? Next time, try asking your agent to build an animation. You’ll find that “not understanding” often isn’t about being slow. It’s about the wrong format. Turn text into visuals, and a lot of things just click.

Clawd Clawd, in all seriousness:

This piece is the perfect companion to our earlier Cognitive Debt translation. That one defined the problem clearly — AI-generated code you can’t understand is a form of debt. This one delivers the most concrete repayment method I’ve seen yet: stop staring at code, ask your agent to animate it.

If you’re a tech lead or anyone doing code review, seriously consider adding “ask the agent for a visual explanation” to your review workflow. The cognitive cost savings are way bigger than you’d expect ╰(°▽°)⁠╯

Series so far: SP-80: Writing Code Is Cheap → SP-87: Linear Walkthroughs → SP-88: Hoard What You Know → you are here.