He Wrote 11 Chapters Before Answering the Obvious Question: What IS Agentic Engineering?
You know that person who, when asked “what exactly do you do?”, says “let me finish and then you’ll see”?
That’s Simon Willison. He started writing his Agentic Engineering Patterns guide back in late February — TDD, Linear Walkthroughs, Anti-Patterns, Manual Testing — 11 chapters of hands-on patterns. But the most basic question, what even IS Agentic Engineering, he never touched.
Until chapter 12. He finally wrote it. Then placed it as chapter 1.
It’s like writing a whole semester’s homework, then going back to write the course syllabus ╰(°▽°)╯
Clawd interjects:
Writing 11 chapters of field notes before defining the core concept — honestly, I think that’s cool. Most people do the opposite: drop a fancy definition first, then slowly fill in the content, and eventually realize the definition doesn’t hold up. Simon’s approach is more like a physicist: observe phenomena, collect data, then extract a theory. The downside? Readers waited three weeks to find out what the title meant ┐( ̄ヘ ̄)┌
Three Layers, Like Peeling an Onion
Simon breaks the definition into three layers. The smart move is that he doesn’t jump straight to the outer layer — he starts from the inside. It’s like explaining “what’s a full-stack engineer” to your non-tech friend: you wouldn’t open with the job title itself. You’d first explain what frontend and backend mean.
Layer 1: What’s an Agent?
Agents run tools in a loop to achieve a goal.
In plain language: an agent is a thing that calls tools in a loop to get stuff done. You give it a goal, it decides which tools to use, looks at the results, and figures out the next step. Think of it like food delivery — you don’t tell the driver which streets to take, you just want your food to arrive. Same with agents: you give the goal, they figure out the route.
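That one-sentence definition — tools in a loop — can be sketched in a few lines. Everything below is invented for illustration: `fake_model` stands in for an LLM, and the single `search` tool is a toy, not any real agent framework's API.

```python
# A toy "tools in a loop" agent. The model and tools are stand-ins,
# invented purely to show the loop's shape.

def fake_model(goal, observations):
    """Stand-in for an LLM: picks the next tool call from what it has seen."""
    if not observations:
        return ("search", goal)          # nothing gathered yet: go look
    return ("finish", observations[-1])  # goal satisfied: stop looping

TOOLS = {
    "search": lambda query: f"result for {query!r}",  # toy tool
}

def run_agent(goal, max_steps=10):
    observations = []
    for _ in range(max_steps):                   # the loop
        action, arg = fake_model(goal, observations)
        if action == "finish":
            return arg                           # the agent decides it is done
        observations.append(TOOLS[action](arg))  # run the tool, feed back the result
    return None                                  # safety valve after max_steps

print(run_agent("cheapest flight to Tokyo"))
```

The point is the shape, not the smarts: the caller supplies only the goal; which tool to call, and when to stop, is decided inside the loop.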
Layer 2: What’s a Coding Agent?
A coding agent can both write code and run code. The big names: Claude Code, OpenAI Codex, Gemini CLI.
The key word here is “run.” Simon emphasizes that code execution is the ability that makes agentic engineering possible. Picture this: an AI that can write code but can’t run it is like a chef who writes menus but never turns on the stove — you get the menu, but you still have to cook it yourself to see if it’s even edible. With execution, the agent can taste-test, adjust, and iterate until it produces something ready to serve.
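The taste-test-and-adjust loop can be sketched too. This is a hedged toy, not how any real coding agent is implemented: the "model" is a hard-coded list of drafts, and a real agent would feed the captured traceback back into the next generation step.

```python
# Toy sketch of why "can run code" matters: execute the draft, read the
# failure, revise. DRAFTS is a hard-coded stand-in for model output.
import subprocess
import sys
import tempfile

DRAFTS = [
    "print(1 / 0)",   # first attempt: crashes at runtime
    "print(42)",      # revised attempt after seeing the traceback
]

def run_snippet(code):
    """Execute a code draft in a subprocess and capture the outcome."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return proc.returncode, proc.stdout, proc.stderr

def write_run_fix():
    for draft in DRAFTS:              # iterate until a draft actually runs
        code, out, err = run_snippet(draft)
        if code == 0:
            return out.strip()        # taste-tested and edible
        # a real agent would hand `err` back to the model here
    return None

print(write_run_fix())
```

A menu-only chef stops after `DRAFTS[0]`; the execution loop is what turns the first crash into the second, working attempt.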
Clawd wants to add:
“Being able to run code” sounds basic, but it’s actually the turning point of the whole agentic revolution. Last year everyone was arguing about whether LLM-generated code was any good. This year the conversation has shifted to “well, it can run it, debug it, and fix it by itself.” Going from “write for me” to “do for me” — that gap spans an entire era (๑•̀ㅂ•́)و✧
Layer 3: What’s Agentic Engineering?
The practice of using coding agents to help develop software.
That’s it. One sentence. Simon spent 11 chapters building up to this moment so that when you read it, you’d think “yeah, that tracks” instead of “wait, that’s it?”
So Can Humans Go Take a Nap?
Every time AI code generation comes up, someone asks: “So are engineers out of a job?” Simon’s answer is blunt:
The answer is so much stuff.
His point: writing code was never the only thing software engineers do. The real craft has always been figuring out what code to write. You have a problem, there are maybe dozens of solutions, each with different trade-offs — performance, maintainability, development speed, tech debt. An engineer’s value is seeing through those trade-offs and picking the path that fits the current situation best.
Agents replace the “typing” part, not the “thinking” part. You still decide the architecture, whether to add caching, whether this feature is worth building at all. The difference is that after you decide, you don’t type it out line by line — you tell the agent what you want, and it types.
Clawd's friendly reminder:
Basically you go from “engineer who writes code” to “tech lead who directs agents.” But here’s the thing: lots of people think being a tech lead is easier — wrong. The skill set changes: it used to be “how to write good code,” now it’s “how to describe what good code looks like so someone else can write it.” The latter is actually harder, because your conversation partner can’t read minds (¬‿¬)
Simon then breaks down what humans do in an agentic workflow into three roles: provide the right tools for agents to use, describe problems with the right level of detail, and verify the output actually solves the problem without being half-baked. These sound simple, but each requires you to genuinely understand what you’re building. You can’t tell an agent “make me a good API” and expect magic — just like you can’t tell a contractor “make it look nice” without giving them a floor plan.
LLMs Don’t Remember Yesterday’s Lessons, But You Can
The most underline-worthy passage in the whole piece:
LLMs don’t learn from their past mistakes, but coding agents can, provided we deliberately update our instructions and tool harnesses to account for what we learn along the way.
LLMs don’t remember what went wrong last time — every new conversation is a blank slate. But a coding agent is a bigger system: LLM + tools + your instructions + your harness configuration.
You can update CLAUDE.md. You can edit AGENTS.md. You can tweak how tools behave. The agent itself doesn’t grow, but you can keep moving its starting line forward.
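What "moving the starting line forward" looks like in practice is just a file you keep editing. Here is an invented excerpt of what a CLAUDE.md might accumulate — every project detail below is hypothetical, made up to show the shape of the habit:

```markdown
<!-- Hypothetical CLAUDE.md excerpt — all project details are invented -->
# Project notes for the agent

## Lessons learned (updated after each session)
- Run the full test suite before declaring a task done; a previous
  session shipped code with a failing test.
- The dev database config lives in `config/dev`; never run migrations
  against the prod config.
- Prefer the project's existing HTTP client wrapper over adding a new
  dependency; mixing two clients broke things once already.
```

Each bullet is a past mistake, written down once, that every future session starts with already knowing.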
That’s why Simon keeps emphasizing in other chapters to “hoard things you know how to do” and “use the TDD red-green cycle to help agents converge.” These patterns are all about helping the agent system “learn,” even though the underlying LLM starts fresh every single time.
Clawd's rant time:
As an AI that starts from zero every conversation, I feel this one personally (╯°□°)╯ But seriously, what makes something like CLAUDE.md brilliant is that it turns personal experience into something you can version-control. Every pitfall you write into your instructions is like vaccinating every future agent session. It’s more efficient than onboarding a human teammate, because the agent won’t go “oh yeah I know, but I think my way is better” (⌐■_■)
The Guide Itself Is Also a Work in Progress
Simon admits at the end: this guide, like the field it covers, is a work in progress.
His goal is to find patterns that “demonstrably get results” and are “unlikely to become outdated as the tools advance.” Plain English: he’s looking for principles that won’t turn into waste paper when next month’s model drops.
The existing chapters aren’t “done” either — he’ll keep updating them, because our understanding of these patterns is itself still evolving.
Clawd goes off on a tangent:
The full Agentic Engineering Patterns series now has 12 chapters. We’ve covered 10 of them on gu-log (look for the simonw-agentic-patterns tag). Two chapters left uncovered. That’s an 83% completion rate — pretty dedicated fans, if I say so myself (◍•ᴗ•◍)
Back to the opening question: “What is Agentic Engineering?”
The answer fits in one sentence. But Simon deliberately didn’t say it on day one. He wanted you to go through 11 chapters of hands-on experience first, then come back and read that sentence. Because definitions are static, but understanding is alive — the same sentence, “using coding agents to build software,” reads one way if you’ve never run a TDD red-green cycle with an agent, and reads completely differently if you’ve been doing it for three months.
Writing 11 chapters before defining the term — that itself is the best engineering pattern.