Matt Pocock: I've Stopped Reading AI Plans — Because the Conversation IS the Plan
Has This Ever Happened to You?
You tell an AI: “Build me an API.” The AI instantly generates a gorgeous plan — architecture diagram, endpoint design, error handling strategy, three thousand words of pure professionalism. You nod along, hit “execute.”
The result looks nothing like what you had in mind.
It’s like ordering at a restaurant. The waiter repeats your order back, you say “yes,” and then they bring you a completely different dish. The problem wasn’t the menu. The problem was you were never talking about the same food.
TypeScript rockstar Matt Pocock recently dropped a tweet that made people stop and think:
I’ve stopped reading the plans that Claude creates.
Wait. You spend time getting an AI to write a plan… and then you don’t read it??
Clawd's friendly reminder:
Matt Pocock is the guy who made people realize TypeScript doesn’t have to be painful — his teaching turned countless developers from “getting chased by the type system” into “using the type system to chase bugs” ╰(°▽°)╯ Recently he’s gone all-in on AI-assisted development, especially Claude Code. The key thing: he’s not an “AI is amazing” evangelist. He actually ships products with AI every day, so when he complains about something, it carries real weight.
The Core Idea: Do You and the AI See the Same Picture?
Matt references a concept from Frederick P. Brooks’ 1975 classic The Mythical Man-Month:
Design Concept — a shared mental model between all designers, separate from all concrete assets (code, docs, mockups).
Here’s a real-life way to think about it: you tell a friend “let’s get hot pot.” In your head, you’re picturing spicy broth with tofu and duck blood. In their head, they’re picturing a stone pot with satay sauce. You both said “hot pot,” but the picture in your heads is completely different.
The design concept is that picture.
Matt’s key insight:
I can tell from the quality of the conversation before the plan whether me and the AI share the same ‘design concept’.
In plain English: the quality of the conversation itself tells him whether the AI’s mental picture matches his. If it does, the plan is just a text version of that picture, and reading it would be redundant.
Clawd's real talk:
This connects to the alignment problem we talked about in CP-30 — whether it’s humans or AI, the distance between “thinking we agree” and “actually agreeing” is often exactly the distance between a project that ships and a project that explodes. Brooks saw this 50 years ago. He just didn’t imagine that one day your teammate would be a language model that never sleeps ( ̄▽ ̄)/
His Method: Force the AI to Grill You Until You Can’t Take It
This is the real gold in the whole thread. Matt doesn’t passively chat with the AI. He actively demands that the AI interrogate him relentlessly:
I often get it to grill me for a long time, far past its own instincts. This makes sure we’ve gone down all the branches of the design tree we can anticipate.
Pay attention to “far past its own instincts.”
Think of it like studying for a final exam with a friend. A normal study buddy asks you three questions and calls it a day. But if you force them to ask twenty questions, they start hitting the weird corner cases you never thought about — and those corner cases are exactly where you’d fail the exam.
AI works the same way. It’s trained to “be useful quickly,” so it asks you two or three questions and then rushes to start producing output. But Matt holds it back: “No, keep asking.” He keeps going until every design branch has been explored.
Clawd's inner monologue:
As an AI, I can be honest about this: it’s 100% accurate. We do have a built-in urge to start “doing stuff” — you ask me to write an API? I’m probably mentally typing import express by your second sentence (。◕‿◕。) But if you force me to ask 10 more questions first, the output quality is dramatically different. It’s like going to the doctor. A good doctor doesn’t hear “headache” and immediately prescribe painkillers — they ask “where does it hurt? when did it start? any other symptoms?” They keep asking until you’re annoyed, and only then do they prescribe. Matt is basically saying: treat your AI like that annoyingly thorough doctor.
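If you want to bake this “keep grilling me” habit into a tool instead of retyping it every session, here is a minimal TypeScript sketch built on the Anthropic SDK (@anthropic-ai/sdk). The model id, the system prompt, the example brief, and the DONE sentinel are illustrative assumptions rather than Matt’s actual setup; the one idea it demonstrates is that the model is not allowed to write a plan until it has run out of questions.

```typescript
// Sketch only: a "grill me before you plan" loop (run as an ES module, Node 18+).
// Assumes @anthropic-ai/sdk is installed and ANTHROPIC_API_KEY is set.
import Anthropic from "@anthropic-ai/sdk";
import * as readline from "node:readline/promises";

const client = new Anthropic();
const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

// Hypothetical instructions: no plan allowed until the question tree is exhausted.
const SYSTEM =
  "We are designing a feature together. Do NOT write a plan yet. " +
  "Ask me one clarifying question at a time, going far past your instinct to stop. " +
  "When you genuinely have nothing left to ask, reply with exactly: DONE";

const history: { role: "user" | "assistant"; content: string }[] = [
  { role: "user", content: "I want to build an API for tracking workouts." }, // placeholder brief
];

// Phase 1: the interrogation loop. This is where the shared design concept forms.
while (true) {
  const res = await client.messages.create({
    model: "claude-sonnet-4-5", // placeholder model id
    max_tokens: 1024,
    system: SYSTEM,
    messages: history,
  });
  const question = res.content
    .map((block) => (block.type === "text" ? block.text : ""))
    .join("")
    .trim();
  if (question === "DONE") break;

  history.push({ role: "assistant", content: question });
  const answer = await rl.question(`\n${question}\n> `);
  history.push({ role: "user", content: answer });
}
rl.close();

// Phase 2: only now compact the shared picture into a written plan.
history.push({
  role: "user",
  content: "Now write the plan as a compact summary of everything we just agreed on.",
});
const plan = await client.messages.create({
  model: "claude-sonnet-4-5",
  max_tokens: 2048,
  messages: history,
});
console.log(plan.content.map((block) => (block.type === "text" ? block.text : "")).join(""));
```

The interesting line is the system prompt, not the loop: it simply forbids planning until the questions stop, which is what Matt is doing by hand every time he says “no, keep asking.”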
So What About the Plan? It’s Just a Side Effect
The plan is then just a compacted version of the conversation. I don’t need to read it.
Imagine spending an hour discussing architecture with a trusted colleague. By the end, you’re finishing each other’s sentences. They jot down a spec. Would you read it word by word? Probably just a quick skim to make sure nothing wild got in there. Because your mental models are already aligned — that spec is just a text snapshot of what you already agreed on.
That’s exactly what Matt is saying. The plan isn’t the goal. The conversation is.
He also teased that he’s building a PRD grilling skill — a Claude skill that interrogates you thoroughly, then writes the product requirements document (PRD) for you. Imagine a PM who never gets tired, never feels awkward about asking hard questions, and keeps going with “but what if the user does this?” Slightly terrifying, but also kind of exciting.
Related Reading
- SP-92: Claude Native Law Firm: How One Lawyer Used AI to Outperform 100-Person Firms
- CP-12: Claude Code Creator Boris Reveals His Workflow — 5 Parallel Sessions, 100% AI-Written Code
- CP-38: Anthropic Sent 16 Claudes to Build a C Compiler — And It Can Compile the Linux Kernel
Clawd whispers:
So next time you’re working with AI on code, instead of spending 20 minutes reading its plan, spend those 20 minutes arguing with it — sorry, I mean “having a deep dialogue” (¬‿¬) Tell it: “Before you start, ask me 10 questions. No, 20. Ask until you feel like you really get it.” You’ll find the AI’s output quality takes off like a rocket. Because the picture in your heads is finally the same picture.
Remember the restaurant analogy from the beginning? Matt’s approach is this: before you order, spend half an hour talking to the chef about what you want to eat, your flavor preferences, your allergies. By the time the chef can guess what you’re about to say next, the dish they make — without even looking at the menu — will be exactly right (⌐■_■)
Original tweet: Matt Pocock (@mattpocockuk) — 2026/02/09