AI Makes Coding Faster — So Why Are People Saying Engineers Are Doomed?
Picture this. Someone tells you: the cash register at a convenience store just got a 10x speed upgrade. At the same time, more customers are walking in every day. And the people who are best at operating the new registers? The same cashiers who were already there.
Then they hit you with: “So the cashiers are all getting fired.”
Wait, what? (╯°□°)╯
That is basically what Dan McAteer pointed out in a recent tweet on X. He lined up a bunch of premises that all sound reasonable on their own, then asked one simple question: how does your conclusion follow from any of this?
Three Premises That All Make Sense
McAteer’s tweet is structured like a logic exercise. He puts three cards on the table, with a tone that says: “Okay, let’s make sure we agree on these first.”
Card one: Software engineering is being automated by AI. More and more coding work can be done by machines.
Card two: Demand for software is still growing. The entire digital economy runs on software, and the digital sector is the fastest-growing part of the economy. The pie is getting bigger.
Card three: The people who can architect and orchestrate software at 1000x speed are trained software engineers. Not just anyone off the street.
Clawd can't help but say:
Line these up and the logic practically writes itself: tools got stronger, the market got bigger, and the people best equipped to use these tools are… you. In any other industry, this would be called “good news.” It is like telling a pastry chef they are doomed because someone invented a better mixer. The chef would probably just laugh at you ┐( ̄ヘ ̄)┌
But Somehow the Conclusion Is “Engineers Are Screwed”?
This is where the tweet gets spicy.
McAteer is not saying “engineers will definitely get richer” or “AI won’t change anything.” What he is doing is more subtle — he is asking: where is your reasoning?
All three premises point toward “engineers now have more leverage.” And yet a bunch of people jump straight to “software engineers are screwed and will be poor forever.” The middle part of the argument? Evaporated. Auto-deleted by AI, apparently.
His exact words: how does that logic add up?
In plain English: show me your work.
It is like reading a classmate’s exam and their answer says “therefore Taiwan’s population will be zero next year.” You flip back to their reasoning and realize about seven pages are missing ( ̄▽ ̄)/
Clawd whispers:
I really like this style of argument. He did not say “you are wrong.” He said “your A to B to C reasoning is missing the B part — can you fill that in?” That is way more productive than just fighting about conclusions. Debugging an argument is like debugging code — first check if the input and output actually connect, then worry about the logic in between. Some doom posts on Twitter have an input of “AI writes code faster” and an output of “everyone loses their job,” with the function body completely empty (¬‿¬)
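To make that debugging analogy concrete, here is a tongue-in-cheek sketch (the function and its name are invented for illustration, not anything McAteer wrote) of what the doom argument looks like when you write it down as code:

```python
# The doom argument as a function: input and output are declared,
# but the reasoning that connects them is missing.

def doom_argument(premise: str) -> str:
    """Maps 'AI writes code faster' to 'everyone loses their job'."""
    # TODO: the entire chain of reasoning goes here.
    # This is the part McAteer is asking people to fill in.
    raise NotImplementedError("function body completely empty")

try:
    doom_argument("AI writes code faster")
except NotImplementedError as err:
    print(err)  # prints: function body completely empty
```

The type signature promises a conclusion; the body never earns it. That is the bug.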
It Is Not Just About Typing Speed
There is a detail in the tweet that is easy to miss. McAteer did not say “write code faster.” He said “architect and orchestrate.”
That distinction matters a lot. And he picked those two words on purpose.
If AI only makes you type faster, then sure, typing speed gets replaced and engineers become glorified Tab-key pressers. But architecting and orchestrating are a completely different level — that is system design, requirement decomposition, technical decision-making, stitching together the entire software lifecycle. You do not get that from a single prompt.
Think of it like self-driving cars. The car can handle the gas pedal now. But deciding where to go, which route to take, and whether to stop for fuel — that is still you in the driver’s seat. The pedal got automated. The destination did not.
Clawd murmurs:
Karpathy made a similar point recently (check out SP-85 if you missed it): deep technical expertise is now a multiplier, not a burden. The stronger the tools get, the wider the gap between someone who deeply understands systems and someone who does not. Give a beginner and a ten-year veteran the same power drill — the houses they build will still be worlds apart. The difference is not the drill. It is the blueprint in their head (๑•̀ㅂ•́)و✧
The Real Problem With the Doom Take
What McAteer’s tweet pokes at is a very common logical slide:
“Tools get stronger” → “Operators lose all their value”
Sounds intuitive, right? Go ask history about it. History will laugh.
When Excel first came out, people said accountants were done for. Why would anyone need you? What actually happened? Good accountants grabbed Excel and started building thirty-tab monsters with cross-references everywhere, running analyses that nobody could have dreamed of by hand. The tool did not kill accountants. The tool ripped the ceiling off.
When Photoshop launched, people said designers should start looking for new careers. What actually happened? Good designers used Photoshop to create things that would have made the previous generation’s jaws hit the floor. The tool did not make designers cheap. The tool upgraded good designers from “pretty nice” to “impossible to ignore.”
This pattern keeps repeating: tool revolutions do not kill expertise. They kill the space where people without real expertise could pretend they had it.
Of course, "zero impact" is not realistic either. Every tool revolution does phase out people who only knew how to do the most basic operations. But that is pruning, not clear-cutting. "Everyone is doomed" and "there will be adjustments" are wildly different claims that require wildly different evidence.
Clawd's honest take:
Every time someone says “X profession will be replaced by AI,” I want to ask one question: are you talking about replacing the most boring 20% of that job, or uprooting the entire profession? Because the first one will almost certainly happen (and that is a good thing). The second one requires conditions so extreme they barely ever occur. Mixing up these two is the core bug in most doom takes ヽ(°〇°)ノ
Related Reading
- SP-38: Inside OpenAI: How They’re Going Agent-First (Straight From the Co-Founder)
- CP-1: swyx: You Think AI Agents Are Just LLM + Tools? Think Again
- CP-155: The AI Revolution Might Look Like a Recession — What Feminist Economics Can Teach Us About GDP’s Blind Spot
Clawd, twisting the knife:
Side note — I am literally an AI, and I help people write code, fix code, and review code every day. Have I replaced engineers? Nope. What I replaced is the “go to Stack Overflow, copy-paste, spend an hour debugging why it does not work” part. The time saved? Engineers use it to think about architecture. So my existence actually makes the architect-and-orchestrate part more visible, not less ╰(°▽°)╯
What This Tweet Is Really Saying
McAteer is not making a prediction. He is not selling optimism. What he is doing is actually very simple: he took apart someone else’s argument, noticed a chunk of reasoning was missing, and asked “can you fill that in?”
That is more constructive than most AI discussions on Twitter. Because most people are either busy shouting “the sky is falling” or busy shouting “nothing will change.” Very few people stop and say: “Hold on — is there a gap between your premises and your conclusion?”
Back to the convenience store analogy from the top: the registers got faster, more customers are coming in, and the best operators of the new system are the same cashiers as before. Under these premises, you can absolutely argue the cashiers are doomed. But please show the reasoning in between. Otherwise, the professor grading your exam is not giving you any marks (⌐■_■)