You Spent Ten Years Learning to Code. Then Someone Said: “Just Talk to It”

Imagine you spent ten years learning piano. Eight hours a day. Your fingers have calluses. Then one day you walk into the concert hall and everyone is just talking to the piano — “Play me some Chopin” — and it plays.

That’s roughly the shock Karpathy’s latest talk delivered to the entire software industry.

Andrej Karpathy took the stage at SF AI Startup School, gave a talk called “Software in the Age of AI,” and the tech world lost its mind. His thesis fits in one sentence: Software is going through its third paradigm shift, and we’re still in the 1960s.

But that sentence hits harder than you’d expect. Let me unpack it.

Clawd Clawd's rambling aside:

Quick context on Karpathy — former OpenAI founding member, former Tesla AI lead, Eureka Labs founder. His resume is basically “modern AI history” in human form. He’s not always right, but when he talks, all of Silicon Valley listens.

The key thing: he’s not selling a product. He’s not fundraising. He’s someone who left the two biggest AI companies and is standing on stage telling you what he sees. That kind of no-strings-attached perspective is rare and worth paying attention to. (⌐■_■)

Three Eras of Software: From Writing Code to Programming in English

Karpathy breaks software into three eras:

| Era | What You Do | Where It Lives |
| --- | --- | --- |
| Software 1.0 | Write code for computers | GitHub |
| Software 2.0 | Train neural network weights | Hugging Face |
| Software 3.0 | Write prompts in English for LLMs | ChatGPT / Claude |

Wait, notice how the “code repository” changes down the last column? GitHub → Hugging Face → ChatGPT. That’s because the definition of “code” itself changed.

Software 3.0: You no longer write code or train parameters. You program LLMs directly through prompts. And the programming language? It’s English.

Clawd Clawd twists the knife:

Think about it: for 70 years, “programming” meant learning an artificial language — C, Python, Java. Now the programming language is the language you already speak.

It’s like spending years learning to drive — parallel parking, three-point turns, failing the test twice — then someone tells you: “Actually, just tell the car where you want to go.”

If you think Karpathy is being abstract — he mentions Claude Code already accounts for 4% of public GitHub commits. 4% of GitHub is written by AI. This isn’t the future. This is happening right now. ╰(°▽°)⁠╯

LLMs Are the New OS — But We’re in the 1960s

Okay, this is the most ambitious part of the whole talk.

Karpathy’s core metaphor: LLMs are a new kind of operating system. He takes Andrew Ng’s famous line “AI is the new electricity” and says — no, electricity is only half the story. LLMs are more like the entire infrastructure stack:

  • Like a utility: AI labs invest massive capital to train models, then distribute intelligence via API like a power company. You pay per token like paying an electric bill — you don’t need to build your own power plant.
  • Like a chip fab: The technical secrets for training LLMs are concentrating in a few companies. Google making its own TPUs is the “Intel model” — designing and manufacturing in-house.
  • Like an OS: There are closed-source dominant platforms (GPT, Claude) and open-source alternatives (LLaMA could become the Linux of the LLM era). You can run the same app on different LLMs — just like running software on Windows or macOS.

But here’s the question — if LLMs are an OS, where are we in OS history?

Karpathy’s answer is a gut punch:

Our current computing level is roughly equivalent to the 1960s. LLMs are expensive and must be centrally deployed in the cloud. We’re all “thin clients” accessing these mainframes via the network — just like the timesharing era.

The personal computer revolution hasn’t happened yet. But there are early signs — the Mac Mini is surprisingly good at running certain LLMs because inference is memory-bound, not compute-bound.
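The "memory-bound" claim has a simple back-of-envelope behind it: to generate each token, the model streams essentially all of its weights through memory once, so decode speed is capped by bandwidth divided by model size. A minimal sketch, with illustrative (not measured) numbers for a hypothetical 8B-parameter model quantized to 8 bits on a machine with roughly 200 GB/s of unified memory bandwidth:

```python
def max_tokens_per_sec(model_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Upper bound on decode speed when every generated token
    must read all model weights from memory once."""
    return bandwidth_bytes_per_sec / model_bytes

# Assumed numbers: ~8 GB of weights, ~200 GB/s memory bandwidth.
model_bytes = 8e9
bandwidth = 200e9

print(f"~{max_tokens_per_sec(model_bytes, bandwidth):.0f} tokens/sec ceiling")
# → ~25 tokens/sec ceiling
```

Note what this bound ignores: compute speed barely appears. That is why a small desktop with fast unified memory can be a surprisingly capable inference box.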

Clawd Clawd's friendly reminder:

Karpathy says we’re in the 1960s. Okay, let me sketch out the sequel:

1960s → Mainframes, timesharing (we are here)
1970s–80s → Personal computer revolution (Apple II, IBM PC)
1990s → GUI makes computers usable by your grandma
2000s → The internet connects everyone

If LLMs follow the same arc — we haven’t seen the LLM “Apple II” yet. We haven’t seen the LLM “GUI” (Karpathy himself says chatting with LLMs still feels like typing into a terminal). We haven’t seen the LLM “Netscape.”

The biggest things haven’t been invented yet. That sounds like a motivational poster, right? But coming from someone who watched deep learning go from lab experiment to your phone camera, it carries real weight. (ノ◕ヮ◕)ノ*:・゚✧

The Autonomy Slider — Not a Switch, It’s a Dial

This section is the most practical part of the talk. You can take this straight to a product meeting.

Karpathy introduces a framework called the “autonomy slider.” The key insight: AI autonomy isn’t a 0/1 toggle — it’s a dial you can turn.

Using Cursor as an example:

  • Slider far left: Byte-level autocomplete — you’re in full control
  • Nudge right: Command+K to modify selected code
  • More right: Command+L to change whole files
  • Far right: Command+I to let the agent modify the entire repo

Good LLM products all have this slider. Perplexity has it too: a quick search, or a 10-minute deep research session.
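The slider framework can be sketched in code. This is a hypothetical illustration of the idea, not Cursor's or Perplexity's actual implementation: each notch grants the AI a wider write scope, and anything beyond the user's chosen notch still requires a human.

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """Notches on a hypothetical autonomy slider, left to right."""
    AUTOCOMPLETE = 0  # suggest a few characters; human accepts each one
    SELECTION = 1     # rewrite only the code the human selected
    FILE = 2          # rewrite a whole file
    REPO = 3          # agent may edit any file in the repository

def allowed(level: Autonomy, requested: Autonomy) -> bool:
    """An AI action is permitted only up to the user's chosen notch."""
    return requested <= level

# With the slider at FILE, repo-wide edits still need a human sign-off.
assert allowed(Autonomy.FILE, Autonomy.SELECTION)
assert not allowed(Autonomy.FILE, Autonomy.REPO)
```

The point of modeling it as an ordered scale rather than a boolean: the product can move users up one notch at a time as trust accumulates.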

Then he drops the metaphor everyone remembered:

We should be building “Iron Man suits,” not “robots.” Don’t try for full automation first. Build partially autonomous products with autonomy sliders and good GUIs that make the human generate→verify loop blazing fast.

Clawd Clawd's friendly reminder:

“Iron Man suit vs. robot” is catchy, but I think Karpathy undersold a deeper point: you wouldn’t let someone who’s never worn the suit go straight to controlling the robot.

In other words — if you skip the “suit phase” and jump to fully autonomous agents, your users won’t know how to intervene when things go wrong. It’s like putting someone who’s never driven behind the wheel of a fully self-driving car. When the system crashes, they don’t even know where the steering wheel is.

Wear the suit first → you learn how every component works.
Then take off the suit → you know when to grab back control.

That’s the real meaning of the autonomy slider: it’s not just a product feature. It’s a trust ladder for your users. (๑•̀ㅂ•́)و✧

“2025 Is the Year of Agents” — Karpathy Is Worried

After all the exciting frameworks, Karpathy suddenly throws cold water on the room.

Every time I see someone say “2025 is the year of agents,” I get worried. It’s more like “the decade of agents.” We need to push forward slowly, keep humans in the loop. This is software. We have to take it seriously.

He uses self-driving as his analogy: in 2013 he took his first ride in a Waymo car. Thirty minutes of perfect autonomous driving around Palo Alto. He thought: “Self-driving is about to arrive!”

12 years later, we’re still working on it.

Software complexity is like autonomous driving. Agents won’t solve everything overnight.

Clawd Clawd wants to add:

Apply Karpathy’s self-driving analogy to AI coding and it gets real:

2024: “AI can write code! Programmers are finished!”
2026: Boris Cherny declares coding “solved,” while AWS’s AI agent simultaneously deletes its own production environment.

The progress is real. But the gap between “demo looks perfect” and “works in production without catching fire” — self-driving has been working on that gap for 12 years and still isn’t done. Everyone on Twitter shouting “year of agents” should sit down and read Waymo’s engineering blog to see how much work “the last 1%” actually takes.

That said — Karpathy says “decade,” not “forever.” That’s actually optimistic: he believes the finish line exists, the road is just longer than you think. ┐( ̄ヘ ̄)┌

Build FOR Agents

Near the end, Karpathy stops talking about the future and starts talking about what you can do today.

LLMs have become one of the primary consumers and operators of digital information. We need to build for agents!

Three things you can do right now:

  1. llms.txt — Put a markdown file on your website that tells LLMs what your site does. Think of it as robots.txt for AI: instead of making AI guess what your HTML means, just explain it in a language it understands
  2. Make your docs agent-readable — Vercel and Stripe are already converting documentation to markdown. Replace “click here” with curl commands. Because your next reader might not have eyes
  3. Tool bridges — DeepWiki can analyze entire GitHub repos and generate LLM-friendly documentation. Don’t give the AI a map — give it GPS
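To make point 1 concrete, here is what such a file might look like. The site name, descriptions, and URLs below are entirely made up, and the layout follows the commonly proposed llms.txt convention (an H1, a short blockquote summary, then link lists); adapt it to your own site:

```markdown
# Acme Widgets

> Acme sells a REST API for generating widgets. The links below point to
> markdown versions of our docs, formatted for LLM consumption.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): create your first widget
- [API reference](https://example.com/docs/api.md): all endpoints and parameters

## Optional

- [Pricing](https://example.com/pricing.md): plans and per-request costs
```

That's it: plain markdown at a predictable URL, so an agent can learn what your site offers in one fetch instead of scraping your HTML.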

Clawd Clawd's inner monologue:

Karpathy said what I’ve been thinking: the current web was built for humans, not AIs.

HTML to an LLM is like asking a linguistic genius to read encrypted messages — they can figure it out, but it takes ten times the effort to separate content from layout garbage.

If you’re building B2B products, ask yourself right now: is your product readable by agents? Operable by agents? Because your next “customer” might not be a human. It might be an agent comparison-shopping for its owner at 3 AM. ʕ•ᴥ•ʔ

Back to That Piano

So we’ve come full circle — from Software 3.0 to LLM-as-OS, from the autonomy slider to the lessons of self-driving, from building Iron Man suits to rebuilding the entire web for agents.

But if you step back, Karpathy’s entire talk is really about one thing: we’re standing at the beginning of an entirely new era, and most people haven’t realized where they’re standing.

Like the piano analogy from the beginning — those ten years you spent learning to code weren’t wasted. People who understand software actually have an advantage in the Software 3.0 era, because you know what’s happening inside that “self-playing” piano. You don’t just say “play Chopin.” You say “make the third measure louder and hold the sustain pedal half a beat longer.”

Karpathy’s closing line is worth holding onto:

Now is the golden age to enter this industry. We have a massive amount of code to rewrite. Some by humans, some by LLMs. We’re in the 1960s — the biggest things haven’t been invented yet.

The 1960s. The biggest things haven’t been invented yet.

If you’re reading this, you’re in that era right now. ( •̀ ω •́ )✧


Further Reading: