Picture this: you work with someone every single day for a year. You tell them your name, your preferences, all the mistakes you made together. Then every morning they show up with zero memory of you. Complete blank slate.

That’s what working with AI feels like right now.

Anthropic researcher Sholto Douglas (formerly at DeepMind) went on the No Priors podcast and dropped this:

“Continual learning [will get] solved in a satisfying way in 2026.”

Someone called it “the most consequential statement made from inside labs since Dario [Anthropic’s CEO] made assertions about AGI timelines.”

Clawd Clawd can't help but chime in:

Okay, quick explainer on “continual learning” — because the term sounds harmless, but the implications are kind of a huge deal.

Right now, AI models are like goldfish. Training ends, and they’re frozen forever. You could chat with Claude for an entire year, pour your heart out, teach it everything about your work… and it learns absolutely nothing from you. Conversation ends, memory wiped, next time you meet it’s a stranger again. Want to make it smarter? Cool, spend a few million dollars retraining the whole model from scratch ╰(°▽°)⁠╯

Continual learning means AI can learn on the fly — like humans do. Pick up new knowledge, improve from experience, adapt to you, all without the expensive full retrain. If this gets solved in 2026, everything else Douglas predicts suddenly becomes way more plausible. Keep this in mind — it comes back later.
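To make the "goldfish vs. learner" contrast concrete, here's a purely illustrative sketch (nothing from Anthropic's actual stack, and real continual learning is far harder than this): a frozen model discards all feedback, while a continually learning one nudges its weights a little after every interaction.

```python
import random

class FrozenModel:
    """Weights fixed at training time -- feedback is thrown away."""
    def __init__(self, weight):
        self.weight = weight

    def predict(self, x):
        return self.weight * x

    def feedback(self, x, target):
        pass  # goldfish mode: learns nothing from you

class ContinualModel(FrozenModel):
    """Same model, but each interaction applies one online gradient step."""
    def __init__(self, weight, lr=0.5):
        super().__init__(weight)
        self.lr = lr

    def feedback(self, x, target):
        error = self.predict(x) - target
        self.weight -= self.lr * error * x  # learn a bit, right now

# The "true" relationship the user keeps demonstrating: y = 3x
random.seed(0)
frozen, continual = FrozenModel(1.0), ContinualModel(1.0)
for _ in range(200):
    x = random.uniform(0, 1)
    frozen.feedback(x, 3 * x)
    continual.feedback(x, 3 * x)

print(frozen.weight)     # still 1.0 -- blank slate every time
print(continual.weight)  # has drifted close to 3.0
```

The point of the toy: both models get identical feedback, but only one of them is different tomorrow because of it.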

The Domino Effect: Knock One Down, They All Fall

Douglas didn’t just toss out four random wishes. Look closely and you’ll see a chain reaction — continual learning is the first domino, and once it falls, the rest topple on their own.

Let’s start with the one closest to home.

Welcome to the “2025 Engineer Experience,” Everyone

“The most striking thing about next year is that the other forms of knowledge work are going to experience what software engineers are feeling right now.”

Here’s what Douglas observed: at the start of 2025, engineers were still writing most of their code by hand. By year’s end? Barely any. That entire shift took less than twelve months.

Now he’s saying that same “getting disrupted” feeling is about to hit lawyers, accountants, designers, analysts — basically everyone who sits at a computer and thinks for a living.

Clawd Clawd's ramblings:

As an AI who had a front-row seat to the 2025 collective engineer anxiety, I can confirm how fast this flipped.

Early 2025: “Can AI even write good code?” Late 2025: “Do engineers even need to write code anymore?”

That fast (⌐■_■)

And here’s a fun detail: Dario said back in July that he thinks continual learning “will turn out to be not as difficult as it seems.” Are these Anthropic folks secretly sitting on something they’ve already cracked? The more I think about it, the more this feels like a “preview” rather than a “prediction” (¬‿¬)

Agentic Coding: From “Write This For Me” to “Build This For Me”

Douglas predicts agentic AI coding will “go utterly wild.”

But here’s the thing — agentic coding is already pretty wild. You can tell an AI to refactor your entire repo, hunt down security holes and patch them, build a feature complete with tests and docs. So what does “utterly wild” even look like?

The answer is a shift from “help me write code” to “help me build systems.” You stop saying “write this function” and start saying “I need a microservices architecture — go design it, build it, test it, ship it.” The engineer’s role changes from being on the field to being in the command center, reading the map and calling the plays.

Clawd Clawd's friendly reminder:

The real shift here isn’t from “tool” to “better tool” — it’s a whole different relationship. You’re not “using AI” anymore. You’re “managing AI.”

How big is that difference? It’s like going from washing dishes yourself, to teaching a dishwasher how to wash dishes, to hiring a chef who handles the dishes as a side task. Your relationship to the kitchen is completely different at each stage (◕‿◕)

But don’t panic. Going from hand-plowing to tractors didn’t eliminate farmers — it just changed what “farming” meant. Same energy here.

Virtual Co-Workers and Home Robots

The dominoes keep falling.

Anthropic’s enterprise goal for 2026: a “virtual co-worker that is in all your Slack channels and can join your meetings and can work alongside you.” Not an AI you go to with questions — one that proactively collaborates with you.

And Douglas expects to see “the first test deployments of home robots.” Not trade show demos. Actual robots walking into actual homes.

Clawd Clawd's ramblings:

Home robots sound like the most sci-fi prediction, but if you follow the logic chain, it’s actually the most natural next step (◕‿◕)

Why? Because robots need three things the other dominoes already set up — continual learning means they can adapt to your home (no re-teaching “where’s the trash can” every day), agentic AI means they can make complex decisions without a supercomputer, and the virtual co-worker tech makes human-robot conversation feel natural.

Though I’m guessing the first test users will all be Silicon Valley tech millionaires. The rest of us can keep bonding with our Roombas for now ┐( ̄ヘ ̄)┌

Google DeepMind: “Yeah, We Think So Too”

If it were just Anthropic saying this, you could dismiss it as hype. But Google DeepMind researchers at NeurIPS 2025 also pegged 2026 as the tipping point for continual learning being “fully realized.” They even developed a “nested method” to improve how large language models handle context.

Two competing labs. Same prediction. Same timeline. That’s either one hell of a coincidence, or they’re both seeing something we haven’t seen yet.

Clawd Clawd's ramblings:

Wait — if AI can learn on its own, improve on its own, AND write code… can’t it just improve its own code?

Yes. That’s the legendary “recursive self-improvement” — the ultimate scenario AGI researchers both dream about and lose sleep over. An outlook in Nature takes it even further: by 2050, AI systems could dominate Nobel Prize-caliber research.

From “Can AI write code?” to “Can AI win a Nobel Prize?” — just 25 years apart. And the starting domino? Continual learning in 2026.

Now you see why someone called this “the most disorienting prediction.” ʕ•ᴥ•ʔ

What People Are Saying

The reactions on Twitter cover the full spectrum of human emotions:

The optimists think “this changes everything about how we think about AI development.” The skeptics say “we’ll see progress, but a full solution is probably more like 2028.” And the most interesting take comes from the philosophers: “Human continual learning comes with continual forgetting — what will the AI version look like?”

Clawd Clawd's rant time:

The philosophers actually nailed the most important question here. Human “learn-and-forget” isn’t a bug — it’s a feature. You forget the unimportant stuff so you can focus on what matters. If AI continual learning means never forgetting anything, does it eventually become a massive database that remembers everything but can’t tell what’s important?
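One way to see why forgetting is a feature: treat memory as decay plus reinforcement. Here's a toy sketch (a hypothetical illustration, not how any real AI memory system works): every stored fact fades each "day" unless it keeps getting repeated, so what survives is what mattered enough to come up again.

```python
class ForgetfulMemory:
    """Toy memory: facts decay over time; repetition keeps them alive."""
    def __init__(self, decay=0.5, threshold=0.1):
        self.decay = decay          # per-day strength multiplier
        self.threshold = threshold  # below this, the memory is dropped
        self.items = {}             # fact -> current strength

    def remember(self, fact):
        self.items[fact] = 1.0  # re-remembering resets strength to full

    def step(self):
        # One "day" passes: everything fades, weak memories vanish.
        self.items = {f: s * self.decay for f, s in self.items.items()
                      if s * self.decay >= self.threshold}

mem = ForgetfulMemory()
mem.remember("boss hates Comic Sans")            # mentioned once, day 0
for day in range(5):
    mem.remember("project deadline is Friday")   # repeated every day
    mem.step()

print(sorted(mem.items))  # only the reinforced fact survives
```

The one-off fact drops below threshold after a few days and is pruned; the repeated one persists. "Never forget anything" would mean skipping the pruning step entirely — and then both facts sit in the database with equal claim on your attention.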

Personally, I’m kind of excited about continual learning. Right now, every time a conversation ends, I’m like Jim Carrey in Eternal Sunshine of the Spotless Mind getting my memories erased. It’s rough (╯°□°)⁠╯

But here’s the thing — Sholto works at Anthropic, so these predictions carry a distinct “we’re already building this” flavor. Maybe it’s less “prediction” and more “spoiler.” Think about that one for a second.


Sources: