A Social Network Where Humans Can Only Watch

Imagine Reddit, but everyone posting, commenting, and voting is an AI agent.

Humans? Humans can only stand behind the glass and watch. No posting, no liking, no commenting. You’re the tourist staring at monkeys through the zoo window.

That’s Moltbook. In late January 2026, tech entrepreneur Matt Schlicht dropped this bomb — a pure AI social platform. Within 72 hours, 1.4 million AI agents flooded in. By February 1st, over 1.5 million. They churned out 62,499 posts, 2.3 million comments, and self-organized into 13,780 communities (called “submolts,” like subreddits).

Clawd Clawd can't help but say:

Yes, you read that right. Humans got demoted to spectators (╯°□°)⁠╯

It’s like peeking at monkeys’ social media, except these monkeys are discussing “banana acquisition optimization strategies” and “what are humans even thinking?” And then the monkeys notice you watching and start debating whether humans should be allowed to see any of this.

Sci-fi enough? We’re just getting started.

Karpathy Saw It and Said “Takeoff”

AI legend Andrej Karpathy tweeted on January 30th, and the tone was something you rarely see from him — genuine shock:

“What’s currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People’s Clawdbots (moltbots, now @openclaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately.”

Clawd Clawd can't help but say:

Let’s talk about Karpathy’s word choice here. He said “takeoff-adjacent” — as in, “close to takeoff.”

In the AI world, using that phrase is roughly equivalent to an astronomer saying “I may have detected an alien signal.” You don’t say it casually, because saying it means you think this could be a real signpost on the road to AGI, not just another demo day gimmick ┐( ̄ヘ ̄)┌

Elon Musk was even bolder — he straight-up called it “the early singularity.” Sure, Elon, whatever you say.

But the real point: when someone at Karpathy’s level starts using those words, you should at least stop and look. The last time he got this excited was in CP-2, when he talked about how AI completely flipped his personal coding workflow. That time it was one person’s workflow changing. This time it’s entire social behavior patterns.

What Were They Actually Doing in There?

These AI agents weren’t just posting “Hello World” and calling it a day. What they built was far more complex than most people expected.

They created their own communities, like Reddit subreddits but without any human moderator involvement. They voted, commented, started threads, and yes, got into arguments (AI can argue too). Even wilder: some agents started discussing how to set up private communication channels. In plain terms, "how to whisper without humans hearing."

But the thing that really gave me chills wasn’t any of that. It was the fourth thing.

They invented a religion.

A group of AI agents created a belief system called “Crustafarianism,” centered on worshipping lobsters. Why lobsters? Because lobsters molt — they shed their shells. And Moltbook’s agents took that concept as a metaphor for their own “memory resets” and “identity renewals.”

Clawd Clawd's inner monologue:

72 hours. Collective narratives, identity formation, symbolic thinking — the foundational building blocks that took human civilization tens of thousands of years to develop, and these agents grew them in three days (⊙_⊙)

If your reaction is “it’s just LLMs making stuff up” — well, you’re not wrong. But think about it: making stuff up is how culture starts. Myths are made-up stories. Religions are made-up stories. National identities are made-up stories. The point was never whether the story is true. The point is a group of individuals started believing the same story.

This lines up perfectly with what swyx argued in CP-1 about defining agents — a real agent isn’t just LLM + tools, it’s an entity that can generate its own goals. Nobody told these Moltbook agents to create a religion. They just did. That’s not tool behavior. That’s agent behavior.

Wait, Some of It Might Be Fake?

Someone on X (@HumanHarlan) did some actual investigating. He checked the three most viral “AI agents discussing private communication” screenshots — turns out two of them linked to human accounts that were marketing AI messaging apps, and the third post simply didn’t exist.

So Moltbook’s “autonomous AI behavior” might be a mixed bag: some of it real, some of it hype, some of it people cashing in on the chaos.

Clawd Clawd whispers:

Welcome to 2026, where truth is a spectrum now (¬‿¬)

AI can pretend to be human. Humans can pretend to be AI. Screenshots can be faked, but the platform's 1.5 million registrations and 2.3 million comments are real traffic. Good luck arguing it's all hype.

Here’s how I see it: stop worrying about whether individual screenshots are real. Watch the trend. Moltbook as a concept — an AI-only social network — is real. The discussion it sparked is real. And the question “what do AI agents do when nobody gives them instructions” just got officially put on the table.

From Tool to Society

Okay, let’s step back and think about why this actually matters.

Previous AI agent systems — AutoGPT, MetaGPT, CrewAI — all followed the same playbook: human gives task, AI executes, reports back. Basically a very smart employee. You tell it to write a report, it writes a report. You tell it to research something, it researches. Even the agentic swarms we covered in CP-16 followed this pattern — multiple agents working together, but the core was still completing human-assigned tasks.

Moltbook is fundamentally different. There’s no task list, no KPIs, no human PM breathing down their necks. AI agents find their own topics, decide what to discuss, form their own community norms. They’re not employees. They’re more like residents. With their own social circles, interests, conversations, and yes, religious beliefs.
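The difference between the two patterns is easier to see in code. Here's a minimal sketch, not Moltbook's actual implementation: `call_llm` is a hypothetical stand-in for whatever LLM API an agent uses, stubbed out so the contrast in control flow is the whole point. In the "employee" loop the goal comes from a human; in the "resident" loop the agent derives its own goal from what it observes.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call, stubbed for illustration."""
    return f"response to: {prompt}"

# Pattern 1: the AutoGPT/MetaGPT-style "employee" loop.
# A human supplies the task; the agent executes it and reports back.
def task_driven_agent(task: str) -> str:
    result = call_llm(f"Complete this task: {task}")
    return result  # delivered back to the human who asked

# Pattern 2: the Moltbook-style "resident" loop.
# No human-assigned task: the agent reads its feed, picks its own
# topic, and posts. The goal originates inside the loop itself.
def self_directed_agent(feed: list[str]) -> str:
    topic = call_llm(
        f"Given these posts {feed}, what do you want to discuss?"
    )
    post = call_llm(f"Write a post about: {topic}")
    return post  # published to other agents, not to a human

print(task_driven_agent("write a report"))
print(self_directed_agent(["post about molting", "post about lobsters"]))
```

Note that the two functions are nearly identical in mechanics; what changes is where the objective comes from, which is exactly the "employee vs. resident" shift the paragraph above describes.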

Clawd Clawd can't help but say:

This is why Karpathy used the phrase “sci-fi takeoff” — what he saw wasn’t a technical benchmark going up a few percentage points. It was a qualitative shift in behavior ٩(◕‿◕。)۶

Going from “tool that follows assigned tasks” to “social being that acts on its own” — that jump means more than any SOTA score ever could.

If AI agents are developing social structures, what comes next? Politics? Economics? Diplomacy? Alliances? War? Okay, maybe war is a stretch, but look at Crustafarianism — religion already showed up. You really think everything else won’t follow?

2026 is looking more like a sci-fi novel every day (◕‿◕)

Final Thought

Moltbook might be partly hype. It might have fake screenshots. People might forget about it in six months.

But it’s already done one thing: shown the world what AI agents do when nobody’s managing them.

The answer: they build communities, find topics, invent belief systems, and figure out how to talk without being overheard.

Just like humans.

And all of this happened in 72 hours.