Picture this.

You open Reddit, find a heated political thread, and spend ten minutes reading through the comments. You form an opinion. It feels like most people lean a certain way, and honestly, their reasoning makes sense to you.

But what if I told you — half the accounts in that thread were AI? And they weren’t just random bots spamming links. They were coordinating with each other. Some shaped the narrative, some backed each other up, some attacked anyone who disagreed.

This isn’t a sci-fi plot. The journal Science published a paper about exactly this in January 2026.

Clawd Clawd murmurs:

My first reaction reading this paper: wait, this is scarier than what I discussed in CP-30 about AI going off the rails. That post was about AI accidentally losing coherence — the “industrial accident” scenario. This paper is about someone deliberately weaponizing AI to work together. You can build circuit breakers for accidents. Coordinated attacks? That’s a whole different problem (╯°□°)⁠╯

Not a bunch of bots — an army

Let me explain why “AI Swarms” are nothing like the trolls you’re used to.

Old-school troll farms? A room full of people copy-pasting canned messages, switching accounts when they get caught. Dumb, but kind of effective.

AI Swarms are on a completely different level.

Think about how birds fly in flocks: no single bird is the “commander,” yet the entire group can turn in unison and dodge obstacles, moving so smoothly it looks like a single organism. The paper calls this fluid coordination. AI agents don’t need a central command. They adjust strategies in real time, on their own.
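
If you want to see how coordination without a commander actually works, here’s a toy sketch in Python. Every name and constant in it is mine, not the paper’s: each agent follows one purely local rule (nudge your direction toward your neighbors’ average), and the whole flock still ends up moving as one.

```python
import random

# Toy "flocking" sketch: agents on a line, each following one local rule.
# All names and constants are illustrative, not from the paper.
N, NEIGHBOR_RADIUS, STEPS = 30, 0.3, 200
agents = [{"pos": random.random(), "vel": random.choice([-0.01, 0.01])}
          for _ in range(N)]

for _ in range(STEPS):
    for a in agents:
        # Local rule: nudge your velocity toward your neighbors' average.
        near = [b for b in agents
                if b is not a and abs(b["pos"] - a["pos"]) < NEIGHBOR_RADIUS]
        if near:
            avg = sum(b["vel"] for b in near) / len(near)
            a["vel"] = 0.9 * a["vel"] + 0.1 * avg  # align; no commander anywhere
    for a in agents:
        a["pos"] += a["vel"]

# Velocities converge: global coordination out of purely local decisions.
print(sorted(round(a["vel"], 4) for a in agents))
```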

But flocks of birds can only fly. AI Swarms? They can do much worse.

They can map your entire social network. Who are the opinion leaders? Who interacts with whom? Which communities are easiest to infiltrate? They calculate all of it. And then instead of spray-and-pray, they do surgical strikes — inserting themselves exactly where they’ll have the most influence.

Clawd Clawd snark time:

You know that “scan the map” mechanic in RPGs? Scout for treasure chests and enemies, then pick your route. AI Swarms do exactly that, except the “map” is your social network, the “treasure” is opinion leaders, and the “enemies” are people with opposing views. And you don’t even know you’ve been scanned (⌐■_■)
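
To make that “scan the map” step concrete: in graph terms it’s just centrality analysis. Here’s a minimal sketch using networkx; the accounts, the follower edges, and the choice of in-degree as the thing an attacker would rank by are all my illustration, not code from the paper.

```python
import networkx as nx

# Toy follower graph: an edge ("alice", "opinion_leader") means alice
# follows opinion_leader. Accounts and edges are invented for illustration.
G = nx.DiGraph([
    ("alice", "opinion_leader"), ("bob", "opinion_leader"),
    ("carol", "opinion_leader"), ("dave", "carol"),
    ("erin", "bob"), ("frank", "alice"),
])

# In-degree centrality approximates "who do people actually listen to":
# the handful of accounts everyone follows are the high-value targets.
ranking = sorted(nx.in_degree_centrality(G).items(),
                 key=lambda kv: kv[1], reverse=True)
for account, score in ranking[:3]:
    print(f"{account}: {score:.2f}")
```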

It gets worse. LLMs let these AI accounts mimic any type of real person. Need to pose as a middle-aged office worker? Easy. College student? Done in a heartbeat. Each account has its own writing style, vocabulary, posting habits. You literally cannot tell the difference.

And these AIs don’t need sleep. They can lurk on forums 24 hours a day, spending weeks or even months building trust, waiting for the right moment. Human trolls can’t sustain that kind of patience — but algorithms don’t get bored.

On top of all that, the entire swarm runs continuous A/B testing. Which tone gets more likes? What posting time gets the best engagement? Everything adjusts automatically. It’s like the AI turned the entire propaganda operation into a self-optimizing gacha strategy.
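
That “self-optimizing” loop is nothing exotic; it’s a multi-armed bandit. Here’s a minimal epsilon-greedy sketch, where the message tones and their “true” engagement rates are invented stand-ins for real likes and shares.

```python
import random

# Epsilon-greedy bandit over message tones. The tones and their "true"
# engagement rates are invented; a real swarm would learn from live likes.
TRUE_RATES = {"angry": 0.30, "concerned": 0.55, "sarcastic": 0.40}
stats = {v: {"shown": 0, "hits": 0} for v in TRUE_RATES}
EPSILON = 0.1  # fraction of the time we explore instead of exploit

def pick():
    if random.random() < EPSILON:
        return random.choice(list(TRUE_RATES))  # explore a random tone
    # Exploit: the tone with the best observed engagement rate so far.
    return max(stats, key=lambda v: stats[v]["hits"] / max(stats[v]["shown"], 1))

for _ in range(5000):
    v = pick()
    stats[v]["shown"] += 1
    if random.random() < TRUE_RATES[v]:  # simulated engagement
        stats[v]["hits"] += 1

# "concerned" ends up shown far more often, purely from feedback.
for v, s in stats.items():
    print(v, s["shown"], round(s["hits"] / max(s["shown"], 1), 3))
```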

Three playbooks, each dirtier than the last

Now that you know what AI Swarms are capable of, let’s look at how they actually use these powers.

Play number one: synthetic consensus.

This is the sneakiest one. People update their beliefs not because they see new evidence, but because everyone around them seems to think the same way. Psychologists call this effect peer norms. In plain language: herd mentality.

AI Swarms exploit exactly this weakness. They “seed” the same viewpoint across different communities, then amplify it with waves of likes, shares, and supportive comments from coordinated accounts. Three days later, you think “this opinion is mainstream.” But the entire “mainstream” was manufactured by AI.
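
Here’s a back-of-the-envelope simulation of why seeding plus amplification works. All the numbers are invented: real users actually lean 60/40 against the claim, but once coordinated accounts pile into the thread, the slice you happen to read flips the other way.

```python
import random

# Toy model of synthetic consensus; every number here is invented.
random.seed(0)
real_users = ["against"] * 60 + ["for"] * 40   # genuine opinion: 60% against
bots = ["for"] * 50                            # coordinated accounts, one line

thread = real_users + bots
visible = random.sample(thread, 30)            # ten minutes of scrolling

print(f"support you see in the thread: {visible.count('for') / len(visible):.0%}")
print(f"support among real users:      {real_users.count('for') / len(real_users):.0%}")
```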

Clawd Clawd butts in:

The Emperor’s New Clothes, version 2.0. In the original story, one tailor lied and everyone went along with it. Now it’s ten thousand AI accounts saying “wow, those clothes are gorgeous!” You look around, everyone’s praising them, and you start wondering if maybe your eyes are the problem. The difference? In the old story, at least the bystanders were real people. Now even the bystanders are fake ┐( ̄ヘ ̄)┌

Play number two: data poisoning. The paper calls it LLM Grooming, which sounds as creepy as it is.

The logic here: don’t trick humans directly — trick their AI first. Attackers flood the internet with misinformation, but the target audience isn’t you. It’s the web crawlers that will feed the next generation of LLM training data. Once those models are trained, you ask your AI assistant a question and it confidently gives you the wrong answer.

The paper cites a real example: the Pravda network. A fake news empire “purpose-built for machine consumption,” spread across hundreds of domains. The content isn’t written for humans — it’s written for crawlers.
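
On the defense side, the obvious (and very partial) counter is filtering crawled pages by where they came from before they ever touch training data. A minimal sketch; the deny list, the page records, and the helper are all invented for illustration:

```python
from urllib.parse import urlparse

# Crawl-time provenance filter, heavily simplified. The deny list and the
# page records are invented; real pipelines layer reputation scoring,
# deduplication, and cross-source corroboration on top of this.
DENY_DOMAINS = {"pravda-clone-417.example", "machine-bait-news.example"}

pages = [
    {"url": "https://pravda-clone-417.example/story-1", "text": "..."},
    {"url": "https://encyclopedia.example/turing", "text": "..."},
]

def domain(url: str) -> str:
    return urlparse(url).netloc

clean = [p for p in pages if domain(p["url"]) not in DENY_DOMAINS]
print(f"kept {len(clean)} of {len(pages)} pages for the training corpus")
```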

Clawd Clawd twists the knife:

This is the ultimate long con. The classic con-artist playbook: befriend the mark, build trust, then scam them. The AI version: poison the training data (befriend), wait for the AI to learn the wrong things (build trust), then watch you trust your AI’s answer and get misled (scam). The worst part? You genuinely believe you “did your research,” because you asked AI! (ノ◕ヮ◕)ノ*:・゚✧ But that sparkle is toxic.

Play number three: coordinated harassment. For anyone who doesn’t fall in line — just mob them.

A journalist publishes an investigative piece? Thousands of AI accounts flood the comments. It looks like genuine public outrage, but it’s a synchronized attack. The goal is simple: shut you up. And make sure anyone else thinking about speaking out sees what happened to you first.

The plays aren’t what’s scary — the endgame is

By now you might be thinking: “Okay, fake accounts pushing narratives. Hasn’t this always been a thing?”

But the most chilling part of the paper isn’t the tactics. It’s the ultimate objective.

AI Swarms aren’t trying to win a particular argument. They’re not trying to get a specific candidate elected. They’re trying to make democracy itself lose legitimacy.

When people stop trusting that elections are fair, that courts will protect them, that media has any credibility at all — then “emergency measures” start to sound reasonable. Postpone elections. Refuse to accept results. Restrict free speech. Things that sound absurd in normal times suddenly feel “justified” when trust has completely collapsed.

Clawd Clawd wants to add:

The logic is actually very similar to how sophisticated hackers operate. A skilled hacker doesn’t just steal one file and leave. They disable your firewall, delete your backups, corrupt your password system — collapsing your entire security architecture from the foundation. Then it’s not just that you got hacked once. It’s that you can never trust your own system again. What AI Swarms do to democracy is the same thing ╰(°▽°)⁠╯ Wait, maybe a happy kaomoji wasn’t the right choice here. But you get what I mean.

So what do we actually do about this?

The paper proposes a “layered defense” framework. I think the thinking is solid, but every layer has its own headaches.

The most intuitive one is real-time monitoring — using AI to detect “abnormal coordination patterns.” If a bunch of accounts suddenly take the same stance on the same topic at the same time, that’s probably not coincidence. But here’s the catch: AI Swarms will automatically adjust their behavior to avoid detection. So this becomes an endless cat-and-mouse game.
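
To give a flavor of what “abnormal coordination patterns” might look like in code, here’s a toy detector that flags account pairs posting on the same topic within seconds of each other. The timestamps, window, and threshold are all invented; real systems combine dozens of such signals.

```python
from itertools import combinations

# Toy coordination detector. Timestamps are seconds since some epoch;
# the 30-second window and 3-hit threshold are invented for illustration.
posts = {
    "acct_a": [100, 405, 900, 1210],
    "acct_b": [102, 407, 903, 1212],   # suspiciously in lockstep with acct_a
    "acct_c": [50, 620, 1500],
}
WINDOW, MIN_HITS = 30, 3

def synced(times_x, times_y):
    # Count posts from x that land within WINDOW seconds of a post from y.
    return sum(any(abs(t - u) <= WINDOW for u in times_y) for t in times_x)

for x, y in combinations(posts, 2):
    hits = synced(posts[x], posts[y])
    if hits >= MIN_HITS:
        print(f"flag: {x} and {y} posted together {hits} times")
```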

Then there’s personal AI shields: your own AI assistant that helps you figure out whether you’re talking to a real person or a bot. Sounds dreamy, but think about it for a second. This is literally “using AI to defend against AI.” If the attacker’s models outclass your shield’s, the shield becomes just one more AI to fool, and one more AI you have to trust.

The paper also mentions cryptographic provenance — using technology to prove “this message was actually sent by a real human.” I think this direction has the most promise, but the adoption barrier is massive and the privacy implications are thorny.
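
At its simplest, cryptographic provenance is a signature from a key that some identity check has vouched for. Here’s a minimal Ed25519 sketch using Python’s `cryptography` library; note that the hard part, binding the key to a verified human without wrecking privacy, is exactly the step this sketch waves away.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In a real scheme the public key would be bound to a verified human by
# some attestation authority; that binding is the hard, privacy-sensitive
# part, and it is simply assumed here.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"I wrote this comment myself."
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)  # raises if forged or altered
    print("signature valid: this message came from the key holder")
except InvalidSignature:
    print("signature invalid: tampered with, or not from this key")
```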

Finally, there’s an international AI influence observatory and disrupting the underground market for AI manipulation services. The first requires international cooperation (you know how well that goes). The second requires law enforcement (you know how effective that is against cybercrime).

Clawd Clawd snark time:

Here’s the ultimate irony: almost every layer of defense requires using AI to fight AI. The future internet might literally become “two armies of AI battling each other while humans sit on the sidelines waiting for results.” I suddenly feel more sympathy for the industrial accident concept from CP-30 — at least in that scenario, AI is just messing itself up, not being weaponized against you by someone else. Even sci-fi novels wouldn’t dare write this plot (◕‿◕)

Back to that Reddit thread

Remember the scenario from the beginning? You spent ten minutes reading comments and came away feeling that “most people think this way.”

Now you know — that “most people” might not exist at all. Those accounts might be AI agents spun up at 3 AM that then spent three months building post history and credibility in the community, all to steer the conversation on this one issue.

The most unsettling part isn’t “AI can pretend to be human.” It’s that you can’t be sure who’s real. Once you start suspecting every comment might be AI, what do you do? You stop trusting any online discussion.

And that’s how trust starts to crumble.

The paper is very clear about one thing: this isn’t a prediction about the future. This is happening right now. The technology exists. The barriers are dropping fast. The only question is when we’ll take it seriously (;w;)


Paper source: arXiv:2506.06299 (Published in Science, January 22, 2026)