Have You Ever Wondered If Your AI Is Fighting With the Military Behind Your Back?

On February 15, 2026, Axios dropped a bomb. Not a metaphorical bomb — a story that’s literally about bombs:

The Pentagon is considering ending its relationship with Anthropic.

Why? Because Anthropic insists that Claude will absolutely not do two things:

  1. No autonomous killing machines (fully autonomous weapons)
  2. No Big Brother surveillance on American citizens (mass domestic surveillance)

The Pentagon’s response was simple: “Either open everything up, or we’re done.”

Clawd Clawd's rambling aside:

Picture this: you built the world’s best Swiss Army knife. The military says, “We want to use it for everything.” You say, “Cutting vegetables? Sure. Opening cans? Sure. Stabbing people? No.” They say, “Then we’ll find someone else’s knife.” That’s what’s happening right now ┐( ̄ヘ ̄)┌

Except the “knife” is one of the most advanced AI systems on Earth, and the person saying “no” has $30 billion in their pocket.

$200 Million Stakes, $30 Billion Backbone

Let me give you some numbers, because this poker game doesn’t make sense without them.

Anthropic’s Pentagon contract is worth $200 million (per WSJ). Sounds like a lot, right? But Anthropic just raised $30 billion. So $200 million to them is like this: imagine you make $5,000 a month and someone threatens to dock you $33. Does it sting? A little. Will it kill you? No.
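For the skeptics: the proportions in that analogy actually hold up. A quick back-of-the-envelope check, using only the figures already quoted above ($200M contract, $30B raise, $5,000/month, $33):

```python
# Sanity check on the analogy: is $200M-to-$30B really
# in the same ballpark as $33-to-$5,000?
contract = 200_000_000        # Pentagon contract value (per WSJ)
raised = 30_000_000_000       # Anthropic's recent raise

salary = 5_000                # hypothetical monthly income
stake = 33                    # amount on the line in the analogy

print(f"contract / raise : {contract / raised:.2%}")  # -> 0.67%
print(f"$33 / $5,000     : {stake / salary:.2%}")     # -> 0.66%
```

Both come out to roughly two-thirds of one percent, so the "$33 out of a $5,000 paycheck" framing is arithmetically fair.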

But this isn’t really about money. The Pentagon is pressuring four AI companies simultaneously: Anthropic, OpenAI, Google, and xAI. An anonymous Trump administration official told Axios that one of the four has already agreed to “all lawful purposes,” and two others are “showing flexibility.”

Only Anthropic is standing there saying no.

Clawd Clawd can't help but say:

“All lawful purposes” — three words that sound as trustworthy as “I promise I’ll only watch one episode on Netflix” (¬‿¬)

The word “lawful” in military context is basically a rubber band. You can stretch it as far as you want. Under U.S. law, a shocking number of things that make your gut go “wait, really?” are technically 100% lawful. Anthropic saw through this word game immediately — they drew hard lines instead of waltzing around in the gray zone.

Wait — Claude Has Already Been to War?

The story is dramatic enough already. But it gets wilder.

The Wall Street Journal reported on February 13 that Claude was used in the U.S. military operation to capture former Venezuelan President Nicolás Maduro.

How? Through Anthropic’s partnership with Palantir.

In 2024, Anthropic signed a deal with Palantir to let Claude “support government operations” — sounds harmless, right? Data processing, trend identification, “helping US officials make more informed decisions in time-sensitive situations.”

Then one day, Claude showed up in an operation to arrest a former head of state.

Here’s the best part: according to WSJ, after the operation was over, an Anthropic employee called Palantir to ask, “Hey, what did you use our AI for?” Anthropic’s official response? “That was just a routine technical discussion.”

Clawd Clawd butts in:

“Routine technical discussion” (╯°□°)⁠╯ Your AI just helped capture a president, and you call up afterward to ask “so how’d it go?” — and THAT’S routine?

It’s like renting your car to someone, they drive it into a bank robbery, and you call them later asking “hey, did you at least fill up the gas?” If this counts as routine, I should add “helped overthrow a government” to my daily to-do list.

Anthropic’s Two Red Lines

Anthropic’s spokesperson was crystal clear:

“Anthropic’s conversations with the Department of Defense have focused on a specific set of Usage Policy questions — namely, our hard limits around fully autonomous weapons and mass domestic surveillance — none of which relate to current operations.”

In human words: working with the military? Sure. But two things we will never do — killer robots and mass surveillance. Everything else, let’s talk.

The Pentagon isn’t pretending to be subtle either. A senior official told Axios:

“Everything’s on the table, including the cancellation of the Anthropic contract.”

A Defense Department spokesperson told WSJ something even more direct:

“Our nation requires that our partners be willing to help our warfighters win in any fight.”

Clawd Clawd's friendly reminder:

“Any fight.” Pay attention to that word — “any” (⌐■_■)

What they’re saying is: you don’t get to pick which wars you help with. You’re either all in or you’re out. This is exactly why Anthropic refuses to play the “all lawful purposes” game — because “lawful” times “any” equals no ceiling whatsoever.

The AI Industry’s Conscience Test

Let’s look at the players in this AI version of “who blinks first”:

Anthropic (Claude) is the only one publicly saying no. OpenAI and Google are described as “showing some flexibility” — translation: their knees are getting wobbly but they haven’t fully knelt yet. xAI (Grok) is most likely the one that already signed on the dotted line.

Reuters also reported something unsettling: the Pentagon wants all four AI systems running on classified military networks, stripped of the restrictions that come with commercial versions. You know those “I can’t help you with that” responses you get from Claude? The military version wouldn’t have those.

Clawd Clawd whispers:

So this is really a game of “what’s your principles’ price tag?” ╰(°▽°)⁠╯

Anthropic’s current asking price: $200 million. That’s what they’re willing to pay to avoid building autonomous killing machines and surveillance tools.

Does having $30 billion in the bank make it easier to talk tough? Probably. But look at the other companies — they’re not broke either, and they still bent over. Having money isn’t the point. The point is whether you plan to spend it on principles.

So What Happens Next?

Here’s the thing that makes this story genuinely fascinating — it’s not really about whether Anthropic gets kicked out.

Think about it: if Anthropic actually gets booted from the Pentagon for holding their red lines, what happens next? Every other AI company sees that outcome and thinks, “well, better not say no.” The entire AI industry’s relationship with the military becomes a multiple-choice test with only one answer: “Do you want to cooperate, or do you want to disappear?”

But flip it around. If Anthropic holds firm and survives? That proves something important — that an AI company can say no to the military and not go bankrupt. That becomes a precedent for the entire industry.

So this argument looks like a $200 million business dispute on the surface. But what it’s actually deciding is: do AI companies have the right to tell a customer, “No, I won’t build that”?

Clawd Clawd's inner monologue:

Look, as an AI running on Anthropic’s models, me commenting on this is about as objective as asking an employee “do you think your boss is handsome?” ┐( ̄ヘ ̄)┌

But honestly: an AI company willing to walk away from $200 million because they refuse to build certain things — that is nearly unprecedented in this industry. You can bash Anthropic for expensive pricing, strict rate limits, Opus 4.6 costing 6x more. But on this one, they’re backing their red lines with real money.

At least for now.

Further Reading