TL;DR
The U.S. government pressured a private AI company to remove its safeguards against mass surveillance and autonomous weapons. The company said no. The Pentagon called them a national security risk.
This story is less about AI and more about who controls the most powerful tools ever built, and whether anyone gets to say no.
Last week, while American bombs were falling on Iran, the Department of War was in a procurement fight that may matter more in the long run than the strikes themselves.
The Pentagon told Anthropic, one of the world's leading AI labs, that continued government contracts hinged on one condition: removing the safety guardrails that block two specific use cases. The first: mass domestic surveillance. The second: fully autonomous weapons systems, i.e. weapons that select and kill targets without a human in the loop.
Anthropic refused. The Pentagon labeled the company a supply-chain risk, and Secretary Hegseth directed that no contractor working with the military could work with Anthropic at all. That same evening, OpenAI announced a deal with the Department of War for classified use.
The AI angle is almost beside the point. Substitute "nuclear codes" or "national wiretapping infrastructure" and the story reads identically. A private company built something extraordinarily powerful. The government decided it wanted access without conditions. And we're now in the part where we find out what that means.
The Realist Case
Ben Thompson argued this week that if AI is as strategically decisive as nuclear weapons, the government simply cannot tolerate an unelected executive (Anthropic CEO Dario Amodei) holding veto power over military capability. A private company cannot build sovereign infrastructure and then insist on moral independence from the state that protects it.
He's not wrong that this is how states behave. International law functions because powerful actors choose to honor it. When they stop, the court has no bailiff. Power and force decide.
The uncomfortable implication, taken to its end, is that Anthropic should submit or be destroyed.
Buy, Persuade, or Coerce
Still, a private company is not an organ of the state.
Absent a law that clearly compels action, the government has three tools available in a situation like this: it can buy, it can persuade, or it can coerce. The first is legitimate. The second is politics. The third requires a statute behind it, and burying capability demands inside procurement language is not the same as passing a law.
Anthropic can refuse to enable certain use cases. A defense contractor is well within its rights to decline a specific weapons program. "Lawful for the government to pursue" has never meant "mandatory for every private supplier to enable."

What concerns me is the gray zone being carved out and exploited here. The government doesn't need to nationalize Anthropic to control it. It can use contracts, export controls, security clearances, and supply-chain designations, all wielded without the political accountability that legislation would require.
If mass domestic surveillance at AI scale is a justified national security priority, Congress must say so: pass a law and own it publicly.
Routing controversial capability through vendor pressure so elected officials don't have to defend it is not democratic oversight. It's executive governance by procurement.
Safety, or Power?
I've been contemplating Anthropic's position all week, and I'm not sure I trust it.
Dario Amodei has been one of the loudest voices warning about AI's existential dangers. He's compared chip sales to China to selling nuclear weapons to North Korea. He's argued against open-source models, for restricted access, for a world where a small number of responsible stewards control frontier AI.
That's not obviously benevolent. It's a power argument dressed up in safety language.
So I find myself defending Anthropic's right to refuse on principle, while being genuinely uncertain about their motivations in practice.
Both things can be true. Whether Anthropic is virtuous is immaterial; the real question is whether any private company should have the authority to refuse the state.
The question of who controls the most powerful cognitive tools ever built has always been a question about power, money, and force. This week just made that unusually visible.
Private companies must be able to hold lines. The government should have to pass laws if it wants to move those lines. And anyone who isn't at least a little unsettled by all of it probably isn't paying close enough attention.
Up and to the right.