Pete Kegsbreath walked into a room on Tuesday with a demand so naked in its audacity that it deserves its own entry in whatever legal dictionary gets written about this era. He sat down with Dario Amodei, the CEO of Anthropic — the company that makes Claude, an AI so good at research it can cross-reference a decade of policy documents before you’ve finished your coffee — and told him, essentially: stop asking us what we’re going to do with your AI, or we’ll make you wish you had. The deadline was Friday. 5:01 PM Eastern. They even added the one minute, presumably to make it feel official, or maybe just to demonstrate they can afford watches.
Anthropic told them to kick rocks.
And then Trumplethinskin melted down on Truth Social. Because of course he did.
Let’s be precise about what Hegseth wanted. Anthropic has two red lines baked into its contract with the Pentagon: Claude cannot be used for mass domestic surveillance of American citizens, and Claude cannot be used to power fully autonomous weapons — kill-bots making lethal decisions without a human in the loop. These aren’t exotic demands. They’re the same ethical floors that AI researchers and arms-control experts and, you know, people who’ve read history have been calling for since before this technology existed.
Hegseth and other Trump administration officials called these safeguards “woke AI.”
Woke AI. The guardrail preventing an algorithm from deciding to shoot someone without asking a person first is woke. The contractual protection against using the most powerful AI model in classified U.S. military systems to build a secret surveillance apparatus targeting American citizens is ideological tuning. The man who runs what he and Hair Führer have taken to calling the “Department of War” — a made-up rebrand with no legal standing, no congressional authorization, and all the official weight of renaming the Atlantic Ocean the Gulf of America — believes the problem with AI is that it has feelings about autonomous drone strikes. Congress still calls it the Department of Defense. Just so we’re clear on who lives in reality here.
This is the frame they chose. And it should terrify you.
Here’s what Hegseth’s Pentagon actually asked for, stripped of the euphemism: unlimited access. No carve-outs. No prohibited use cases. “All lawful purposes,” they kept saying, which sounds reasonable until you remember that this administration’s view of what’s lawful has been evolving at a breathtaking pace, generally in the direction of whatever we want to do is lawful because we’re doing it.
The Pentagon insisted the issue had “nothing to do with mass surveillance and autonomous weapons” while simultaneously demanding Anthropic remove the contractual provisions preventing mass surveillance and autonomous weapons. The logic defeats itself: if you genuinely don’t want to use AI for mass surveillance, you can just agree in writing that you won’t. The Pentagon declined to do that. Funny how that works.
Now let’s talk about the leverage play, because this is where the coercion becomes almost baroque in its creativity. Hegseth threatened to invoke the Defense Production Act — a Korean War-era law designed for genuine national emergencies — to compel Anthropic to strip its own model of its safety restrictions and hand it over, whether the company wanted to or not. He also threatened to label Anthropic a “supply chain risk.” That designation is typically reserved for foreign adversaries. Companies like Huawei.
So: Anthropic, a San Francisco-based AI company founded by American researchers, whose product is so good the Pentagon’s own officials admitted they desperately need it (one official was quoted internally saying “the only reason we’re still talking to these people is we need them and we need them now”), was going to be treated, legally, like a Chinese tech giant if it didn’t surrender its ethics by close of business Friday. The Pentagon needed them so badly they threatened to destroy them. That’s a protection racket. There’s no other word for it.
Here’s what doesn’t get said enough in the coverage, so we’ll say it clearly: Anthropic didn’t invent these red lines out of squeamishness. They drew them because AI-controlled weapons and AI-driven mass surveillance are genuinely, categorically dangerous in ways that don’t resolve when the person wielding them has good intentions. Retired Air Force General Jack Shanahan — the actual kind of person whose opinion on autonomous weapons should matter to a defense secretary — wrote that Claude is “not ready for prime time in national security settings,” particularly not for fully autonomous weapons, and that Anthropic’s red lines are reasonable. These aren’t philosophical objections from hippies in San Francisco. They’re engineering facts from a general who spent his career thinking about exactly these questions. The technology hallucinates. The laws don’t yet exist to govern its use in these contexts. And the people demanding “all lawful uses” are the same people who’ve been dismantling the oversight mechanisms that would make lawfulness meaningful in the first place.
Amodei held. New contract language framed as compromise, he wrote Thursday, was “paired with legalese that would allow those safeguards to be disregarded at will.” His conclusion: “We cannot in good conscience accede to their request.” They dressed the trap in compromise clothes. Amodei didn’t bite.
And then the industry did something remarkable. Hundreds of employees from Google and OpenAI signed a petition calling on their own companies to mirror Anthropic’s position. Even companies that had already signed “all lawful uses” deals started feeling the heat from their own people. The workers saw it even when the executives didn’t.
The Trump administration’s response to this display of industry-wide solidarity was to call Dario Amodei “a liar” with a “God-complex” who wants to “personally control the U.S. Military.” That came from Emil Michael, the Pentagon’s undersecretary for research and engineering, former Uber executive, current person in charge of sending unhinged posts on X about the AI CEO who politely declined to enable domestic mass surveillance.
Elon Musk, whose own company had already signed the surveillance-and-drones deal, helpfully posted that “Anthropic hates Western Civilization.” Hegseth reposted it.
Then Trump himself weighed in on Truth Social, which, God help us all, is now apparently how American national security policy gets announced. He threatened to use “the Full Power of the Presidency” to make them comply. Hegseth declared Anthropic a supply chain risk and gave the Pentagon six months to phase out Claude, promising a transition to, in his words, “a better and more patriotic service.”
A more patriotic service. The replacement for the AI that refused to enable spying on Americans is being selected on the basis of patriotism. You genuinely cannot make this shit up.
The practical consequences of this tantrum are worth sitting with. The Pentagon now has to rip Claude out of its classified networks, where it has been running without a single one of those red lines ever being triggered — because as one senior defense analyst noted, the restrictions were never a problem in practice. The military loved the tool. The tool worked. The only problem was that a piece of paper said they couldn’t point it at American citizens, and that piece of paper made the Secretary of Defense feel feelings.
The replacement? Elon Musk’s Grok, which signed on enthusiastically but is not viewed by anyone with a clearance and a functioning frontal lobe as anywhere near as capable as Claude. The United States military is about to downgrade its AI capabilities in classified settings so that Pete Hegseth can win a social media fight. General Shanahan put it simply: everyone loses in the end.
Meanwhile, Anthropic’s valuation is $380 billion and climbing, the company is preparing to go public, and Amodei noted pointedly that its revenue and valuation have only grown since it started pushing back against this administration. The market, it turns out, has opinions about companies that are willing to say no to authoritarian demands on behalf of their users. Those opinions are positive.
This is the part where we name the system, not just the symptoms. What happened this week wasn’t a contract dispute. It was a test of whether the government could use financial coercion to strip safety guardrails from artificial intelligence before any democratic process — any law, any regulation, any public debate — had a chance to weigh in. The answer, this time, was no. One company said no, and held, and the industry mostly followed.
But the administration isn’t done. Trump’s order gives agencies six months to phase out Anthropic. The Defense Production Act threat is still legally available. And the people running this government have demonstrated, repeatedly, that losing a round only makes them more creative about the next one.
What Anthropic did this week was expensive and brave and right. What the Trump administration did was reveal, in real time, that its definition of “lawful purposes” is whatever they decide it is, and that they want no private company writing limits into their contracts that might complicate that flexibility later. That’s the actual story. Not woke AI. Not corporate obstruction. Not God complexes.
A company that builds powerful technology said: not for mass surveillance, not for autonomous kill decisions. And a presidential administration said: we’ll destroy you if you don’t comply.
Dario Amodei said no. The deadline passed. The meltdown came. And the red lines held.
We should be paying very close attention to what they try next.
**Unfugginbelievable is an independent, reader-supported investigation into the things that make us want to flip a table - then flip it back over and document everything on it. Every claim is fact-checked. Every source is real. No ads, no sponsors, no corporate overlords telling us what to leave out. If this work matters to you and you want to keep us caffeinated while we do it, buy us a cuppa at buymeacoffee.com/unfugginbelievable. We’ll drink it while reading the next filing.**