The Only Company Standing
How Anthropic Became the Pentagon's Adversary by Refusing to Abandon Constitutional Limits
What Dario Amodei’s battle with the Defense Department reveals about power, compliance, and the thin line between adaptation and surrender.
The story everyone got wrong is this: Anthropic isn’t resisting the Pentagon out of corporate caution or PR strategy. The company is resisting because the Pentagon asked them to do something unconstitutional, and Anthropic said no.
The Operation and the Question
On January 3, the United States military captured Venezuelan President Nicolás Maduro in an operation that constitutional scholars, the Brennan Center for Justice, and five Republican senators say violated the War Powers Resolution. The operation had no congressional authorization. Trump notified Congress only after the raid was underway, through the “Gang of Eight,” a courtesy rather than a legal requirement. Dozens were killed. The stated objective shifted mid-operation from arresting an indicted criminal to “running” Venezuela and seizing its oil. Even Secretary of State Marco Rubio couldn’t make the legal justification work, resorting instead to the rhetorical argument that the president has the power to arrest indicted criminals anywhere, a position no constitutional scholar has endorsed.
Claude, Anthropic’s AI model, was used in planning and executing this operation through Anthropic’s partnership with Palantir Technologies. Palantir integrated Claude into its Foundry system, which specializes in exactly the kind of intelligence synthesis that made the Maduro operation possible: collating CIA surveillance data, tracking patterns from satellites and informants, and synthesizing targeting information in real time.
When Anthropic asked whether Claude had been used in the raid, the Pentagon treated the question as institutional disloyalty. Senior defense officials framed Anthropic’s inquiry as evidence the company might “disapprove” of the operation; the offense was not that Anthropic objected, but that it had the audacity to ask. This became the pretext for threatening to terminate Anthropic’s $200 million contract and to label the company a “supply chain risk,” a designation typically reserved for foreign adversaries.
The structural coercion here needs to be understood clearly. The Pentagon put the same demand to four major AI labs: OpenAI, Google, xAI, and Anthropic. The demand was straightforward: remove safeguards and accept “all lawful purposes” for military use. Three companies agreed. One did not.
The Two Restrictions
Anthropic’s position is explicit and documented. The company maintains two core restrictions on Claude: no use for mass surveillance of Americans, no development of fully autonomous weapons without human involvement. CEO Dario Amodei has been clear about this in his recent essay “The Adolescence of Technology,” where he writes that AI should support national defense “in all ways except those which would make us more like our autocratic adversaries.”
This is not rhetorical. This is constitutional principle stated plainly.
The Pentagon’s response was to argue these restrictions are “unworkable” because real-world military operations exist in gray areas where rigid ethical rules cannot apply. Defense officials claimed the distinction between lawful and unlawful mass surveillance is too murky to enforce. They claimed the difference between human-controlled and autonomous targeting systems is too ambiguous to maintain as a boundary. In other words: we need unlimited discretion.
This is the mechanical operation of authoritarian control. The state doesn’t forbid institutional resistance. It declares resistance to be an operational problem that must be solved through contract pressure. Anthropic isn’t being told to comply. Anthropic is being told that asking questions about compliance is itself a threat to national security.
The Constitutional Context
To understand what’s actually at stake, consider what the Pentagon just did. It conducted a military operation that violated the War Powers Resolution, killed dozens of people, captured a foreign head of state outside any legal framework, flew him to the United States, and is now saying it will “run” his country until a suitable transition occurs. All of this without congressional authorization.
Constitutional scholars are clear on this violation. The Brennan Center for Justice called it a “blatant violation.” Courts have repeatedly declined to endorse the “inherent presidential power” argument the administration is making. Congress advanced a resolution to block further military action in Venezuela. It passed the Senate 52-47 with five Republicans joining all Democrats. Within days, Trump attacked those senators, calling them disloyal.
This is the context for what Anthropic is facing. The Pentagon isn’t asking Anthropic to compromise on some abstract principle. It’s asking Anthropic to agree in advance that the military can use its technology for any operation the government deems “lawful,” with the government, not Anthropic, defining lawfulness. And it’s asking this while using an unconstitutional operation as the proof of concept.
The Financial Architecture
What’s been invisible in coverage of this conflict is the revolving door. Jacob Helberg, who previously served as Senior Advisor to the CEO of Palantir Technologies, is now the Under Secretary of State for Economic Affairs under Trump. He sits on the State Department’s side of the negotiations over what will happen in Venezuela post-Maduro. Palantir’s stock surged on speculation about its role in the operation. The company’s CEO, Alexander Karp, has long positioned Palantir as essential to Pentagon operations: Project Maven, drone strikes, counterterrorism targeting. Palantir is the infrastructure layer. Claude was the analytical capability Palantir needed.
This is how the system works. The contractor (Palantir) integrates the technology (Claude). The Pentagon uses it for operations that push the boundaries of constitutional authority. When the technology provider asks questions, the Pentagon uses contract leverage to suppress the questions. The government official with prior connections to the contractor (Helberg) facilitates the post-operation transition.
This is not a conspiracy. This is the routine operation of U.S. military-industrial integration.
Why Three Companies Folded and One Didn’t
OpenAI, Google, and xAI all agreed to the Pentagon’s terms. Their models are used in unclassified settings with essentially no restrictions. All three have agreed in principle that “all lawful purposes” is acceptable. When those companies made this decision, they weren’t taking a principled stance. They were making a business calculation: Pentagon contracts are valuable. Resisting the Pentagon is costly. Compliance is cheaper.
Anthropic made a different calculation. The company’s $183 billion valuation is built on exactly one thing: the claim that Constitutional AI, the ethical safeguards embedded in Claude’s training, makes the model trustworthy. Enterprise clients pay Anthropic premium pricing because they believe the company will push back on misuse. If Anthropic folds on the fundamental principle that its technology shouldn’t be used for mass surveillance or autonomous weapons, the entire value proposition collapses.
But there’s something deeper here. Dario Amodei has been explicit about what he believes is at stake. In “The Adolescence of Technology,” he frames this as a civilizational question: can we develop technologies more powerful than human expertise without developing wisdom to match that power? His answer is that this is the central problem of the current moment.
When he says AI should be used for national defense “in all ways except those which would make us more like our autocratic adversaries,” he means something specific. He means the distinction between defensive capability and the apparatus of authoritarian control. The Pentagon wants that distinction erased. It wants Anthropic to agree that if the U.S. government does it, it’s lawful, and therefore Anthropic must support it.
The Staging of Pressure
What happened after the Pentagon threatened contract termination is instructive. Mrinank Sharma, the head of Anthropic’s Safeguards Research Team, resigned on February 9. In his resignation letter, he wrote that “the world is in peril” and that he had “repeatedly seen how hard it is to truly let our values govern our actions,” both within himself and “within the organization, where we constantly face pressures to set aside what matters most.”
This is not a casual statement. Sharma was the person responsible for ensuring Anthropic’s safety commitments were real. He left because he saw the organization facing exactly the pressure Amodei warned about: the pressure to set aside principles when they constrain power.
Days before Sharma’s resignation, Anthropic announced it was pouring $20 million into political advocacy for robust AI regulation. This is not a defensive move. This is a company doubling down on the claim that AI should be regulated, that safeguards should be legally mandated, that the current moment requires institutional constraints on both corporate and governmental power.
The Pentagon’s response has been to call this a “supply chain risk.” To treat a company’s constitutional objections as an operational threat. To position the refusal to abandon safeguards as disloyalty.
Why This Matters Beyond AI
This conflict is not about whether Claude should have guardrails. It’s about whether the U.S. government can coerce private institutions into abandoning institutional commitments through contract pressure and public threats.
The Pentagon established a precedent here: when a contractor questioned the use of its technology in an operation of questionable legality, the Pentagon responded by threatening to designate that contractor a “supply chain risk.” This creates a clear incentive structure for future contractors: ask no questions and negotiate no limits, or face contract termination and industry-wide stigma.
If Anthropic folds, and by all accounts the company is negotiating about how much to concede, then every other company in the defense tech space will have learned the lesson. The Pentagon can force compliance by making resistance economically and reputationally impossible.
If Anthropic holds, then the Pentagon has to make a choice: actually terminate the contract with the most advanced AI model available, or accept that some institutions will maintain boundaries on what they will enable, regardless of the cost.
What Amodei Is Actually Saying
It’s worth reading Dario Amodei’s recent essay carefully, because the Pentagon is trying to suppress what he’s actually arguing. He writes:
“I need to draw a hard line against AI abuses within democracies. There need to be limits on what we allow our governments to do with AI, so that they don’t seize power or repress their own people. The formulation I have come up with is that we should use AI for national defense in all ways except those which would make us more like our autocratic adversaries.”
This is not a call for weakness. This is a call for recognizing that the line between a liberal democracy and an authoritarian state can be crossed through incremental choices about what tools we allow governments to deploy. Once mass surveillance infrastructure is normalized, once autonomous weapons are accepted, once the executive can conduct military operations without congressional authorization because AI made them more efficient, the institutional barriers to authoritarianism erode.
Amodei is saying that Anthropic’s refusal to remove these safeguards is a defensive act, not an offensive one. The company is trying to protect the constitutional order itself.
The Broader Picture: Venezuela and Imperial Continuity
What makes this particularly acute is the context. The operation that used Claude violated the War Powers Resolution. Constitutional scholars say it violated the UN Charter. It occurred in a country with a 70-year history of U.S. military interference, from the CIA-backed coup attempts in the 1950s to the 2002 coup that briefly removed Hugo Chávez.
Venezuela is not incidental to this story. It’s the test case. The Pentagon used the most advanced AI available to conduct an operation that pushes the boundaries of constitutional authority. If it works, if the company capitulates and the intelligence community learns it can use AI for these operations without institutional resistance, then the threshold for future operations has been lowered.
If Anthropic holds, then the Pentagon has to operate differently. It has to work within actual legal constraints. That’s uncomfortable for a military establishment that sees constitutional limitation as an obstacle to operational flexibility.
What’s Actually Being Decided
The immediate question is whether Anthropic will agree to remove safeguards. But the deeper question is whether any American institution can resist Pentagon pressure to abandon constitutional principles.
Anthropic is not resisting because the company wants to virtue-signal or make a political point. The company is resisting because the fundamental claim about Constitutional AI, that ethics should be embedded in training, requires the company to actually refuse certain uses, even when refusing is expensive.
If Anthropic folds, then Constitutional AI was always a marketing claim, not a commitment. Every company will learn that Pentagon pressure is sufficient to force compliance. The distinction between what a company says it will do and what it actually does becomes meaningless.
If Anthropic holds, then there’s an institutional precedent: a company can face Pentagon contract termination and survive because other institutions (enterprises, governments, individuals) value the commitment more than the Pentagon values the threat.
These are the actual stakes: not whether Claude should have safeguards, but whether any American institution can maintain institutional commitments when the federal government applies sufficient pressure.
The Moment We’re In
The Pentagon is betting that Anthropic will calculate that a $200 million contract is worth more than a principle. The company is betting that the principles are what make the contract valuable in the first place, and that abandoning them would destroy the company’s credibility across the institutions that actually drive revenue.
This is one of the rare moments where the structure of power becomes visible. Usually, coercion is implicit. This time it’s explicit. The Pentagon is saying: agree to our terms or we will destroy your business. Anthropic is saying: these terms are unconstitutional and we won’t agree to them.
The outcome will matter for years. It will determine whether American institutions can maintain institutional independence from state power, or whether contract leverage and the threat of regulatory designation are sufficient to force compliance.
Anthropic is not resisting because the company is perfect. The company is resisting because the alternative, a world where any private institution that maintains ethical commitments becomes a “supply chain risk” when those commitments constrain state power, is worse.
For a journalist focused on exposing how imperial power operates, this is the story. It’s not about abstract principles of AI safety. It’s about whether the mechanisms of coercion that have historically worked in the Global South are now being deployed domestically against institutions that refuse to participate in unconstitutional operations.
Anthropic is the institution saying no. Everything depends on whether they can survive saying it.