The Machine They Cannot Control
How the Pentagon’s war on AI ethics exposed the architecture of American power
The meeting at the Pentagon on Tuesday morning, February 24, was described by sources on both sides as cordial. No voices were raised. Defense Secretary Pete Hegseth reportedly praised Anthropic’s products. CEO Dario Amodei thanked Hegseth for his service to the country. Then Hegseth told Amodei he had until 5:01 PM on Friday to strip the ethical constraints from his company’s artificial intelligence system, or face consequences that included being designated a national security threat on par with Chinese and Russian adversaries.
This is what cordial looks like inside the American war machine.
The dispute, which reached its formal breaking point on Thursday when Anthropic publicly rejected what the Pentagon had called its “final offer,” is described in most American coverage as a contractual disagreement about usage terms. That framing is accurate in the way that describing the Atlantic slave trade as a labor procurement arrangement is accurate. The technical details are correct. The architecture of power being exercised is invisible.
What is actually happening, documented across primary sources including Amodei’s published statement and Axios reporting from inside the Pentagon, is this: the United States military is attempting to compel a private company to build infrastructure for the mass surveillance of American citizens and for weapons systems that can select and kill targets without human authorization. The company is refusing. And the military is now considering invoking a Cold War emergency statute to force compliance.
The Defense Production Act was designed for wartime industrial mobilization. It was used to compel factories to produce ventilators during a pandemic. Its invocation against a software company to force the removal of ethical constraints from an AI system would be, in the assessment of multiple legal experts cited by Defense News and the Associated Press, without precedent in the history of the law. Charlie Bullock, senior research fellow at the Institute for Law and AI, stated the likely outcome plainly: “If neither side backs down, it seems realistic that there would be litigation between Anthropic and the government.”
That litigation has not materialized because Anthropic, as of Friday’s deadline, simply refused. Amodei published his position in a statement addressed to the Department of War: “These threats do not change our position: we cannot in good conscience accede to their request.”
To understand why this matters beyond the commercial dispute, you have to understand what Anthropic is actually refusing to build.
The Pentagon’s stated requirement is that AI vendors agree to allow their models to be used for “all lawful purposes.” The phrase is designed to sound reasonable. The documentation tells a different story.
In his public statement, Amodei cited a 2022 declassified report from the Office of the Director of National Intelligence acknowledging that the Intelligence Community can legally purchase detailed records of Americans’ movements, web browsing, and associations from commercial data brokers without a warrant. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive profile of any person’s life, automatically and at massive scale. The key word in the Pentagon’s contract language is “lawful.” The surveillance Anthropic is refusing to enable is currently legal not because it has been authorized through democratic deliberation, but because existing law has not been amended to restrict practices that became technologically possible only recently.
The second red line concerns autonomous weapons: AI systems that select and engage targets without a human making the final decision to kill. Amodei acknowledged in his statement that even fully autonomous weapons “may prove critical for our national defense,” but argued that “today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.” His company offered to work directly with the Pentagon on research and development to improve reliability. The Pentagon declined the offer.
The Pentagon’s position, stated to CNN by an unnamed official, was that legality is the Pentagon’s responsibility as the end user, not Anthropic’s. The company should provide the tool. The military decides how to use it.
This is the oldest argument made by weapons manufacturers. It has never absolved them of consequences.
The landscape around Anthropic during these negotiations requires context that most American reporting has not provided.
OpenAI, Google, and xAI have agreed to remove their safeguards for use in the military’s unclassified systems and are each working toward deployment in classified environments on the Pentagon’s terms. The White House’s AI and crypto coordinator David Sacks publicly described Anthropic’s position as “woke AI.” Hegseth’s stated goal was to use the pressure on Anthropic to set terms for the entire industry. A senior administration official confirmed the Pentagon is confident the other three companies will agree to the “all lawful purposes” standard.
This is how institutional capture of a technology sector works. You do not need to break every company. You need to break the most resistant one publicly, so that the others understand what refusal costs.
By Thursday, the Pentagon had begun laying the groundwork for the blacklisting consequence by asking defense contractors, including Boeing and Lockheed Martin, to assess their exposure to Anthropic. The mechanism is straightforward: if Anthropic is designated a supply chain risk, every corporation holding a Pentagon contract must certify it does not use Anthropic’s products anywhere in its operations. Anthropic recently stated that eight of the ten largest American companies use Claude. The ripple effect of such a designation would reach into corporate procurement decisions, partnership agreements, and hiring calculations across the entire technology industry. The message being sent to every AI company watching is legible: comply, or we will make you untouchable.
There is a detail in this story that has received less attention than it deserves.
During the Tuesday meeting, Hegseth raised a specific operational complaint. According to Axios reporting, he cited the Pentagon’s claim that Anthropic had raised concerns to its partner Palantir about the use of Claude during what was described as the “Maduro raid.” Amodei denied that Anthropic had raised any such concerns or broached the topic with Palantir beyond standard operating conversations.
The significance of the Palantir reference should not be missed. Palantir Technologies has built its business model on integrating surveillance and military intelligence infrastructure. Its systems are used by American immigration enforcement, military commands across multiple theaters, and intelligence agencies across the Five Eyes alliance. Anthropic’s Claude is deployed through Palantir’s platform into classified environments. The complaint being raised, stripped of its diplomatic framing, is that Anthropic’s safety constraints created friction in a covert military operation against a foreign head of state.
This is what “all lawful purposes” means in practice.
Amodei’s published statement contains a passage the American press quoted but did not analyze sufficiently. He noted that the Pentagon’s threats are “inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
This contradiction is not an oversight. It is a coercion structure. The simultaneous threat of blacklisting and forced commandeering is designed to present Anthropic with no viable option except compliance. If the company is a security risk, no serious enterprise will continue using its products, and the company collapses commercially. If the company’s products are essential to national security, the government can force their use under emergency authority. The two threats work together precisely because they are logically incompatible. The target of coercion is not supposed to respond with logic. It is supposed to yield.
Anthropic did not yield.
The company’s Thursday statement was precise on the technical failure of the Pentagon’s “compromise” language: the new contract terms included nominal safeguards, but paired them with legalese that would allow those safeguards to be disregarded at will. The Pentagon had offered the appearance of a safeguard while preserving the ability to override it. Anthropic read the document and said so publicly.
For readers outside the United States, this dispute carries implications that domestic American framing obscures.
The AI systems being contested are not theoretical. Claude is already deployed in classified American military networks, used for intelligence analysis, operational planning, and cyber operations across active theaters. The debate about whether it will be used for mass domestic surveillance concerns Americans, but the autonomous weapons systems under discussion will not be deployed domestically. They will be deployed in the places where American military operations are currently conducted: the Middle East, the Sahel, the Pacific, and wherever the next classified operations unfold.
The question of who controls the kill decision in an AI-powered weapons system is not abstract for populations who live beneath American air power. It is the difference between a weapon that requires a human being to decide that this person, at this location, at this moment, should die, and a system that makes that determination algorithmically, at scale, without accountability and without the possibility of error correction after the fact.
Amodei acknowledged this directly in his statement: without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that highly trained, professional troops exhibit every day. He is making a technical claim about reliability. The political claim, which the evidence supports, is that the populations most exposed to the consequences of AI-powered warfare have no voice in these negotiations. They are not parties to the contract. They are the operational environment.
The outcome of this specific confrontation remains formally unresolved. The Pentagon has made threats it may or may not execute. The legal basis for invoking the Defense Production Act to strip safety constraints from software is, by the assessment of multiple legal scholars, deeply uncertain. Experts have stated that such a move would be without precedent in the history of the law, and that litigation over any such order would likely not favor the government.
But the deeper outcome has already been recorded.
The American national security establishment, under the current administration, has placed a principle on the table: private companies that develop AI do not own their ethical constraints. The military is the end user. The military determines legality. The military decides what the tool is for.
OpenAI accepted this. xAI accepted this. Google is, according to Pentagon sources, close to accepting this.
One company, so far, has not. And the machine designed to compel compliance is now in motion against it.