US Government Severs Ties With Anthropic Over AI Safeguard Dispute

A major confrontation between the United States government and one of its leading artificial intelligence suppliers came to a head on Friday, when President Donald Trump directed all federal agencies to immediately halt their use of technology developed by Anthropic, the San Francisco-based AI company behind the Claude chatbot. Defence Secretary Pete Hegseth followed swiftly by designating Anthropic a national security supply chain risk, a classification previously reserved for companies linked to foreign adversaries such as China.
The breakdown capped weeks of increasingly tense negotiations over the terms under which the US military could use Anthropic's AI tools. At the centre of the dispute were two specific concerns raised by Anthropic: it did not want its technology deployed for mass domestic surveillance of American citizens, nor for fully autonomous weapons systems that could operate without human oversight. The Pentagon, for its part, insisted on what it described as the right to use Anthropic's models for any lawful purpose, without being bound by the company's internal guidelines.
Anthropic had been operating under a contract with the Department of Defense worth up to 200 million US dollars, signed in July 2024, which made it the first advanced AI company to have its models deployed on classified government networks. That contract was arranged in partnership with data analytics firm Palantir.
The confrontation escalated sharply this week when Defence Secretary Hegseth summoned Anthropic chief executive Dario Amodei to Washington for talks, while simultaneously warning the company it could face designation as a supply chain risk or be compelled to comply under the Defense Production Act, a 1950 law that grants the executive branch significant authority over domestic industries during national security emergencies. Amodei stated publicly on Thursday that his company could not in good conscience agree to the Pentagon's demands, and that it would prefer to end its government work rather than remove the safeguards it had placed on its technology.
Trump announced his decision on his Truth Social platform on Friday afternoon, directing every federal agency to stop using Anthropic's products, with a six-month transition period for departments, such as the Department of Defense, that had integrated the tools into classified operations. He also warned that the company would face civil and criminal consequences if it did not cooperate during the phase-out.
Hegseth posted his own announcement shortly afterwards, stating that Anthropic had been formally designated a supply chain risk and that, effective immediately, no contractor, supplier or partner working with the US military would be permitted to conduct any commercial activity with the company.
Anthropic said on Friday evening that it had not received any direct communication from either the White House or the Pentagon about the status of negotiations, and described itself as deeply saddened by the outcome. The company said it would challenge the supply chain risk designation through the courts, arguing it was legally unsound and set a dangerous precedent for any American company engaged in contract negotiations with the federal government. Anthropic also disputed the claim that the designation would prohibit all military contractors from working with the company, arguing that Hegseth did not have the legal authority to extend the ban that far.
The company was unambiguous about its core position. It stated it would not alter its stance on mass domestic surveillance or fully autonomous weapons systems regardless of pressure from the administration.
The dispute drew significant attention across the technology industry. OpenAI chief executive Sam Altman acknowledged uncertainty about how the Anthropic situation had escalated to this point, but said it had become an issue for the wider industry. In a memo to employees, and later publicly, he said his company held the same red lines as Anthropic when it came to domestic surveillance and autonomous weaponry. He subsequently confirmed that OpenAI had reached a separate agreement with the Pentagon to deploy its models on classified military networks, under terms he described as showing a deep respect for safety.
Hundreds of employees at OpenAI and Google signed a petition in support of Anthropic's position within 24 hours of the dispute becoming public. Elon Musk, by contrast, sided with the administration, and the Pentagon confirmed it was planning to give Musk's Grok AI system access to classified military networks.
The direct financial stakes for Anthropic are relatively contained. The company was valued at approximately 380 billion US dollars earlier this month, making the 200 million dollar contract a small fraction of its overall business. Industry analysts noted that the more significant threat was the supply chain risk designation, which could require a large portion of Anthropic's corporate customer base, including companies with existing or prospective defence contracts, to certify that they do not use Claude in any work performed for the military.
A former senior official at the Department of Defense, speaking without attribution, suggested Anthropic held a stronger hand than the government's rhetoric implied. The former official described the legal basis for both the Defense Production Act threat and the supply chain risk designation as extremely weak, and noted that the company neither needed the revenue nor risked reputational damage from the confrontation.
Separately, reporting indicated that, behind the scenes, Pentagon officials were still attempting to offer Anthropic a revised deal even as Hegseth announced the designation publicly, highlighting a lack of coordination in the administration's approach.
Amodei, an early OpenAI employee who left to co-found Anthropic with several colleagues following an internal disagreement, said in a statement that he believed deeply in using AI to defend democratic nations, but could not support the removal of limits that existed to prevent harm to civilians.
Industry Impact and Market Implications
The fallout from this dispute carries implications that extend well beyond Anthropic and the current administration. For the first time, a major American technology company has been subjected to a designation mechanism that has historically been applied to entities considered instruments of hostile foreign states. If the supply chain risk classification survives legal challenge, it would establish a precedent under which the government could effectively isolate a domestic firm from entire sectors of the market in response to a commercial disagreement over terms of use.
The broader AI industry is now confronted with a question it had not fully anticipated: whether selling AI tools to the government requires accepting that those tools may be deployed without restriction, regardless of the developer's own safety policies. The Pentagon's insistence on unrestricted lawful use sits in direct tension with the safety frameworks that AI developers have spent years building and publicly defending. Companies that have argued safety is central to responsible deployment now face pressure to subordinate that position to government contract requirements.
For enterprise customers of AI companies with defence contracts or aspirations to win them, the supply chain risk designation introduces a new category of commercial risk. If upheld, it would require companies to audit their entire AI supplier relationships before bidding on government work, potentially reshaping how businesses structure their technology stacks.
The speed with which OpenAI reached an agreement with the Pentagon on similar but not identical terms suggests the government is pursuing its AI procurement goals in parallel and is unlikely to slow its acquisition of AI capabilities regardless of the outcome in the Anthropic case. The Pentagon's reported interest in deploying Grok in classified settings introduces a further competitive dynamic, with companies linked to individuals aligned with the current administration potentially gaining commercial advantage.
The dispute also raises a structural question about AI governance. If AI developers are required by contract to remove safety constraints for government use, the incentive to develop and maintain those constraints is weakened. This could influence how AI companies approach safety investment more broadly, particularly those that see government contracts as significant revenue streams.
For Anthropic specifically, the short-term reputational outcome appears favourable within the technology sector, where its refusal to capitulate has drawn wide support. The longer-term question is whether the supply chain risk label, if it withstands legal scrutiny, materially affects its enterprise business development, particularly with large companies that have or seek defence relationships.