Anthropic sues over Pentagon AI blacklist
March 9, 2026 at 23:11 UTC

Key Points
- Anthropic has filed lawsuits to challenge a Pentagon move labeling it a supply chain risk
- The dispute centers on U.S. demands to allow Claude for any lawful military use
- Anthropic says it refused to enable autonomous weapons and mass surveillance
- President Trump ordered all federal agencies to stop using Anthropic’s AI
Anthropic launches legal fight with U.S. government
Artificial intelligence lab Anthropic has sued the U.S. government, escalating a clash over military use of its Claude AI system. The Amazon (AMZN)-backed start-up filed suit on Monday, challenging the Pentagon’s recent decision to formally designate the company a “supply chain risk” and bar its technology across federal agencies.
Anthropic has brought cases in California, where it is based, and in the U.S. Court of Appeals for the D.C. Circuit. The company argues the Trump administration’s actions are “unprecedented and unlawful” and claims the government is “seeking to destroy” its economic value in retaliation for its usage restrictions.
How the dispute over Claude’s military use began
According to Anthropic’s complaint, the conflict traces back to talks in fall 2025 over the Pentagon’s GenAI.mil platform. The company says it had spent years building Claude into the U.S. government’s most widely deployed frontier AI model, including on classified military networks, and had created a specialized “Claude Gov” version with loosened limits for national security work.
Anthropic alleges the Department of Defense then demanded it abandon its usage policy entirely and allow Claude to be deployed for, in the Pentagon’s words, “all lawful uses.” The company says it agreed broadly but refused on two non-negotiable points: lethal autonomous warfare without human oversight and mass surveillance of Americans.
Anthropic says Claude has not been tested for those scenarios and cannot perform them safely. It says it offered to help the Pentagon transition to another AI provider if the parties could not reach an agreement on those uses.
Competing accounts and Pentagon concerns
Pentagon officials have presented a different origin story for the dispute. The Defense Department’s chief technology officer has said tensions escalated after a U.S. raid in Venezuela, when an Anthropic executive allegedly called a counterpart at Palantir to ask whether Claude had been used in the operation. That claim does not appear in Anthropic’s lawsuit.
The Pentagon argues it needs full AI functionality for “any lawful” use and contends that Anthropic’s limits amount to a private company imposing policy restrictions on matters of warfare. U.S. law defines a supply chain risk in terms of systems that could “sabotage” or “maliciously introduce” unwanted functions.
From ultimatum to federal ban
Anthropic says U.S. Defense Secretary Pete Hegseth met CEO Dario Amodei on February 24 and issued an ultimatum: drop the restrictions within four days or face either compulsion under the Defense Production Act or expulsion from the defense supply chain as a national security risk.
Amodei publicly rejected the demand on February 26, warning of the risks of using untested AI in autonomous warfare and domestic surveillance. Before the 5:01 p.m. Eastern deadline on February 27, President Donald Trump posted on Truth Social ordering every federal agency to cease all use of Anthropic’s technology.
In his post, Trump described Anthropic as a “RADICAL LEFT, WOKE COMPANY,” and he has separately called the firm’s leadership “left wing nutjobs,” according to reporting cited in coverage of the dispute.
Impact of the supply chain risk designation
The Pentagon’s risk designation effectively bars Anthropic from doing business with federal agencies and could affect its work with contractors and suppliers. Anthropic is the first U.S.-based tech company to receive such a label, which had previously been applied only to foreign firms such as Chinese telecom group Huawei.
Anthropic argues the move penalizes it for insisting AI should be used in “the safest and the most responsible” way and for stating that AI should “maximize positive outcomes for humanity.” The company says even its most advanced models are not reliable enough for automated weapons and that their use in surveillance systems would violate fundamental rights.
Despite the contract fallout, reports say Claude remains embedded in the Defense Department’s operational intelligence systems. U.S. media have reported that Claude was heavily used in planning a recent U.S.-Israel attack on Iran, even as the broader government-wide ban takes effect.
Cancelled contracts and ongoing business
Anthropic had signed a $200 million contract with the Pentagon in July 2025, which was later cancelled after the breakdown in talks last month. At signing, a press release had promoted the deal as advancing “responsible AI in defense operations.”
Following the risk designation, Amodei has said the label has a “narrow scope” and that non-government businesses can still use Anthropic tools for projects unrelated to the Defense Department, as the company seeks to contain commercial fallout.
Key Takeaways
- Anthropic’s lawsuits directly target the legal basis for its risk designation, not just the loss of contracts.
- The dispute highlights a core conflict over who sets boundaries on advanced military AI uses.
- By labeling Anthropic a supply chain risk, the Pentagon has applied a tool previously used only against foreign tech firms to a U.S. company.
- Despite the ban, Claude’s deep integration in defense systems complicates efforts to fully sever government reliance on the technology.