Artificial intelligence company Anthropic has filed a lawsuit seeking to overturn its placement on a Pentagon national security blacklist, escalating a dispute over how its AI technology can be used in military operations. The case was first reported by Reuters.
The lawsuit, filed Monday in federal court in California, argues that the government’s decision is unlawful and violates the company’s constitutional rights, including protections for free speech and due process.
“These actions are unprecedented and unlawful,” Anthropic said in its court filing. The company asked the court to reverse the designation and prevent federal agencies from enforcing restrictions tied to it.
The conflict began after the Pentagon formally labeled Anthropic a supply-chain risk. The move followed months of negotiations over limitations the company places on the use of its AI systems. According to sources cited by Reuters, the Defense Department objected to restrictions that prevent Anthropic’s technology from being used for autonomous weapons or domestic surveillance.
Dispute highlights tensions over AI and national security
Defense Secretary Pete Hegseth authorized the designation after Anthropic refused to remove those safeguards from its systems, including its flagship AI model Claude.
President Donald Trump later called for federal agencies to stop using the technology, and Axios reported that the White House is preparing an executive order that could formalize the ban across government operations.
Anthropic’s leadership has previously said the current generation of AI systems is not reliable enough for fully autonomous weapons. Chief Executive Dario Amodei has argued that deploying such systems could create significant risks.
Despite the legal challenge, Anthropic said it remains open to negotiations with U.S. officials.
The case could have broad implications for the AI industry, raising questions about how much control governments can exert over private companies developing advanced technologies.
Financial impact and industry reaction
Executives warned in court filings that the Pentagon designation could cause billions of dollars in lost revenue in 2026 and damage the company’s reputation as a trusted partner.
Anthropic officials said the government’s actions had already disrupted major contracts and negotiations. One partner reportedly replaced Anthropic’s Claude model with a competing system, eliminating a potential revenue pipeline worth more than $100 million. Separate negotiations with financial institutions valued at roughly $180 million have also stalled.
Investors and partners have been scrambling to limit the fallout, according to Reuters. Some enterprises may pause deployments of Anthropic’s AI tools until the legal dispute is resolved.
The company has also filed a second lawsuit in the U.S. Court of Appeals for the District of Columbia Circuit challenging a broader designation that could lead to its technology being banned across the civilian federal government.
Support for Anthropic has also emerged within the technology community. A group of 37 engineers and researchers from companies including OpenAI and Google submitted an amicus brief backing the firm. Among them was Google’s chief scientist Jeff Dean.
The group argued that government pressure could discourage open debate about AI safety and limit innovation.
The dispute comes as the U.S. government increases its reliance on artificial intelligence for defense and intelligence operations. In the past year alone, the Pentagon has signed agreements worth up to $200 million each with major AI developers, including Anthropic, OpenAI and Google.
The outcome of the case could shape how future AI systems are governed, and determine whether technology companies or governments ultimately control how these tools are used in military contexts.