Anthropic sues Pentagon over AI restrictions
Mar 02, 2026
AI company Anthropic has filed a lawsuit against the US Pentagon after the military placed the company on a national security blacklist.
The Pentagon made this decision after Anthropic refused to remove certain safety rules from its AI system, Claude. These rules prevent the AI from being used for things like autonomous weapons or domestic surveillance of Americans.
Anthropic says the government’s action is illegal and violates its free speech and due process rights under the US Constitution. The company is asking a federal court in California to overturn the designation and stop government agencies from enforcing it.
Anthropic said the government should not punish a company simply because it refuses to change its policies.
Last week, the Pentagon labelled Anthropic a “supply-chain risk.” This type of designation limits or blocks government use of the company’s technology.
According to reports, the military was already using Anthropic’s AI tools in some operations. Officials wanted the flexibility to use the technology for any lawful military purpose, but Anthropic refused to remove restrictions that prevent its AI from being used in fully autonomous weapons.
Defence Secretary Pete Hegseth approved the designation after negotiations between the Pentagon and Anthropic broke down.
Shortly afterwards, Donald Trump posted on social media, telling the entire US government to stop using Anthropic’s AI system, Claude.
Anthropic says the current generation of AI is not reliable enough to safely control autonomous weapons.
CEO Dario Amodei explained that the company is not completely against AI weapons in the future, but believes the technology today is too inaccurate and risky.
The company also strongly opposes using AI for mass surveillance of Americans, saying that would violate basic rights.
Anthropic says it still hopes to reach an agreement with the government and does not want a long legal battle.
Anthropic also filed a second lawsuit in Washington, DC. This case challenges a broader supply-chain risk designation that could potentially block the company from working with many other federal agencies, not just the Pentagon.
Government officials will now review how widely the restrictions should apply.
The conflict could hurt Anthropic’s business because the US government is a major customer for AI companies.
Some analysts believe other businesses might also pause their use of Claude until the legal dispute is resolved.
Meanwhile, the Pentagon has continued working with other AI companies. The Defence Department recently signed contracts worth up to $200 million each with several AI labs, including OpenAI, Google, and Anthropic.
After the dispute began, OpenAI quickly announced a deal to provide AI technology for Pentagon networks.