- Web Desk
- 13 Minutes ago
US Military used Anthropic’s Claude AI in Iran strikes despite Trump ban
The US military is reported to have relied on Claude, the artificial intelligence system developed by Anthropic, during strikes on Iran, even though President Donald Trump had just ordered a halt to its use by federal agencies.
According to multiple news outlets, including The Guardian, the AI tool was used in the weekend operation alongside Israeli forces for tasks such as analysing intelligence, identifying targets, and running combat simulations – work deeply embedded in military planning.
Trump issued the directive to sever government use of Claude and other Anthropic tools hours before the major air campaign began, describing the company as a security risk. His announcement instructed all federal departments to immediately cease using the company’s technology, though the Department of Defense was granted a transitional period of up to six months to phase out its use because many defence systems are tightly integrated with the AI model.
The move was part of a larger dispute that erupted after Anthropic, citing ethical concerns and its terms of service, refused Pentagon demands to allow unrestricted use of Claude, including for applications related to surveillance or autonomous weapons. That standoff escalated into a public clash between the company and senior US leaders.
As the controversy unfolded, rival AI firm OpenAI announced it had struck a deal with the Pentagon to supply its own technology for classified defence networks, positioning itself as a replacement for Anthropic’s services.