Pentagon gives Anthropic deadline to ease AI safeguards
WASHINGTON: US Defence Secretary Pete Hegseth has issued a deadline to Anthropic, urging the firm to reconsider safety restrictions on its artificial intelligence systems used by the US military.

According to Axios, the Pentagon has given Anthropic until Friday to relax safeguards embedded in its models, as part of a broader push to make advanced AI tools more accessible within classified defence environments.

The move reflects growing urgency within the Defence Department to integrate cutting-edge AI into national security operations. Officials have reportedly been pressing major developers, including OpenAI, to adapt their systems for use on secure government networks with fewer operational limits.

Anthropic, however, has resisted these demands, citing safety and ethical concerns. The company maintains strict controls on how its models, such as Claude, can be deployed, particularly in military contexts.

The standoff highlights a widening divide between Silicon Valley’s emphasis on responsible AI use and the Pentagon’s push for fewer constraints in high-stakes defence applications. Axios previously reported that the Defence Department is considering cutting ties with Anthropic if the company does not comply, raising the possibility of a significant rupture in one of the military’s emerging AI partnerships.

As the deadline approaches, the outcome could shape how AI is governed in national security settings and how far companies are willing to go in balancing safety with state demands.

Separately, the United States Department of Defence has reached a major agreement with Elon Musk’s AI firm xAI to allow its Grok chatbot to operate within some of the Pentagon’s most sensitive classified systems, according to US media reports.

The deal would enable Grok to assist with tasks ranging from high-level intelligence analysis and weapons development to battlefield planning. Until now, Anthropic’s Claude model had been the only AI system approved for such secure environments.
