- Web Desk
- 1 Minute ago
US to integrate Musk’s Grok AI into classified military systems
WEB DESK: The United States Department of Defence has struck a significant agreement with Elon Musk’s artificial intelligence firm xAI to permit its Grok chatbot to operate within the Pentagon’s most sensitive classified systems, according to multiple US media reports.
According to RT News, the contract, confirmed on Monday by defence officials to Axios, would allow Grok to assist with duties ranging from high‑level intelligence analysis and weapons development to battlefield planning. Until now, Anthropic’s Claude model had been the sole AI tool approved for use in such secure environments.
A Fractious Shift in Military AI Policy
The move comes amid escalating tensions between the Pentagon and Anthropic, the San Francisco‑based AI developer behind Claude. Defence officials have reportedly grown frustrated with Anthropic’s refusal to lift certain safeguards on its technology – restrictions designed to prevent Claude from being applied to mass domestic surveillance or fully autonomous weapons systems without human oversight.
In contrast, xAI has agreed to the Pentagon’s demand that its AI be available for “all lawful purposes”, a broad standard that has become central to the Department’s push for more flexible AI tools in defence operations.
Anthropic Under Pressure
Secretary of Defence Pete Hegseth has summoned Anthropic’s co‑founder and chief executive, Dario Amodei, to the Pentagon for what sources expect to be a tense meeting on Tuesday. According to Axios, Mr Hegseth plans to press Anthropic to drop the remaining constraints on Claude or face serious consequences, including a possible designation as a “supply chain risk” – a label traditionally reserved for firms tied to foreign adversaries.
Such a designation could jeopardise Anthropic’s current defence contracts and compel government partners to certify that they do not use Claude in sensitive operations.
Anthropic has defended its position, with a spokesperson saying talks were continuing in “good faith”, even as the company emphasises its commitment to robust safety and ethical guardrails.
Broader AI Competition
Alongside xAI, Google and OpenAI have also been in discussions with the Pentagon about classified use of their generative AI systems. Reports suggest that Google’s Gemini model is close to an agreement to operate on secure networks, while OpenAI’s ChatGPT remains further from a deal due to ongoing safety deliberations.
Defence officials have cautioned that replacing Claude in classified systems will be a complex and technically demanding process, given the model’s deep integration into current workflows.
Strategic and Ethical Crossroads
The unfolding dispute highlights a growing tension within US defence policy: balancing national security requirements with ethical considerations around AI deployment. Anthropic’s insistence on maintaining certain safeguards sets it apart in an industry increasingly eager to court lucrative government contracts, even as policymakers push for greater flexibility in the use of advanced technologies.
Whether Grok will fully supplant Claude in top‑secret networks remains uncertain, but the Pentagon’s latest move clearly signals a willingness to diversify its AI toolkit – a strategic pivot that could have long‑term implications for both military operations and the wider tech sector.