AI at war: what the U.S.–Iran AI battlefield means for Pakistan


  • Sadia Basharat


The continuing US-led military campaign against Iran, now entering its second week, reflects how rapidly new technologies are reshaping modern warfare. In the opening phase of the conflict, American and Israeli forces carried out an unprecedented wave of strikes, hitting roughly 1,000 targets within the first 24 hours of the operation, according to military officials and analysts.

This pace was made possible not only by cutting-edge hardware but also by the incorporation of generative AI into targeting and decision-making cycles. At the core of this capability is Palantir's Maven Smart System, which combines intelligence from satellites, drones, signals intercepts, and other classified sources. It incorporates Anthropic's Claude large language model, which, according to several defense sources, has been producing suggested target lists, assigning exact coordinates, prioritising tasks by strategic importance, and even offering real-time post-strike evaluations.

Because military planners can now work at machine speed, condensing what used to take weeks of deliberation into hours or minutes, Iran's ability to react effectively is severely limited. This is thought to be one of the first widespread applications of generative AI to direct kinetic operations in a significant interstate conflict.

Prior uses, such as intelligence summarisation, logistics optimisation, counterterrorism assistance, and even the capture of Venezuelan President Nicolás Maduro in January 2026, have now expanded to include direct warfighting under U.S. Central Command. The paradox surrounding the technology's use, however, is the most striking feature.

Just hours before the campaign intensified, President Trump ordered federal agencies to phase out Anthropic tools over a six-month period after negotiations broke down. The Pentagon prioritised unrestricted operational access, while Anthropic had demanded strict limitations on applications such as fully autonomous lethal weapons and mass domestic surveillance.

Despite the public directive, Maven's Claude integration remains in combat use, as an abrupt shutdown is considered impractical. Reports suggest contingency measures, including possible emergency powers, to maintain access until replacements from OpenAI, xAI, or others are fielded. This episode has several pressing ramifications for Pakistan. First, it accelerates the compression of strategic decision-making time in future crises.

The ability to combine multi-domain intelligence and produce targeting packages at machine speed greatly widens capability gaps in South Asia, where nuclear thresholds and rapid mobilisation risks already define deterrence. The risk of unintentional escalation would increase if adversaries with similar tools could close perceived windows of vulnerability far faster than diplomacy or conventional forces could respond. Second, the conflict between the government and its vendors highlights the vulnerability of depending on foreign commercial AI ecosystems.

Once a capability is deemed mission-critical, private-sector ethical safeguards lose force, and states may resort to coercion or substitution to get around them. Pakistan, long aware of the strategic stakes, has invested in defense-relevant AI to ensure sovereignty in intelligence analysis, predictive modeling, and decision support, all under strict human-in-the-loop control. While generative AI accelerates operations, its biases and errors make meaningful human oversight in targeting indispensable. Pakistan must help shape international norms to safeguard human control over lethal force and uphold humanitarian law.

At home, the Gulf conflict fuels anxiety and energy instability.

Islamabad's careful diplomacy, condemning the strikes while calling for de-escalation, reflects the delicate balance of protecting economic stability, territorial integrity, and strategic ties with Washington, a challenge it has long anticipated through sustained AI preparedness.

Author

Sadia Basharat

Sadia Basharat is an Associate Producer at HUM News, with a background in research, editorial coordination, and strategic affairs. She holds an MPhil in Strategic Studies from the National Defence University, Islamabad, and writes on geopolitics, foreign policy, and security issues.
