The joint U.S.-Israeli strikes on Iran, codenamed Operation Epic Fury (U.S.) and Operation Roaring Lion (Israel), which reportedly killed Supreme Leader Ali Khamenei, devastated nuclear and missile sites, and caused widespread civilian harm, were a stark demonstration of AI’s arrival in live combat.
While much scrutiny has fallen on Anthropic’s Claude being used for intelligence fusion, target selection, and simulations even after President Trump’s ban, the truth is more layered: a diversified array of AI large language models (LLMs) from multiple providers underpinned the operation, allowing the Pentagon to bypass ethical disputes with any one company.
This multi-AI strategy, integrating Claude alongside OpenAI’s GPT series, Google’s Gemini, xAI’s Grok, and fusion platforms like Palantir’s, explains the seamless execution despite the high-profile fallout with Anthropic.
It also reveals a troubling priority: operational resilience and speed over meaningful ethical constraints, accelerating risks of eroded accountability, overconfident escalation, and the normalization of AI in lethal decision-making.
The clash with Anthropic crystallized in late February 2026. CEO Dario Amodei held firm on red lines barring Claude’s use for mass domestic surveillance or fully autonomous weapons without robust human oversight.
The Pentagon, insisting on unrestricted “lawful” applications, labeled Anthropic a supply chain risk, prompting Trump’s order to phase out its tools federally. However, U.S. Central Command (CENTCOM) continued employing Claude-embedded systems for processing vast intelligence (intercepts, satellite imagery, signals), prioritizing targets (including regime leadership), and running battle simulations right through the strikes.
The irony cuts deep: the White House severed ties over safeguards, but military dependence proved too critical to interrupt. Crucially, Claude was never irreplaceable. The DoD had built redundancy through 2025 contracts (up to $200 million each) with OpenAI, Google, and xAI, granting access to their frontier models on classified networks.
OpenAI sealed its deal mere hours after Anthropic’s blacklisting, offering models for intelligence analysis, real-time planning, and decision support, roles that complemented or substituted for Claude’s contributions.
Google’s Gemini supports multimodal data summarization and predictive modeling via government channels, while xAI’s Grok, with fewer content restrictions, handles edge-case reasoning. Palantir’s platforms orchestrate these, creating unified “battlefield digital twins” from multi-LLM outputs to fuse sensors, generate targets, and coordinate strikes involving B-2 bombers, Tomahawk missiles, low-cost LUCAS loitering drones, and cyber elements.
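To see why such a portfolio is resilient by construction, consider a minimal sketch of a provider failover layer. Everything here is hypothetical: the Provider class, query_with_failover, and ProviderUnavailable are illustrative stand-ins for the pattern described above, not any real vendor or Pentagon API.

```python
# Minimal, hypothetical sketch of a multi-provider failover layer.
# Provider, ProviderUnavailable, and query_with_failover are illustrative
# stand-ins for the pattern the article describes, not real vendor APIs.

from dataclasses import dataclass
from typing import Callable


class ProviderUnavailable(Exception):
    """Raised when a model refuses a request, is rate-limited, or is blacklisted."""


@dataclass
class Provider:
    name: str
    query: Callable[[str], str]  # stand-in for an actual model API call


def query_with_failover(providers: list[Provider], prompt: str) -> tuple[str, str]:
    """Return (provider_name, answer) from the first provider that responds."""
    for provider in providers:
        try:
            return provider.name, provider.query(prompt)
        except ProviderUnavailable:
            continue  # a refusal or outage silently falls through to the next model
    raise RuntimeError("no provider answered the request")


if __name__ == "__main__":
    def refuses(prompt: str) -> str:
        raise ProviderUnavailable("policy red line")

    def answers(prompt: str) -> str:
        return f"analysis of: {prompt}"

    fleet = [Provider("model-a", refuses), Provider("model-b", answers)]
    print(query_with_failover(fleet, "summarize sensor intercepts"))
```

Note what the except branch does: an ethical refusal from one vendor is swallowed and the request simply moves on. That is precisely how diversification converts a company’s red line into a routing detail.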
This portfolio approach is tactically brilliant. It guards against vendor lock-in, technical outages, or ethical refusals, ensuring continuity in contested environments. In the Iran operation, AI likely enabled rapid adaptation amid Iranian electronic warfare, optimizing multi-domain precision.
Yet accountability dilutes in a multi-AI ensemble: errors from one model (e.g., misidentified civilian sites) become hard to trace amid blended outputs. The official “human in the loop” risks becoming nominal as AI dominates analysis.
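A hypothetical sketch shows what restoring traceability would require: tagging every model output with provenance metadata before fusion, so a flawed recommendation can be attributed to its source. The function names and record fields below are illustrative assumptions, not a description of any deployed system.

```python
# Hypothetical provenance tagging: each model output carries its origin,
# so blended results stay auditable. All names and fields are illustrative.

import hashlib
import json
from datetime import datetime, timezone


def tag_output(model: str, prompt: str, output: str) -> dict:
    """Wrap a model output with traceable provenance metadata."""
    record = {
        "model": model,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_digest": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
    }
    # A digest over the whole record lets auditors detect later tampering.
    record["record_digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record


def fuse(records: list[dict]) -> dict:
    """Combine outputs while preserving which model said what."""
    return {
        "fused_summary": " | ".join(r["output"] for r in records),
        "sources": [
            {"model": r["model"], "record_digest": r["record_digest"]}
            for r in records
        ],
    }


if __name__ == "__main__":
    a = tag_output("model-a", "assess site X", "site X is military")
    b = tag_output("model-b", "assess site X", "site X may be civilian")
    print(json.dumps(fuse([a, b]), indent=2))
```

Without records like these at every fusion step, a misidentified site in the blended output is effectively orphaned from the model that produced it.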
Escalation risk rises as optimistic AI simulations inflate success probabilities, emboldening bolder actions like regime decapitation attempts. Ethical erosion accelerates: the Pentagon’s push against Anthropic’s limits, contrasted with OpenAI’s more flexible “all lawful uses” stance, sets up a race-to-the-bottom dynamic where companies soften safeguards to win contracts.
The hypocrisy is glaring. Trump condemned Anthropic for caution, yet operations leveraged its tools while pivoting to alternatives. This undermines U.S. credibility on “responsible AI” amid China’s unchecked military AI push.
The February 28 strikes mark warfare’s pivot: general-purpose AI militarized across providers for resilience, not restraint. Diversification ensures capability without pause, but it embeds algorithms deeper in destruction.
We need urgent recalibration: binding, auditable human oversight across all AI providers; transparent targeting audits; and international norms on autonomous systems before escalation becomes inevitable.
Otherwise, the precision of multiple AI models will exact a steep moral and human price. The munitions that struck Iran were propelled by fuel and guided by diverse AI LLMs. Those models now architect modern conflict, and ignoring their unchecked path endangers us all.
Naorem Mohen is the Editor of Signpost News. Explore his views and opinion on X: @laimacha.

