CivilsWisdom.
Science & Technology · April 23, 2026 · Daily brief #7 of 19

War at machine speed: How AI became a decisive force in US-Israel conflict with Iran


What Happened

  • The US military's use of the Maven Smart System — an AI-driven targeting platform built by software company Palantir Technologies and running on the large language model Claude developed by Anthropic — has been identified as a decisive force multiplier in the 2026 US-Israel conflict with Iran.
  • Operation Epic Fury, the joint US-Israeli military campaign launched on February 28, 2026, saw Maven help identify and strike approximately 1,000 targets within the campaign's first 24 hours; by mid-April 2026, the system had contributed to over 11,000 strikes across Iran.
  • The Department of Defense designated Maven as an official "Programme of Record" — giving it a dedicated Congressional funding stream — with over 25,000 military accounts deployed across all US combatant commands globally.
  • Maven compresses targeting analysis that previously took days or weeks into a matter of minutes by ingesting and synthesizing data from satellites, surveillance drones, radar, and classified intelligence archives through a unified AI pipeline.
  • Critics and investigations have raised serious accuracy concerns: the Pentagon launched an inquiry into whether Maven played a role in a US strike on an Iranian girls' school that killed over 170 people, mostly children, after the AI system reportedly failed to identify the building as a school due to its proximity to an IRGC compound.
  • The system's combat deployment marks a significant threshold: it represents the world's first large-scale use of AI-assisted targeting in a major state-vs-state conflict.

Static Topic Bridges

Artificial Intelligence in Military Applications

Artificial intelligence (AI) refers to machine systems that perform tasks normally requiring human cognition — perception, reasoning, learning, and decision-making. In military applications, AI is being deployed for intelligence analysis, logistics optimization, cyber operations, autonomous vehicles, and, most controversially, targeting assistance in kinetic (lethal) operations.

  • "Project Maven" began in 2017 as a US Department of Defense initiative to apply computer vision and machine learning to drone surveillance footage analysis — the first large-scale military AI programme in the US.
  • By 2026, Maven had evolved into the Maven Smart System (MSS), integrating large language models (LLMs) to synthesize multi-source intelligence and generate target packages with GPS coordinates, weapons recommendations, and automated legal justifications (see the illustrative sketch at the end of this subsection).
  • Palantir Technologies, a US defence and intelligence data analytics company, holds the contract to build and maintain MSS; the system runs on Anthropic's Claude LLM for natural language reasoning tasks.
  • The system has 25,000+ military accounts and operates across all US combatant commands (geographic areas of military responsibility).
  • "Machine speed" targeting refers to the compression of the sensor-to-shooter timeline: where human analysts previously required hours or days to process imagery and recommend targets, AI systems can do so in minutes.

Connection to this news: Maven's role in generating 1,000+ strike targets in the first 24 hours of Operation Epic Fury illustrates how AI has shifted the pace and scale of modern warfare, raising questions about human oversight, accountability, and proportionality under international humanitarian law.
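
The internal format of such a target package is not public. As a minimal sketch, the Python snippet below imagines what a decision-support output with GPS coordinates, a weapons recommendation, and an auto-generated legal annotation might look like, and how a "meaningful human control" check could gate it. Every field name, value, and threshold here is a hypothetical illustration, not a description of the actual Maven Smart System.

```python
# Purely illustrative sketch: all field names, values and thresholds are hypothetical
# assumptions for explanation, not a description of the actual Maven Smart System.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetPackage:
    target_id: str
    latitude: float                        # GPS coordinate (decimal degrees)
    longitude: float
    confidence: float                      # model's confidence that this is a valid military objective
    recommended_weapon: str                # suggested munition type
    legal_annotation: str                  # auto-generated note; not a substitute for human legal review
    human_approved: Optional[bool] = None  # must be set by a commander before any strike

def needs_human_review(pkg: TargetPackage, threshold: float = 0.95) -> bool:
    """Flag packages that still require scrutiny: no human decision yet, or low model confidence."""
    return pkg.human_approved is None or pkg.confidence < threshold

# Example: a hypothetical package awaiting commander sign-off
pkg = TargetPackage(
    target_id="T-0001",
    latitude=34.0,                         # placeholder coordinates
    longitude=51.0,
    confidence=0.91,
    recommended_weapon="precision-guided munition",
    legal_annotation="auto-generated proportionality note (illustrative)",
)
print(needs_human_review(pkg))             # True -> routed to a human analyst
```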

Lethal Autonomous Weapons Systems (LAWS) and International Law

Lethal Autonomous Weapons Systems (LAWS) — colloquially called "killer robots" — are weapon systems that can select, identify, and engage targets without human intervention. They are distinct from remotely operated systems where a human decides each action. The distinction between AI-assisted targeting (where humans retain final authority) and fully autonomous systems (where the machine decides and acts) is legally and ethically critical.

  • The Maven Smart System is designed as a decision-support tool — it recommends targets, but human commanders are supposed to make the final strike authorization. This is described as maintaining "meaningful human control."
  • International humanitarian law (IHL), particularly the Geneva Conventions and their Additional Protocols, requires that military operations distinguish between combatants and civilians (distinction), that civilian harm not be excessive relative to the anticipated military advantage (proportionality), and that feasible precautions be taken to avoid or minimize civilian harm (precaution).
  • Critics argue that when AI systems generate hundreds or thousands of targets rapidly, the speed itself undermines meaningful human review — creating de facto autonomous targeting even if humans formally authorize each strike.
  • Discussions on regulating or banning such systems began under the UN Convention on Certain Conventional Weapons in 2014, and a formal Group of Governmental Experts (GGE) on LAWS has been debating a possible binding instrument since 2017; no binding international treaty on LAWS exists as of 2026.
  • Human Rights Watch and Amnesty International have called for a categorical ban on weapons systems that operate beyond meaningful human control.

Connection to this news: The Maven system's role in the Iranian girls' school strike — where the AI allegedly failed to identify the building as a civilian structure — is a real-world test case of the proportionality and distinction principles of IHL, and illustrates why LAWS regulation is a critical emerging governance challenge.

India's Position on AI in Warfare and Autonomous Weapons

India has been developing its own AI-for-defence capabilities and has engaged in international forums on autonomous weapons. India's stance at the UN Group of Governmental Experts on LAWS has been cautious: supporting the development of a regulatory framework but not endorsing an outright ban.

  • India's Defence AI Council (DAIC) and the iDEX (Innovations for Defence Excellence) programme have accelerated indigenous AI-for-defence development.
  • India has developed AI applications for border surveillance (Project BOLD-QIT), naval domain awareness, and logistics.
  • The Indian Army has been testing AI-enabled battlefield management systems (BMS) for rapid situational awareness and target coordination.
  • India's position at the UN: human control must be maintained in all lethal decisions; India supports developing "political declarations" rather than a legally binding treaty at present.
  • The US-Iran war's demonstration of AI-speed targeting creates significant pressure on all major militaries — including India — to either acquire comparable capabilities or face asymmetric disadvantage.

Connection to this news: India's defence establishment is closely watching the Maven system's performance and the governance questions it raises, as India develops its own AI-assisted military capabilities under the defence modernization programme.

The "Kill Chain" and Sensor-to-Shooter Integration

The "kill chain" is a military concept describing the sequence of steps from identifying a target to striking it: Find → Fix → Track → Target → Engage → Assess (sometimes abbreviated as F2T2EA). AI systems like Maven primarily compress the Fix, Track, and Target phases by rapidly processing sensor data.

  • Traditional kill chains in complex environments could take hours or days; with AI-assisted systems, this can be reduced to minutes.
  • Maven ingests classified feeds from satellites, surveillance drones, and archived intelligence data, then uses an LLM to synthesize information into prioritized target lists with precise GPS coordinates.
  • The system also generates automated "legal justifications" for each proposed strike — a feature that critics argue creates the illusion of legal compliance while bypassing genuine proportionality assessments.
  • Compression of the kill chain raises the question of "automation bias" — the tendency of human operators to trust and approve AI recommendations without independent verification, especially under time pressure.
  • The school strike investigation centers on whether human commanders had adequate time and information to identify the error before authorizing the strike.

Connection to this news: The speed enabled by the Maven system — 1,000 targets in 24 hours — is only possible because the kill chain has been radically compressed by AI. Whether this compression still allows meaningful human control is the central legal and ethical question raised by the war.
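
A back-of-envelope calculation shows why critics doubt that meaningful review survives this pace. Taking the reported figure of roughly 1,000 targets in 24 hours, and assuming a purely hypothetical number of parallel review cells, the time available per target is measured in seconds or, at best, a few minutes:

```python
# Back-of-envelope arithmetic using the reported ~1,000 targets in 24 hours.
# The number of parallel review cells is an assumption for illustration only.
targets = 1_000
hours = 24

seconds_per_target = hours * 3600 / targets
print(f"One review cell: ~{seconds_per_target:.0f} seconds per target")  # ~86 seconds

review_cells = 10  # hypothetical number of teams reviewing targets in parallel
minutes_per_target = seconds_per_target * review_cells / 60
print(f"{review_cells} parallel cells: ~{minutes_per_target:.1f} minutes per target")  # ~14.4 minutes
```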

Data Ethics and AI Accountability in Governance

AI accountability refers to the mechanisms by which individuals and institutions can be held responsible for decisions made or influenced by AI systems. In governance and ethics frameworks, this is closely linked to the principles of transparency, explainability, and answerability.

  • When an AI system contributes to a decision that results in civilian deaths, establishing accountability is complex: responsibility may lie with system designers, military commanders, procurement officials, or political leaders.
  • The Pentagon's investigation into whether Maven contributed to the girls' school strike is a significant test case for AI accountability in warfare.
  • Anthropic, the AI company whose Claude model powers Maven, reportedly had no contractual visibility into how its technology was being used for targeting — raising questions about corporate responsibility in dual-use AI.
  • The "accountability gap" — where no single human can be held fully responsible for an AI-assisted decision — is a central concern in international humanitarian law circles.
  • India's National Strategy for Artificial Intelligence (NITI Aayog's #AIforAll, 2018) and the Global Partnership on AI (GPAI), of which India is a founding member, both emphasize human-centric AI and accountability frameworks.

Connection to this news: The Maven system's civilian casualty controversy illustrates the accountability gap in AI-assisted warfare — a topic directly relevant to UPSC GS Paper 4 (Ethics: AI governance) and GS Paper 3 (emerging technology and national security).

Key Facts & Data

  • Maven Smart System was built by Palantir Technologies and runs on Anthropic's Claude large language model.
  • During Operation Epic Fury (launched February 28, 2026), Maven helped identify and strike approximately 1,000 targets within the first 24 hours; total exceeded 11,000 by mid-April 2026.
  • The US Department of Defense designated Maven as an official Programme of Record, with over 25,000 military accounts across all combatant commands.
  • The Pentagon launched an investigation into whether Maven contributed to a US strike on an Iranian girls' school that killed over 170 people, mostly children.
  • Maven compresses targeting analysis from days/hours to minutes by integrating satellite, drone, radar, and intelligence data through an AI pipeline.
  • Project Maven began in 2017 as a computer vision programme; by 2026 it had evolved into a full targeting-support system with LLM integration.
  • No binding international treaty on Lethal Autonomous Weapons Systems exists as of 2026; UN discussions on LAWS have been under way since 2014, with a formal GGE meeting since 2017.
  • India is a member of the Global Partnership on AI (GPAI) and has its own AI-for-defence initiatives under iDEX and the Defence AI Council.