
Pentagon to adopt Palantir AI as core U.S. military system, memo says


What Happened

  • A March 9, 2026 memorandum from US Deputy Defense Secretary Steve Feinberg formally designated Palantir Technologies' Maven Smart System (MSS) as an official "programme of record" across the entire US military.
  • The designation locks in long-term institutional use of the AI-powered command-and-control platform for intelligence analysis, target identification, and battlefield data fusion.
  • The memo ordered oversight of Maven to be transferred from the National Geospatial-Intelligence Agency (NGA) to the Pentagon's Chief Digital and Artificial Intelligence Office (CDAO) within 30 days.
  • Maven Smart System currently has over 20,000 active users across 35 tools spanning the military services and combatant commands; its user base has more than doubled since January 2026 as the system was deployed at scale during US operations in the West Asia conflict.
  • Palantir maintains that humans remain responsible for approving all targeting decisions, and the system does not make autonomous lethal decisions.

Static Topic Bridges

Project Maven and the History of AI in Military Targeting

Project Maven (formally the Algorithmic Warfare Cross-Functional Team) was launched by the US Department of Defense in April 2017 to apply computer vision and machine learning to the analysis of drone surveillance footage and geospatial intelligence. Google was the original vendor but withdrew in 2018 following internal employee protests against developing AI for potential weapons applications. Palantir Technologies subsequently took over and built the Maven Smart System into a comprehensive command-and-control platform. Maven integrates data from satellites, sensors, surveillance aircraft, and human intelligence into a single interface, compressing the "kill chain" — the process from target identification to strike authorisation — from hours to minutes.

  • Project Maven launched: April 2017 by then-Deputy Defense Secretary Robert Work.
  • Google withdrew from Maven in June 2018 after 4,000 employees signed a petition; the episode triggered global debate on technology company ethics in defence contracts.
  • Palantir's Maven Smart System fused nine separate military intelligence platforms into a single interface.
  • During Operation Epic Fury (February–March 2026), the US struck between 5,500 and 6,000 targets in Iran, with the first 1,000 strikes authorised within 24 hours — enabled by Maven's data processing speed.
  • NATO also acquired the Maven Smart System in 2025 for allied military interoperability.

Connection to this news: The formalisation of Maven as a "programme of record" institutionalises AI-assisted targeting across the entire US military — moving AI warfare from experimental deployment to standard doctrine.

Autonomous Weapons Systems: International Law and Ethical Debates

Lethal Autonomous Weapons Systems (LAWS) — machines capable of selecting and engaging targets without meaningful human control — are the subject of active international legal debate under International Humanitarian Law (IHL). The Geneva Conventions and their Additional Protocols require distinction (between combatants and civilians), proportionality (collateral damage not excessive relative to military advantage), and precaution (all feasible measures to avoid civilian harm). Critics argue that AI-accelerated decision-making undermines "meaningful human control," the legal standard required under IHL, even when humans technically approve each strike.

  • The International Committee of the Red Cross (ICRC) has called for legally binding rules on LAWS since 2021, including a prohibition on autonomous targeting of humans.
  • The UN Convention on Certain Conventional Weapons (CCW) has been the forum for LAWS discussions since 2014; no binding treaty has been agreed.
  • The reported pace of one targeting decision roughly every 86 seconds during US operations, consistent with authorising the first 1,000 strikes within 24 hours (86,400 seconds ÷ 1,000 ≈ 86 seconds per decision), raises serious questions about whether human oversight is substantively meaningful or merely procedural.
  • India has not formally staked out a position on LAWS treaties, though it has participated in CCW discussions.
  • China and Russia have opposed binding prohibitions on LAWS, preferring non-binding political declarations.

Connection to this news: Palantir's insistence that "humans remain responsible", even as more than 20,000 active users process battlefield data under combat time pressure, illustrates the core tension in the LAWS debate: where the line falls between "human in the loop" (a human approves each action) and "human on the loop" (a human merely supervises and can intervene) when AI is operating at machine speed.

AI and National Security: India's Policy Landscape

India has recognised Artificial Intelligence as a strategic priority, with NITI Aayog's National Strategy for AI (2018) and the IndiaAI Mission under the Ministry of Electronics and Information Technology. In defence, the Defence AI Council (DAIC) and the Defence AI Project Agency (DAIPA) were established in 2019 to coordinate AI adoption across the Indian Armed Forces. India's approach emphasises "responsible AI", in line with its position in international forums on maintaining human control over lethal systems.

  • NITI Aayog's National Strategy for AI (2018): identified five focus sectors — healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation; defence was treated as a secondary application.
  • Defence AI Council (DAIC) and Defence AI Project Agency (DAIPA): established in 2019 under the Ministry of Defence, following the 2018 AI Task Force report, to drive AI integration.
  • India's defence AI applications are currently focused on logistics, surveillance data analysis, cyber defence, and decision support — not autonomous targeting.
  • The Indo-US iCET (Initiative on Critical and Emerging Technologies, launched January 2023) includes AI as a cooperation area, with implications for joint development of AI tools for defence.
  • India participates in the UN Group of Governmental Experts (GGE) on LAWS under CCW, advocating for human control standards.

Connection to this news: The US-Palantir Maven formalisation sets a precedent and a benchmark that will influence how India and other major militaries approach institutionalising AI in defence operations, particularly the question of where to draw the line between AI-assisted decisions and autonomous ones.

Key Facts & Data

  • Project Maven launched: April 2017, US DoD
  • Google withdrew from Maven: June 2018, following employee petition signed by 4,000 staff
  • Maven Smart System: fuses 9 intelligence platforms; 20,000+ active users across 35 military tools
  • Programme of Record designation: March 9, 2026 (Deputy Defense Secretary Feinberg memo)
  • US operations in West Asia (Feb–Mar 2026): 5,500–6,000 targets struck; first 1,000 in 24 hours using Maven
  • UN Convention on Certain Conventional Weapons (CCW): primary forum for LAWS negotiations since 2014
  • India's Defence AI Council (DAIC) and DAIPA: established 2019 under the Ministry of Defence
  • India-US iCET (Initiative on Critical and Emerging Technologies): launched January 2023