
AI and the national security calculus


What Happened

  • The controversy over AI integration in US military systems — highlighted by the "Signalgate" episode (March 2025), where senior US officials accidentally included a journalist in a Signal chat group discussing live military strike plans against Houthi targets — has reignited debate about AI use in national security decision-making.
  • A recent analysis argues that the integration of AI into military operations requires "plurilateral commitments" by states to responsible use, rather than leaving governance to bilateral or unilateral frameworks.
  • The 2026 UN deadline for a legally binding treaty on Lethal Autonomous Weapons Systems (LAWS) is under severe strain, with major military powers — the US, Russia, and China — resisting binding international constraints.
  • The Pentagon has requested a record $14.2 billion for AI and autonomous systems research in FY2026, with its "Replicator" programme seeking to deploy thousands of autonomous drones and surface vessels.
  • Analysts warn that if a binding LAWS treaty is not reached by 2026, the pace of military AI development will likely outstrip any future regulatory effort, rendering it moot.

Static Topic Bridges

Lethal Autonomous Weapons Systems (LAWS) and International Humanitarian Law

Lethal Autonomous Weapons Systems (LAWS) are weapons systems that, once activated, can select and engage targets without direct human intervention. They represent a fundamental shift in warfare by delegating the "kill decision" — the choice of whom to target — from a human to an algorithm.

LAWS raise acute questions under International Humanitarian Law (IHL), particularly around the principles of distinction (distinguishing combatants from civilians), proportionality (ensuring civilian harm does not exceed military advantage), and precaution (taking all feasible steps to minimise civilian casualties). Scholars argue that an autonomous system cannot exercise the contextual moral judgment these principles require.

  • Governing framework: Convention on Certain Conventional Weapons (CCW), 1980 — the main multilateral forum where LAWS are discussed; discussions began in 2014
  • UN General Assembly resolution (2024): called for negotiating a legally binding LAWS agreement by the 7th CCW Review Conference in 2026; 156 nations in favour, 5 opposed (including the US and Russia)
  • UN Secretary-General's New Agenda for Peace: called for a legally binding treaty prohibiting LAWS from operating without meaningful human oversight
  • IHL core principles: distinction, proportionality, precaution, military necessity, and humanity
  • Key concern: "meaningful human control" — no agreed international definition exists

Connection to this news: The article argues that without plurilateral commitments to responsible AI use in warfare, the absence of human control over lethal decisions creates both strategic instability (risk of unintended escalation) and humanitarian catastrophe. The Signalgate episode showed that even human decision-making in real-time, AI-enabled warfighting environments can fail catastrophically.

AI in Military Systems: The "Replicator" Programme and Strategic Competition

The US "Replicator" programme, announced in 2023 and allocated $1 billion in FY2025, aims to deploy thousands of expendable autonomous drones and unmanned surface vessels within 18–24 months. It reflects the US military's shift toward "attritable" autonomous systems — cheap, expendable, and scalable — as a counter to China's numerical military advantages.

China's state-backed military AI investment is estimated at approximately $15 billion annually, focused on "massive autonomy" — deploying swarms of autonomous systems to overwhelm adversaries. Russia has also integrated AI-enabled targeting systems in its Ukraine operations.

  • US Replicator programme: $1 billion allocation (FY2025); targets deploying "all-domain attritable autonomous" (ADA2) systems
  • Pentagon FY2026 AI budget request: $14.2 billion
  • China's military AI investment: estimated $15 billion annually (US Congressional estimates)
  • US Department of Defense Directive 3000.09 (2012, updated 2023): requires "appropriate levels of human judgment over the use of force" in autonomous weapons — but does not mandate full human control for all systems
  • "Human-on-the-loop" vs "Human-in-the-loop": key governance distinction — whether humans can override (on-the-loop) vs must approve each engagement (in-the-loop)

Connection to this news: The accelerating US-China military AI competition is precisely the dynamic that makes reaching a binding LAWS treaty by 2026 so difficult — the very powers driving military AI development are the ones blocking international constraints.

India's Approach to AI in Defence and Internal Security

India has taken steps toward integrating AI in defence, with the Defence AI Council (DAIC) and Defence AI Project Agency (DAIPA) established in 2019 under the Ministry of Defence. India's National Strategy for Artificial Intelligence (NITI Aayog, 2018) identified defence as a key application domain.

India has maintained a cautious stance on LAWS in international forums, supporting the principle of meaningful human control without endorsing a blanket ban. Domestically, AI tools are increasingly used in surveillance, border monitoring (smart fences along the LoC and IB), and counter-insurgency operations.

  • Defence AI Council (DAIC): policy body headed by the Defence Minister
  • Defence AI Project Agency (DAIPA): implementation arm for AI projects in defence
  • India's position on LAWS: supports "meaningful human control"; has not endorsed a blanket ban
  • NITI Aayog AI strategy (2018): identified defence, agriculture, healthcare, smart cities, and education as priority sectors
  • Smart Fencing Project (Comprehensive Integrated Border Management System — CIBMS): uses AI-enabled sensors, radars, and cameras along the border with Pakistan and Bangladesh
  • India's Integrated Theatre Commands (in progress): planned restructuring of military commands intended to incorporate AI-enabled joint operations

Connection to this news: As India develops its own AI-enabled defence capabilities while navigating international governance debates, the national security calculus outlined in the article is directly relevant — India must balance military modernisation imperatives with its support for a rules-based international order.

Key Facts & Data

  • Convention on Certain Conventional Weapons (CCW): adopted 1980; LAWS discussions under CCW began in 2014
  • UN General Assembly resolution on LAWS (2024): 156 in favour, 5 opposed (including US and Russia)
  • US Replicator programme funding: $1 billion (FY2025)
  • Pentagon FY2026 AI + autonomous systems budget request: $14.2 billion
  • China annual military AI investment: estimated ~$15 billion
  • US DoD Directive 3000.09 on autonomous weapons: first issued 2012, updated 2023
  • India's Defence AI Council (DAIC) and DAIPA: established 2019
  • "Signalgate" incident: March 13, 2025 — senior US officials accidentally added a journalist to a Signal group chat disclosing live Houthi strike plans
  • IHL principles governing targeting decisions: distinction, proportionality, precaution, military necessity