What Happened
- Global debate over the regulation of military artificial intelligence has intensified, with multiple international institutions and states pressing for binding rules on lethal autonomous weapons systems (LAWS) before they proliferate beyond control.
- The UN Secretary-General has recommended that states conclude, by 2026, a legally binding instrument to prohibit LAWS that function without human control and to regulate other categories of autonomous weapons.
- Under the Convention on Certain Conventional Weapons (CCW), a Group of Governmental Experts (GGE) has been meeting annually to negotiate rules, but its outputs so far — notably the 11 Guiding Principles adopted in 2019 — are non-binding articulations of how existing international humanitarian law (IHL) applies; no new treaty has been agreed.
- A significant geopolitical divide exists: the United States and Russia have argued that existing international law is sufficient, while a coalition of smaller and developing nations has called for an outright prohibition on fully autonomous lethal systems.
- Reports have emerged of technology companies including AI developers seeking contractual assurances from defence clients that their models will not be used for autonomous lethal decision-making without appropriate human oversight.
Static Topic Bridges
Lethal Autonomous Weapons Systems (LAWS) — The Core Concept
Lethal Autonomous Weapons Systems are weapon platforms capable of selecting and engaging targets without meaningful human intervention in the targeting decision. While a universally agreed definition does not yet exist, the ICRC's working definition — weapons that select and apply force to targets without human intervention — is the most widely cited reference point. LAWS range from existing systems with partial autonomy (such as missile-defence platforms like the Phalanx CIWS or Iron Dome, which can engage incoming threats automatically) to prospective fully autonomous "killer robots" that identify and strike targets independently using AI-driven target recognition.
- No universally agreed definition of LAWS exists in international law — this definitional gap itself is a major obstacle to treaty negotiations.
- Three categories are commonly discussed: (1) Human-in-the-loop — human approves each strike; (2) Human-on-the-loop — human can override but AI acts unless stopped; (3) Human-out-of-the-loop — fully autonomous targeting and engagement.
- Current AI-enabled military applications include target recognition, logistics, cyber operations, and intelligence analysis — not just weapons.
- The UN General Assembly's First Committee adopted its first resolution on LAWS in 2023 (Resolution 78/241), stressing the urgent need to address autonomous weapons and requesting the Secretary-General to seek states' views; only five states voted against, including Russia and India, while the United States voted in favour.
Connection to this news: The urgency of guardrails stems from the rapid deployment of AI in military contexts globally, with drone warfare in Ukraine and West Asia demonstrating that autonomous target-selection is no longer theoretical.
International Humanitarian Law (IHL) and Autonomous Weapons
International Humanitarian Law — also called the Law of Armed Conflict — governs conduct during armed conflict. It is codified primarily through the Geneva Conventions (1949) and their Additional Protocols (1977). Three core principles are directly implicated by autonomous weapons: Distinction (combatants must distinguish between military and civilian targets), Proportionality (expected civilian harm cannot be excessive relative to military advantage), and Precaution (all feasible precautions must be taken to avoid or minimise civilian harm).
- Autonomous systems raise the question of whether an AI can reliably apply the principle of distinction in complex, dynamic battlefield environments where combatants may not wear uniforms.
- Proportionality assessments require subjective human judgment about anticipated civilian harm versus military advantage — a value judgment that current AI systems cannot make in a legally recognised way.
- A critical problem is the "responsibility vacuum": if an autonomous system causes unlawful harm, it is unclear whether criminal or state responsibility falls on the programmer, the commander, or the manufacturer.
- IHL also requires that weapons not cause "superfluous injury or unnecessary suffering" — another standard that autonomous systems may be incapable of reliably applying.
- Additional Protocol I (1977) codified the principle of precaution (Article 57), requiring parties to do everything "feasible" to avoid civilian harm — whether autonomous systems can comply is contested.
Connection to this news: The core argument for guardrails is that LAWS structurally cannot satisfy IHL requirements for distinction, proportionality, and precaution as currently understood — making human oversight a legal as well as moral necessity.
India's Position and Domestic Relevance
India has engaged in the CCW GGE discussions but has not taken an extreme position in either direction. India generally supports the view that existing IHL applies to LAWS, has emphasised national sovereignty in decisions about autonomous weapons use, and voted against the 2023 UN General Assembly resolution on LAWS. India's own defence modernisation — including the iDEX (Innovations for Defence Excellence) programme, AI integration in border surveillance, and drone deployment along the LAC and LoC — gives these debates direct domestic relevance.
- India's Defence AI Council (DAIC) and Defence AI Project Agency (DAIPA) were established in 2019 to coordinate AI adoption in defence.
- India's Drone Rules, 2021 regulate civil airspace; military drone doctrine is still evolving.
- India has not signed the Ottawa Treaty (banning landmines) or the Cluster Munitions Convention — its approach to weapons treaties is shaped by security calculus rather than categorical prohibition.
- For UPSC purposes: India's position is broadly "supportive of human control, opposed to fully autonomous systems, but resistant to binding prohibitions that constrain military modernisation."
Connection to this news: As AI-enabled surveillance and drone systems are increasingly deployed on India's contested borders, the guardrails debate has direct policy implications for how India develops and constrains its own autonomous military capabilities.
Key Facts & Data
- CCW (Convention on Certain Conventional Weapons): Framework under which LAWS negotiations occur; GGE meets annually
- UN Secretary-General's LAWS deadline: Legally binding instrument recommended by 2026
- UN General Assembly resolution on LAWS (2023, Res. 78/241): Adopted with only 5 states voting against in the First Committee (including Russia and India; the US voted in favour)
- IHL core principles relevant to LAWS: Distinction, Proportionality, Precaution (from Geneva Conventions + Additional Protocol I, 1977)
- Three levels of human control: Human-in-the-loop / Human-on-the-loop / Human-out-of-the-loop
- India's defence AI bodies: Defence AI Council (DAIC) and Defence AI Project Agency (DAIPA), both est. 2019
- Key risk: "Responsibility vacuum" — no clear legal accountability when autonomous systems cause unlawful harm
- ICRC working definition of AWS: Weapons that select and apply force to targets without human intervention