What Happened
- Artificial Intelligence is reshaping global power dynamics, warfare, and governance at an accelerating pace, prompting urgent questions about whether humanity can establish adequate checks and balances in time.
- The US-China AI rivalry has intensified, with China's DeepSeek R1 model (released January 2025) demonstrating capabilities comparable to leading Western models at a fraction of the training cost, sending shockwaves through global AI markets and contributing to one of the largest single-day declines in US tech stocks.
- China has announced a strategy to double down on open-source AI to influence global AI infrastructure, while the US continues to restrict chip exports to China under its semiconductor controls.
- The EU AI Act, which entered into force in August 2024, saw its obligations phase in progressively through 2025, establishing the first comprehensive binding AI regulatory framework globally, though it faced pushback over its scope and practicalities.
- India unveiled its AI Governance Guidelines in November 2025 under the IndiaAI Mission (MeitY), adopting a lightweight, sector-specific approach rather than comprehensive legislation.
- Analysts warn that dysfunctional international institutions, geopolitical rivalry, and divergent policy priorities are preventing substantive global AI governance cooperation, resulting in a fragmented mosaic of national policies rather than a cohesive international framework.
Static Topic Bridges
The US-China AI Arms Race and Technology Geopolitics
AI has become a central domain of great-power competition. The United States views AI leadership as critical to economic competitiveness and military superiority, while China's 2017 AI Development Plan explicitly targets global AI leadership by 2030. This competition spans semiconductor supply chains, talent acquisition, data access, and the development of autonomous systems.
- US export controls on advanced semiconductors (NVIDIA H100/A100 chips) are designed to slow China's AI development, though DeepSeek's cost-efficient models suggest China is finding workarounds
- DeepSeek's emergence in 2025 challenged the assumption that AI leadership requires massive compute expenditure, disrupting Western tech valuations
- China's open-source AI strategy — making models freely available globally — is a soft-power play to embed Chinese AI infrastructure in global systems
- Middle powers, including India, are positioned to benefit from US tech investment pledges even as they navigate strategic autonomy in AI governance
Connection to this news: The AI surge is not merely a technological phenomenon — it is a geopolitical inflection point. Nations that fail to develop domestic AI capacity risk technological dependence, with implications for economic sovereignty and national security.
AI in Warfare: Autonomous Weapons and the Ethics of Lethal Decision-Making
AI is transforming modern warfare through autonomous weapons systems, AI-enabled surveillance, cyber operations, and decision-support tools for military commanders. Lethal Autonomous Weapons Systems (LAWS) — capable of selecting and engaging targets without human intervention — pose fundamental ethical and legal challenges under international humanitarian law.
- No binding international treaty on autonomous weapons exists as of 2026; discussions continue under the Convention on Certain Conventional Weapons (CCW) framework at the UN
- AI-enabled cyber operations can target critical infrastructure (power grids, financial systems) with plausible deniability
- AI in intelligence analysis (pattern recognition from satellite imagery, signals intelligence) is already deployed by major militaries
- The principle of "meaningful human control" over lethal force is contested — militaries argue speed of AI decision-making may be operationally necessary, while ethicists insist on human accountability
Connection to this news: The article's concern about AI disrupting warfare governance reflects a real regulatory gap. Without international norms, AI-powered military escalation risks outpacing diplomatic frameworks — analogous to how nuclear weapons outpaced arms control in the early Cold War.
EU AI Act and Global Regulatory Divergence
The European Union's AI Act (2024) is the world's first comprehensive, legally binding AI regulatory framework. It classifies AI systems by risk level — from minimal risk (chatbots, spam filters) to unacceptable risk (social scoring systems, real-time biometric surveillance in public spaces) — and imposes obligations on providers and deployers accordingly.
- High-risk AI systems (in healthcare, education, critical infrastructure, law enforcement) require conformity assessments, transparency obligations, and human oversight mechanisms
- The EU AI Act applies extraterritorially — any AI system deployed in the EU market must comply, regardless of where it was developed
- Prohibited uses include AI-based social scoring by public authorities, real-time remote biometric identification in public spaces (with limited exceptions), and systems exploiting psychological vulnerabilities
- Full enforcement provisions apply from August 2026
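The Act's tiered structure described above can be sketched as a simple lookup. This is purely illustrative: the tier names follow the Act's terminology, but the example use cases and obligation lists here are simplified paraphrases, not the regulation's legal text.

```python
# Illustrative sketch of the EU AI Act's risk-tier logic.
# Tier contents are simplified paraphrases, not legal definitions.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring by public authorities",
                     "real-time remote biometric ID in public spaces"],
        "obligations": ["prohibited from the EU market"],
    },
    "high": {
        "examples": ["healthcare", "education",
                     "critical infrastructure", "law enforcement"],
        "obligations": ["conformity assessment", "transparency",
                        "human oversight"],
    },
    "minimal": {
        "examples": ["chatbots", "spam filters"],
        "obligations": ["voluntary codes of conduct"],
    },
}

def obligations_for(use_case: str) -> list[str]:
    """Return the (simplified) obligations attached to a use case."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["obligations"]
    return ["no specific obligations in this sketch"]
```

Note how the structure makes the regulatory design visible: obligations attach to the risk tier, not to the individual system, which is why the same conformity-assessment duties apply across healthcare, education, and law-enforcement deployments alike.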
Connection to this news: The EU AI Act represents the regulatory pole of "hard law" governance, while India and the US have chosen lighter-touch approaches. This divergence creates compliance complexity for global AI developers and raises questions about which regulatory model will become the global de facto standard.
India's AI Policy: IndiaAI Mission and Governance Guidelines
India's approach to AI governance, articulated through the IndiaAI Mission (launched 2024) and the AI Governance Guidelines (MeitY, November 2025), prioritises innovation enablement over prescriptive regulation. The framework adopts a sector-specific model, relying on existing regulators (RBI, SEBI, IRDAI, etc.) to govern AI in their respective domains.
- IndiaAI Mission budget: ₹10,371 crore over five years, covering compute infrastructure, datasets, AI applications, skilling, and startup ecosystems
- An AI Safety Institute is envisaged as a central research and risk-assessment body — not a regulator — to advise policymakers and test AI systems
- India's approach explicitly avoids an umbrella AI legislation comparable to the EU AI Act
- Major US tech companies (Google, Microsoft, Amazon) have pledged multi-billion dollar AI investments in India, reinforcing the "AI partner" positioning
Connection to this news: India's lightweight regulatory approach reflects a calculated bet — that excessive regulation could disadvantage domestic AI development at a critical inflection point. The AI Governance Guidelines frame India as a "responsible innovator," but critics question whether voluntary principles are sufficient given the scale of AI risks discussed in the article.
Key Facts & Data
- DeepSeek R1 (January 2025): Comparable to Western frontier AI models at a fraction of training costs; triggered one of the largest single-day US tech stock drops in history
- EU AI Act: Progressive enforcement from 2025–2026; world's first comprehensive binding AI law; applies extraterritorially to EU market
- IndiaAI Mission: ₹10,371 crore budget; launched 2024 under MeitY
- India AI Governance Guidelines: Released November 5, 2025 — sector-specific, lightweight, no umbrella legislation
- US-China semiconductor war: US export controls on advanced AI chips (A100, H100, H800 series) in force since 2022–2023
- No binding global treaty on autonomous weapons (LAWS) as of 2026; discussions under UN CCW framework ongoing
- Global AI governance described as a "mosaic" — national policies, summit declarations, voluntary commitments — rather than a cohesive international order
- India ranks among the top 10 in government AI readiness (Oxford Insights 2025 index)