What Happened
- An analysis published ahead of the India AI Impact Summit argues that Asia requires a shared AI governance framework that balances technological progress with inclusive human development, given that the region is experiencing AI-driven transformation unevenly across developed and developing economies.
- The core argument: decisions about AI safety, bias, accountability, and social impact are currently being made predominantly by advanced economies and large technology corporations, while communities in South Asia and Southeast Asia — most affected by AI deployment — have limited voice in governance.
- The article calls for convergence on core values (safety, transparency, inclusiveness, global interoperability) as the foundation of any Asian AI governance compact, while acknowledging the diversity of national regulatory approaches across ASEAN, South Asia, and East Asia.
- India's AI governance posture at the Summit — PM Modi's MANAV framework and the Trusted AI Commons initiative — was positioned as a potential model for developing Asia.
- The analysis situates Asian AI governance needs within a global context where the EU's AI Act (the world's first binding AI regulation, 2024) and the US executive-order-based approach represent contrasting regulatory philosophies.
Static Topic Bridges
India's National AI Strategy and the IndiaAI Mission
India's AI governance approach is anchored in two foundational documents: NITI Aayog's National Strategy for Artificial Intelligence (2018, updated 2021) and the IndiaAI Mission (2023, MeitY-led). The national strategy identified five focus sectors — healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation — as priority AI application domains. The IndiaAI Mission operationalises this through seven pillars: Compute Capacity, Foundational Models, Datasets, Application Development, Future Skills, Startup Financing, and Safe & Trusted AI. India's AI governance model is distinctive in linking AI deployment to the country's proven Digital Public Infrastructure (DPI) stack — Aadhaar, UPI, DigiLocker — as a trust layer for AI applications in finance and governance. The India AI governance guidelines (2025) articulate principles of transparency, fairness, accountability, and explainability for AI systems deployed in public services.
- NITI Aayog National AI Strategy: "AI for All" — inclusive growth emphasis; 2018 original, updated 2021
- IndiaAI Mission (2023): ₹10,371 crore; MeitY-led; 7 pillars
- MANAV (PM Modi, 2026): Moral, Accountable, Neutral, Accessible, Valid — India's articulated AI ethics principles
- Trusted AI Commons (India AI Impact Summit, 2026): initiative to support the development of secure, trustworthy AI systems at the national level
- India AI governance guidelines (2025): principles of transparency, fairness, accountability, explainability — sector-specific focus on financial services and governance
- Digital Public Infrastructure (DPI): Aadhaar (biometric ID), UPI (payments), DigiLocker (document vault) as AI trust layer
Connection to this news: India's "AI for All" emphasis and its DPI-grounded governance approach make it a credible voice for a development-oriented Asian AI framework — one that links AI governance to inclusion outcomes, not merely to risk mitigation for advanced-economy users.
Global AI Governance Landscape: EU AI Act vs. Principles-Based Approaches
The European Union's AI Act (entered into force August 2024) is the world's first legally binding comprehensive AI regulation. It adopts a risk-based approach: classifying AI systems as unacceptable risk (banned), high risk (strict regulation), limited risk (transparency obligations), and minimal risk (no specific obligations). High-risk AI applications — including those used in education, employment, migration, critical infrastructure, law enforcement, and democratic processes — face mandatory conformity assessments, human oversight requirements, and transparency obligations. The US approach has relied on Executive Orders (EO 14110, October 2023) and sector-specific agency guidelines rather than comprehensive legislation. Singapore released its Model AI Governance Framework (2019, updated for agentic AI in January 2026) as a principles-based, voluntary framework — widely seen as the Asia-Pacific model for industry self-regulation.
- EU AI Act: entered into force August 2024; fully applicable from August 2026; risk-based classification (4 tiers)
- Prohibited AI: social scoring by governments, real-time biometric surveillance in public spaces (with exceptions), manipulative AI, emotion recognition in workplaces/schools
- US Executive Order on AI (EO 14110, October 2023): focused on safety, security, equity; revoked in January 2025 but sector guidance remains
- Singapore Model AI Governance Framework: 2019 (general); updated January 2026 (agentic AI); voluntary; widely adopted in ASEAN
- G7 Hiroshima AI Process (2023): international principles for advanced AI; 11 guiding principles adopted
- India, unlike the EU, has not enacted a standalone AI law; it relies on sector-specific guidelines and the existing IT Act, 2000 framework
Connection to this news: The article's call for an Asian AI framework implicitly navigates between the binding EU model (often viewed by developing economies as regulatory overreach) and the permissive US model — advocating a principles-based Asian compact that preserves regulatory sovereignty while establishing minimum interoperability standards.
AI and Inclusive Human Development in Asia
UNDP's Human Development Index (HDI) framework recognises technology as an enabler of human capabilities — health, education, and standard of living. AI's potential to accelerate human development in Asia is significant: AI-driven precision agriculture could address food security for 650 million smallholder farmers; AI diagnostics could extend quality healthcare to underserved populations; AI-enabled adaptive learning could personalise education at scale. However, these benefits are conditional on equitable access to compute infrastructure, training data that represents diverse Asian languages and contexts, and digital literacy. The digital divide in Asia is stark: while South Korea, Japan, and Singapore rank among global AI leaders, countries like Bangladesh, Myanmar, and Nepal face critical gaps in AI readiness — compute access, data infrastructure, and skilled talent. Without a governance framework that explicitly addresses capacity building and data equity, AI risks deepening existing development inequalities.
- Asia-Pacific accounts for ~4.5 billion people (~57% of global population); AI governance decisions here have global implications
- India's AI readiness: strong in software talent (~5 million IT professionals) but weak in compute hardware and AI-specific regulation
- Digital divide: ITU data shows 37% of the world's offline population is in South/Southeast Asia
- AI Workforce Development Playbook (India AI Impact Summit, 2026): initiative for AI skilling and literacy across developing Asia
- ASEAN AI Governance Framework (2019): voluntary; focuses on transparency and human-centricity for member states
- China's Global AI Governance Initiative: proposes multilateral AI governance with emphasis on national sovereignty and development rights
Connection to this news: The article's central argument — that a shared framework must translate technological progress into inclusive human development — directly engages the development-versus-safety tension in AI governance, which is the defining challenge for Asia's approach to AI regulation.
Key Facts & Data
- EU AI Act: entered into force August 2024; fully applicable August 2026; world's first binding comprehensive AI law
- Singapore Model AI Governance Framework: January 2026 update (agentic AI); voluntary, principles-based
- G7 Hiroshima AI Process: 11 guiding principles for advanced AI (2023)
- IndiaAI Mission: ₹10,371 crore; 7 pillars; MeitY-led (2023)
- MANAV (India's AI ethics framework): Moral, Accountable, Neutral, Accessible, Valid
- India IT Act, 2000: existing legal framework under which AI-related guidance is issued (no standalone AI law)
- Asia-Pacific population: ~4.5 billion (~57% of global); 37% of the world's offline population is in South/Southeast Asia
- ASEAN AI Governance Framework: 2019; voluntary; human-centricity focus