
Making AI for Everyone: Multilingual Open-Source AI Prototype Demonstrated at Bharat Mandapam


What Happened

  • At the India AI Impact Summit 2026 (Bharat Mandapam, February 20, 2026), a session titled "Making AI for Everyone: The Case for Personal, Local, Multilingual AI" demonstrated India's prototype for inclusive, locally relevant AI systems.
  • Indian AI companies Sarvam AI and BharatGen unveiled their latest Large Language Models (LLMs) at the Summit, reflecting the outcomes of the IndiaAI Mission launched in 2024.
  • Sarvam AI introduced two advanced models — Sarvam-30B and Sarvam-105B — both trained in India and designed to support multilingual deployments across government, enterprises, and developers.
  • BharatGen unveiled its Param2 17B Mixture-of-Experts (MoE) model, developed in collaboration with NVIDIA, optimised for multiple Indic languages and sectors including governance, education, healthcare, and agriculture.
  • Both Sarvam AI and BharatGen models are being open-sourced via platforms like Hugging Face, making them freely available to Indian and global developers.

Static Topic Bridges

Large Language Models (LLMs) — Technology Fundamentals

Large Language Models are deep learning neural networks trained on vast text corpora to understand and generate human language. They use the Transformer architecture (introduced in 2017 by Google researchers — "Attention Is All You Need"), which enables parallel processing of text through self-attention mechanisms. LLMs are characterised by their parameter count (weights in the neural network): models with more parameters generally have greater language understanding, though efficiency innovations like Mixture-of-Experts (MoE) architectures allow high capability with selective parameter activation.
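To make self-attention concrete, below is a minimal numpy sketch of scaled dot-product attention, the core Transformer operation. The dimensions, weight matrices, and names are illustrative toys, not any production model's configuration.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over one sequence.

    x: (seq_len, d_model) token embeddings; w_q, w_k, w_v: (d_model, d_head)
    projection matrices -- learnable weights of the kind counted in "30B"/"105B".
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # project tokens
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise token similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v                               # attention-weighted values

# Toy usage: 4 tokens, 8-dim embeddings and head
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)        # (4, 8)
```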

  • Transformer architecture: introduced 2017; basis for GPT (OpenAI), LLaMA (Meta), Gemini (Google), and Indian models
  • Parameter scale: 30B or 105B = number of learnable weights; larger models require more compute and memory to train and run (at 16-bit precision, a 105B-parameter model needs about 105 billion × 2 bytes ≈ 210 GB just to store its weights)
  • Mixture-of-Experts (MoE): Architecture where only a subset of model parameters ("experts") is activated for each input token, reducing compute cost while maintaining large model capacity; used in Mistral, Google Gemini, and BharatGen Param2 (see the routing sketch after this list)
  • Training on Indic languages: Requires Indic-language text corpora; most global models are English-dominant, causing poor performance in Indian languages
  • Open-source vs proprietary: Open-source models (weights publicly released) allow local deployment, fine-tuning, and customisation; proprietary models (GPT-4, Claude, Gemini) require API access
  • Fine-tuning: Adapting a pre-trained LLM on domain-specific data (e.g., legal, medical, agricultural) for specialised tasks
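To illustrate the Mixture-of-Experts idea from the list above, the sketch below routes a single token through only its top-k experts. The gating function, expert shapes, and sizes are simplified assumptions for illustration, not Param2's actual architecture.

```python
import numpy as np

def moe_layer(token, experts, gate_w, top_k=2):
    """Route one token through its top-k experts only.

    experts: list of (w, b) feed-forward weights; gate_w: (d_model, n_experts).
    Only top_k experts run per token, so compute stays well below what the
    full parameter count suggests -- the efficiency idea behind MoE models.
    """
    logits = token @ gate_w                        # gating score per expert
    top = np.argsort(logits)[-top_k:]              # pick the top-k experts
    probs = np.exp(logits[top]) / np.exp(logits[top]).sum()  # renormalised gate
    out = np.zeros_like(token)
    for p, i in zip(probs, top):
        w, b = experts[i]
        out += p * np.tanh(token @ w + b)          # weighted expert outputs
    return out

# Toy usage: 16-dim token, 8 experts, only 2 active per token
rng = np.random.default_rng(1)
d, n = 16, 8
experts = [(0.1 * rng.normal(size=(d, d)), np.zeros(d)) for _ in range(n)]
gate_w = 0.1 * rng.normal(size=(d, n))
print(moe_layer(rng.normal(size=d), experts, gate_w).shape)  # (16,)
```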

Connection to this news: The demonstration of Sarvam-30B/105B and BharatGen Param2 at Bharat Mandapam reflects India's transition from AI consumer to AI creator, with open-source multilingual models as the core differentiator for the Global South.
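Because the weights are open-sourced, a developer can pull such a model locally with standard tooling. The hedged sketch below uses the Hugging Face transformers pipeline; the repository id is a placeholder, not a confirmed model card, so substitute whatever the teams actually publish.

```python
# pip install transformers accelerate torch
from transformers import pipeline

# Placeholder repository id -- replace with the model card actually
# published on Hugging Face by the relevant team.
generator = pipeline(
    "text-generation",
    model="example-org/indic-llm-30b",   # hypothetical repo id
    device_map="auto",                   # spread weights across available GPUs
)

prompt = "भारत में डिजिटल सेवाओं का भविष्य"  # a Hindi prompt
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```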

IndiaAI Mission — Seven Pillars and Implementation

The IndiaAI Mission (approved by the Union Cabinet in March 2024 with a Rs 10,372 crore outlay over five years) is India's structured national AI programme, implemented through seven pillars by the IndiaAI Independent Business Division of the Digital India Corporation under MeitY. The Mission aims to make India a global AI powerhouse by building shared infrastructure, datasets, models, and a skilled workforce, on the lines of India's successful Digital Public Infrastructure (DPI) model.

  • Pillar 1 — IndiaAI Compute Capacity: 10,000+ GPU facility through public-private partnerships; provides subsidised compute access to startups and researchers
  • Pillar 2 — IndiaAI Innovation Centre (IAIC): Develops indigenous foundational AI models (BharatGen is the flagship)
  • Pillar 3 — IndiaAI Datasets Platform: Open, high-quality datasets in Indian languages for AI training
  • Pillar 4 — IndiaAI Application Development: Sector-specific AI applications in health, agriculture, education, governance
  • Pillar 5 — IndiaAI FutureSkills: Skilling 1 million+ students in AI; integration with NEP 2020
  • Pillar 6 — IndiaAI Startup Financing: Grants and funding to AI startups; Sarvam AI was selected under this pillar
  • Pillar 7 — Safe & Trusted AI: Responsible AI governance; IndiaAI Safety Institute established under this pillar
  • BharatGen: First government-supported multimodal LLM initiative; under the Department of Science & Technology (DST)

Connection to this news: The multilingual AI prototype demonstrated at Bharat Mandapam is the visible output of IndiaAI Mission Pillars 2 and 3 — the IAIC-backed BharatGen initiative and the Datasets Platform that enables Indic-language training.

AI and India's Language Diversity — Constitutional and Policy Context

India's Constitution deals with official languages in Articles 343 to 351. The Eighth Schedule lists 22 scheduled languages, and the People's Linguistic Survey of India counts roughly 780 languages in use; hundreds of millions of citizens access services only in regional languages. The National Language Translation Mission (NLTM), branded "Bhashini," was announced in the 2021-22 Union Budget and launched by MeitY in July 2022 to make digital services accessible in Indian languages. Bhashini provides a common platform for language translation, transcription, and voice interfaces for government services.

  • Article 343: Hindi in Devanagari script is the official language of the Union; English continues in official use
  • Eighth Schedule: 22 scheduled languages (added languages over time through Constitutional Amendments: 21st Amendment 1967 added Sindhi; 71st Amendment 1992 added Nepali, Konkani, Manipuri; 92nd Amendment 2003 added Bodo, Dogri, Maithili, Santhali)
  • Bhashini (National Language Translation Mission): Announced in Budget 2021-22, launched by MeitY in July 2022; provides AI-powered language tools in 22+ Indian languages; integrated with government services (a hedged translation sketch follows this list)
  • Digital Divide: Only ~36% of India's internet users browse in English; multilingual AI bridges access for the remaining majority
  • NPCI's UPI: Already supports regional-language interfaces; multilingual AI extends the same DPI philosophy
  • Sarvam AI: Claims support for 10 Indian languages in its foundation models; BharatGen's Param2 likewise targets multiple Indic languages
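Bhashini exposes its own service APIs, which are not reproduced here. As a generic open-model analogue of the translation capability described above, this sketch runs a Hugging Face translation pipeline with a placeholder model id standing in for an English-to-Indic translation model.

```python
from transformers import pipeline

# Placeholder model id -- swap in a real English-to-Indic translation
# model from Hugging Face; Bhashini itself is accessed via its own APIs.
translator = pipeline("translation", model="example-org/en-to-hi-mt")

result = translator("Crop insurance claims can be filed online.")
print(result[0]["translation_text"])  # Hindi output, given a suitable model
```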

Connection to this news: India's multilingual AI initiative is grounded in the constitutional mandate for language inclusion. Making AI accessible in all 22 scheduled languages is both a technological challenge and a constitutional aspiration, reflecting the "Sarvjan Hitaye" philosophy of the New Delhi Declaration.

Key Facts & Data

  • Summit: India AI Impact Summit 2026, Bharat Mandapam, New Delhi, February 16-21, 2026
  • New Delhi Declaration: adopted by 88 nations; philosophy: "Sarvjan Hitaye, Sarvjan Sukhaye"
  • Sarvam AI models launched: Sarvam-30B and Sarvam-105B (trained in India, 10 Indic languages)
  • BharatGen Param2: 17B parameter Mixture-of-Experts (MoE) model; developed with NVIDIA; open-sourced on Hugging Face
  • IndiaAI Mission: Rs 10,372 crore budget; Cabinet approval March 2024; 7 pillars
  • IndiaAI Compute target: 10,000+ GPUs through public-private partnerships
  • BharatGen: First government-supported multimodal LLM (under DST)
  • Bhashini (NLTM): Announced 2021, launched 2022; AI language platform for 22+ Indian languages
  • India's Eighth Schedule: 22 scheduled languages
  • India's internet users browsing in non-English: approximately 64%
  • Sarvam AI: Bengaluru-based; selected under the IndiaAI Mission's Startup Financing pillar in 2025