Expert Explains | ‘75 per cent chance Artificial General Intelligence will not succeed’


What Happened

  • Stuart Russell, Professor of Computer Science at UC Berkeley and co-author of the most widely used AI textbook (Artificial Intelligence: A Modern Approach), stated in an interview that there is roughly a 75% chance that Artificial General Intelligence (AGI) will not succeed under the current paradigm.
  • Russell, who directs the Center for Human-Compatible Artificial Intelligence (CHAI) at Berkeley, has been a leading voice on AI safety since 2013 and was named in TIME's 100 Most Influential People in AI (2025).
  • He argues that the current approach of scaling large language models (LLMs) -- which are trained to imitate human text -- is unlikely to produce true AGI, since such systems lack genuine understanding, reasoning, and the ability to operate safely in the real world.
  • Russell estimates that AI companies are investing at roughly 25 times the scale of the Manhattan Project, yet without adequate safety measures, creating what he terms a civilizational risk.

Static Topic Bridges

Artificial General Intelligence (AGI) vs Narrow AI

Artificial General Intelligence refers to a hypothetical AI system that possesses the ability to understand, learn, and apply knowledge across any intellectual task that a human can perform -- essentially matching or exceeding human cognitive abilities across all domains. This is distinct from Narrow AI (also called Weak AI), which is designed for specific tasks such as image recognition, language translation, or playing chess. Current AI systems, including GPT-class models and other LLMs, are classified as Narrow AI despite their impressive capabilities, because they lack genuine understanding, common-sense reasoning, and the ability to transfer learning flexibly across domains.

  • The term "Artificial General Intelligence" was popularized by researcher Ben Goertzel around 2007 to distinguish such systems from task-specific AI
  • Alan Turing's 1950 paper "Computing Machinery and Intelligence" (published in the journal Mind) proposed the "Imitation Game" (now known as the Turing Test) as a benchmark for machine intelligence -- a human evaluator judges natural-language conversations to determine which participant is human and which is machine
  • No AI system has definitively passed the Turing Test under rigorous conditions, though modern LLMs can sometimes fool human evaluators in short interactions
  • Leading AI researchers disagree on AGI timelines: some estimate 2030-2040, others consider it decades away or fundamentally impossible under current architectures

Connection to this news: Russell's claim that there is a 75% chance AGI will not succeed refers specifically to the current paradigm of scaling LLMs through imitation learning. He contends that even enormous compute investments cannot bridge the gap between pattern-matching and genuine intelligence without a fundamental architectural shift.

AI Safety and the Alignment Problem

The AI Alignment Problem refers to the challenge of ensuring that AI systems pursue objectives that are genuinely aligned with human values and intentions. Stuart Russell's 2019 book Human Compatible: Artificial Intelligence and the Problem of Control articulated three principles for beneficial AI: (1) the machine's only objective is to maximize the realization of human preferences; (2) the machine is initially uncertain about what those preferences are; and (3) the ultimate source of information about human preferences is human behaviour. Russell argues that building AI systems with fixed, pre-specified objectives is fundamentally dangerous because humans cannot perfectly articulate all their preferences -- analogous to King Midas's wish that everything he touched turn to gold.

  • A core difficulty is that a system optimizing a fixed objective too literally can behave harmfully without any malicious intent -- misalignment stems from misspecified goals, not hostility
  • Russell proposes "provably beneficial AI" using inverse reinforcement learning, where machines infer human preferences from observed behaviour rather than following fixed goals (a minimal sketch of this inference idea follows this list)
  • AI CEOs themselves have estimated a 10-25% chance of catastrophic outcomes from advanced AI systems
  • Yoshua Bengio (Turing Award winner, 2018) has argued that even a 1% chance of civilizational catastrophe from AI is unacceptable
  • Russell testified before the US Senate in September 2023, urging regulation of AI development
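
The inference idea behind this proposal can be made concrete with a toy model. The sketch below (referenced from the inverse-reinforcement-learning bullet above) infers hidden human preference weights from observed pairwise choices. The two-feature setup, the noisy-choice model, and the gradient-ascent fit are illustrative assumptions for this example, not Russell's or CHAI's actual algorithms.

```python
# Toy preference inference in the spirit of inverse reinforcement learning:
# instead of being handed a fixed objective, the machine observes human
# choices and infers the reward function behind them.
import math
import random

# Each option is described by two features, e.g. (speed, safety).
# TRUE_WEIGHTS are the human's hidden preferences, unknown to the machine.
TRUE_WEIGHTS = (0.3, 0.7)

def utility(weights, option):
    return sum(w * f for w, f in zip(weights, option))

def human_choice(a, b):
    """Simulate a noisily rational human picking between options a and b."""
    p_a = 1 / (1 + math.exp(utility(TRUE_WEIGHTS, b) - utility(TRUE_WEIGHTS, a)))
    return (a, b) if random.random() < p_a else (b, a)  # (chosen, rejected)

# Step 1: observe many pairwise human choices over random options.
random.seed(0)
observations = []
for _ in range(500):
    a = (random.random(), random.random())
    b = (random.random(), random.random())
    observations.append(human_choice(a, b))

# Step 2: infer the weights by gradient ascent on the log-likelihood of
# the observed choices under the same noisy-choice model.
w = [0.5, 0.5]
learning_rate = 0.5
for _ in range(300):
    grad = [0.0, 0.0]
    for chosen, rejected in observations:
        p = 1 / (1 + math.exp(utility(w, rejected) - utility(w, chosen)))
        for i in range(2):
            grad[i] += (1 - p) * (chosen[i] - rejected[i])
    w = [wi + learning_rate * g / len(observations) for wi, g in zip(w, grad)]

print(f"true weights:     {TRUE_WEIGHTS}")
print(f"inferred weights: ({w[0]:.2f}, {w[1]:.2f})")
```

The key design point, per Russell's argument, is that the machine starts uncertain about the objective and treats human behaviour as evidence about it, rather than optimizing a goal fixed in advance.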

Connection to this news: Russell's scepticism about AGI succeeding is closely tied to his safety concerns -- he argues that even if AGI were achievable, the current approach of building increasingly powerful systems without solving alignment first poses an existential risk.

India's AI Policy Framework -- IndiaAI Mission (2024)

The IndiaAI Mission was approved by the Union Cabinet on 7 March 2024 with an outlay of Rs 10,371.92 crore, aimed at building India's AI infrastructure, promoting indigenous AI capabilities, and ensuring responsible AI deployment. The mission is implemented by the Ministry of Electronics and Information Technology (MeitY) and is structured around seven pillars.

  • Seven Pillars:
      1. IndiaAI Compute Capacity -- deploying 10,000+ GPUs
      2. IndiaAI Innovation Centre -- developing indigenous Large Multimodal Models
      3. IndiaAI Datasets Platform -- unified access to non-personal datasets
      4. IndiaAI Application Development -- AI solutions for government and critical sectors
      5. IndiaAI FutureSkills -- AI courses and Data/AI Labs in Tier 2 and Tier 3 cities
      6. IndiaAI Startup Financing -- funding for deep-tech AI startups
      7. Safe & Trusted AI -- responsible AI frameworks and self-assessment tools
  • Union Budget 2024-25 allocated over Rs 551.75 crore specifically for IndiaAI Mission activities
  • India does not yet have a dedicated AI regulation law, unlike the EU; its approach has relied on voluntary guidelines and sectoral regulation
  • NITI Aayog published India's National Strategy for AI in 2018, focusing on AI for social good (#AIForAll)

Connection to this news: Russell's warnings about the risks of unregulated AI development are directly relevant to India's policy choices. While the IndiaAI Mission's "Safe & Trusted AI" pillar addresses responsible AI, India has yet to enact binding AI safety legislation comparable to the EU AI Act.

EU AI Act (2024) -- Global AI Regulation Benchmark

The European Union's Artificial Intelligence Act, which entered into force on 1 August 2024, is the world's first comprehensive legal framework for AI regulation. It adopts a risk-based classification approach, categorizing AI systems into four tiers: unacceptable risk (banned), high risk (stringent compliance required), limited risk (transparency obligations), and minimal risk (unregulated).

  • Prohibited practices (effective 2 February 2025): social scoring by governments, real-time biometric surveillance in public spaces (with exceptions), emotion recognition in workplaces and schools, untargeted facial recognition database scraping
  • High-risk systems (full compliance by 2 August 2026): AI in critical infrastructure, education, employment, law enforcement, migration, justice administration
  • General-Purpose AI (GPAI) models with systemic risk face additional obligations including adversarial testing and incident reporting (effective 2 August 2025)
  • Penalties for non-compliance: up to 35 million euros or 7% of global annual turnover, whichever is higher, for prohibited AI practices (see the worked example after this list)
  • India has not adopted a comparable law; its approach remains principles-based and voluntary
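
As a quick worked example of the penalty ceiling in the bullet above: the cap is the higher of the two figures, so the 7% rule only binds once worldwide annual turnover exceeds EUR 500 million. The company turnovers below are hypothetical.

```python
# Penalty ceiling for prohibited-practice violations under the EU AI Act:
# the higher of EUR 35 million or 7% of worldwide annual turnover.
def max_fine_eur(annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Hypothetical turnovers chosen to show where the 7% rule takes over.
for turnover in (100e6, 500e6, 10e9):
    print(f"turnover EUR {turnover:,.0f} -> fine cap EUR {max_fine_eur(turnover):,.0f}")
```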

Connection to this news: Russell has consistently advocated for binding government regulation of AI, arguing that companies will not develop safe AGI voluntarily. The EU AI Act represents the kind of regulatory framework Russell supports, while India's current approach of voluntary guidelines contrasts with this model.

Key Facts & Data

  • Stuart Russell is Professor of Computer Science at UC Berkeley and co-author of Artificial Intelligence: A Modern Approach (first published 1995, now in its 4th edition, 2020)
  • He founded the Center for Human-Compatible AI (CHAI) at Berkeley
  • Named in TIME 100 Most Influential People in AI (2025)
  • Testified before the US Senate Judiciary Committee on AI regulation (September 2023)
  • IndiaAI Mission: approved 7 March 2024, total outlay Rs 10,371.92 crore, implemented by MeitY
  • EU AI Act: entered into force 1 August 2024; full applicability by 2 August 2026; first comprehensive AI law globally
  • Turing Test: proposed by Alan Turing in 1950 paper "Computing Machinery and Intelligence" published in Mind (Vol. 59, No. 236)
  • AI investment scale: Russell estimates current AGI-related spending is approximately 25 times the scale of the Manhattan Project