
India's top court angry after junior judge cites fake AI-generated orders


What Happened

  • The Supreme Court of India has categorized the citation of AI-generated fake court precedents as a serious form of judicial misconduct — the first time the apex court has escalated AI hallucinations from a technical problem to a conduct issue with disciplinary consequences.
  • The case arose from an August 2025 trial court order in a property dispute (Andhra Pradesh) that dismissed defendants' objections by citing four Supreme Court judgments — all of which were subsequently found to be non-existent, fabricated by a generative AI tool.
  • A bench led by Chief Justice Surya Kant declared: "At the outset, we must declare that a decision based on such non-existent and fake alleged judgments is not an error in the decision-making process. It would be a misconduct and legal consequence shall follow."
  • The court sought responses from the Attorney General, the Solicitor General, and the Bar Council of India, and appointed Senior Advocate Shyam Divan as amicus curiae to examine systemic implications.
  • Earlier, in December 2024, the Bengaluru bench of the Income Tax Appellate Tribunal (ITAT) had issued an order citing four non-existent judgments — a pattern indicating the problem is systemic, not isolated.
  • The Bar Council of India was asked to examine its professional responsibility frameworks to address how advocates and judges interact with AI tools before submissions reach courts.

Static Topic Bridges

Large Language Models (LLMs) — the technology underpinning tools like ChatGPT, Gemini, and others — generate text by predicting the next statistically likely token based on patterns in training data. They do not "know" facts in the way a search engine retrieves them; instead, they produce text that is plausible-sounding but may be factually wrong. When an LLM "hallucinates," it generates authoritative-sounding citations, case names, judgments, or statistics that do not exist. In the legal context, this is especially dangerous because legal reasoning depends on the authority of precedent — a cited case that does not exist provides a false foundation for the entire argument. The phenomenon is not a bug but an inherent feature of how probabilistic language generation works, making human verification non-negotiable.

  • Hallucination: an LLM confidently outputs false information (citations, facts, quotes) in a realistic format
  • LLMs trained on text data have no mechanism to flag when generated content is invented vs. retrieved
  • Legal precedent (stare decisis): courts are bound by earlier decisions — if the cited decision does not exist, the argument collapses
  • Mitigation approaches: Retrieval-Augmented Generation (RAG) — grounding LLM outputs in verified document databases — can reduce hallucination rates
  • India-specific legal databases: SCC Online, Manupatra, Indian Kanoon — verified case law repositories that AI tools should be grounded in

Connection to this news: The trial court judge's reliance on AI-generated case citations without verification demonstrates the specific danger of unchecked LLM use in high-stakes legal contexts. The Supreme Court's misconduct ruling establishes that the duty of verification cannot be delegated to AI.


Judicial Accountability and the Responsibility Framework

The Indian Constitution vests extensive powers in the higher judiciary. Judges of the Supreme Court can be removed only through an impeachment process under Article 124(4) — by a resolution passed by a special majority in both Houses of Parliament after investigation by a committee. For subordinate judiciary, disciplinary proceedings are conducted under the High Court's supervisory jurisdiction (Article 235). The Supreme Court's characterization of AI hallucination reliance as "misconduct" is significant because it activates existing disciplinary frameworks. Misconduct by a judicial officer can lead to censure, suspension, or removal proceedings, and the court's language signals that ignorance of how AI tools work is not an acceptable defence.

  • Article 124(4): removal of Supreme Court judges by impeachment (special majority + investigation)
  • Article 217 (read with Article 218): removal of High Court judges follows the same procedure as under Article 124(4)
  • Article 235: High Courts exercise control over subordinate courts, including disciplinary jurisdiction
  • All India Judges Association: representative body of subordinate judicial officers; engages with government on service conditions
  • In Re: Justice C.S. Karnan (2017): SC took the rare step of convicting a sitting HC judge of contempt and sentencing him to imprisonment — illustrates that judicial immunity is not absolute
  • Amicus curiae: "friend of the court" — senior advocate appointed to assist the court on complex or novel legal questions

Connection to this news: By invoking the language of misconduct, the Supreme Court has set a precedent that professional accountability in the judiciary encompasses technology literacy. Future disciplinary proceedings against judges who rely on unverified AI outputs will likely cite this ruling.


Regulation of Emerging Technologies in Public Institutions

India's regulatory architecture for artificial intelligence is still nascent. The Digital Personal Data Protection Act, 2023 addresses data handling but not AI outputs. The National Strategy for Artificial Intelligence (NITI Aayog, 2018) set a broad vision for responsible AI but lacks enforcement mechanisms. The Information Technology Act, 2000 and its amendments do not directly address AI-generated content or liability for AI errors in professional settings. Globally, the EU AI Act (2024) is the most comprehensive legislation, classifying AI use in legal contexts as "high-risk" requiring human oversight. India's lack of an AI-specific regulatory framework means the judiciary must improvise — relying on existing professional conduct rules and contempt powers to address AI misuse.

  • Digital Personal Data Protection Act, 2023: governs data processing; does not address AI-generated output liability
  • IT Act, 2000 (amended 2008): covers intermediary liability; does not address professional AI use
  • NITI Aayog National AI Strategy (2018): non-binding, aspirational
  • EU AI Act (2024): first comprehensive AI regulation globally; classifies legal AI as high-risk requiring oversight
  • Bar Council of India Rules: govern professional conduct of advocates; being examined for AI-use provisions
  • IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: require platforms to take down misinformation but do not regulate professional AI use

Connection to this news: The Supreme Court's intervention creates de facto regulation of AI in judicial proceedings in the absence of formal legislation — establishing that existing professional conduct and contempt frameworks are available tools, and implicitly calling for legislative clarity on AI use in legal contexts.


Key Facts & Data

  • Incident: Trial court in Andhra Pradesh (August 2025 order) cited four non-existent Supreme Court judgments fabricated by AI
  • Supreme Court ruling: Characterised the citation as "misconduct" with legal consequences (February 2026)
  • Chief Justice: Chief Justice Surya Kant led the bench taking cognizance
  • Amicus curiae appointed: Senior Advocate Shyam Divan
  • Respondents summoned: Attorney General, Solicitor General, Bar Council of India
  • Earlier incident: Bengaluru ITAT bench cited four fabricated judgments (December 2024) — recalled within a week
  • Fictional case cited by Supreme Court bench: 'Mercy vs Mankind' — does not exist in any judicial database
  • Kerala High Court (2025): earlier issued a warning on AI hallucinations in citations
  • Bar Council of India: statutory body under Advocates Act, 1961; regulates legal profession
  • Digital Personal Data Protection Act, 2023: does not address AI-generated legal content liability