
SC flags 'menace' of AI-generated fake judgments, cautions lawyers


What Happened

  • The Supreme Court of India flagged the growing "menace" of lawyers and litigants citing AI-generated, non-existent judgments in court proceedings, calling it a serious threat to judicial integrity.
  • Justices Rajesh Bindal and Vijay Bishnoi issued strong cautions to the legal fraternity about the unverified use of AI tools such as ChatGPT for legal research.
  • A High Court had already flagged that an appellant's submissions were generated using ChatGPT — including a cited judgment that does not exist in any law report.
  • The Supreme Court has served notice to the Attorney General and the Bar Council of India, framing the submission of fabricated AI-generated precedents as a form of professional misconduct.
  • In an earlier incident, Justice B.V. Nagarathna encountered a reference to a fictitious case titled "Mercy v. Mankind" during a PIL hearing. A Bengaluru bench of the Income Tax Appellate Tribunal also recalled an order that had cited three non-existent Supreme Court judgments generated via ChatGPT.

Static Topic Bridges

AI Hallucination in Generative AI Systems

Generative AI models like large language models (LLMs) produce text by predicting statistically probable outputs from their training data — they do not access live legal databases or verified repositories. This creates the phenomenon of "hallucination," where the model generates plausible-sounding but entirely fictitious case names, citations, and legal reasoning. In the legal domain, this is particularly dangerous because hallucinated precedents mimic the style and format of real judgments closely enough to go undetected without source verification.

  • AI hallucination is not a deliberate forgery — it is a technical limitation of probabilistic text generation.
  • LLMs trained on legal corpora frequently confabulate case names by combining real party names, jurisdictions, and dates from their training data.
  • Tools like ChatGPT and Gemini (formerly Bard) are general-purpose and have no built-in citation verification; dedicated legal AI tools (e.g., LexisNexis AI, Thomson Reuters CoCounsel) use Retrieval-Augmented Generation (RAG) to reduce, but not eliminate, hallucination.
  • The Bar Council of India Rules (Part VI, Chapter II, Rule 3) mandate an advocate's absolute duty to the court, making the submission of false citations professional misconduct under the Advocates Act, 1961.
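The retrieval step that separates RAG-based legal tools from general-purpose chatbots can be sketched in a few lines: before a citation is accepted, it is checked against a verified corpus. The corpus and case names below are hypothetical illustrations, not a real legal database or API.

```python
# Minimal sketch of retrieval-grounded citation checking — the core idea
# behind RAG-based legal AI tools. The corpus here is a tiny illustrative
# stand-in for a verified repository of real judgments.

VERIFIED_CORPUS = {
    "Kesavananda Bharati v. State of Kerala (1973)",
    "Maneka Gandhi v. Union of India (1978)",
}

def verify_citations(citations):
    """Split model-proposed citations into verified and unverified lists.

    A plain LLM emits citations with no lookup step at all, which is why
    plausible-sounding but fictitious cases ('hallucinations') slip through.
    """
    verified = [c for c in citations if c in VERIFIED_CORPUS]
    unverified = [c for c in citations if c not in VERIFIED_CORPUS]
    return verified, unverified

proposed = [
    "Maneka Gandhi v. Union of India (1978)",
    "Mercy v. Mankind (2023)",  # fictitious, echoing the reported incident
]
ok, flagged = verify_citations(proposed)
```

Even this grounding only reduces risk: a RAG system can still misquote or misapply a real judgment, which is why human verification before filing remains the advocate's duty.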

Connection to this news: The Supreme Court's escalation of AI fake citations from a technical error to a conduct issue directly invokes the BCI's disciplinary framework, marking the first time India's apex court has framed AI misuse as professional misconduct with legal consequences.

The Indian Judiciary's Technology Adoption Framework

India's Supreme Court has its own technological initiatives, including the eSCR (electronic Supreme Court Reports) portal, the Supreme Court Vidhik Anuvaad Software (SUVAS) for translation, and the SUPACE (Supreme Court Portal for Assistance in Courts' Efficiency) AI tool for assisting judges with case analysis — distinct from the unvetted commercial AI tools that caused the controversy. The National Informatics Centre (NIC) supports the judiciary's digital infrastructure.

  • SUPACE was introduced in 2021 as a judge-facing tool for processing case documents; it does not generate judgments or legal reasoning autonomously.
  • The e-Courts Mission Mode Project (Phase III approved in 2023 with ₹7,210 crore outlay) aims to digitise subordinate courts.
  • The SUVAS tool facilitates translation of Supreme Court judgments from English into Indian vernacular languages.
  • Unlike AI tools that generate new text, SUPACE retrieves and surfaces relevant documents from a curated legal corpus.

Connection to this news: The contrast between the Supreme Court's own carefully designed AI tools (SUPACE, SUVAS) and the reckless use of commercial chatbots by lawyers illustrates the gap between responsible AI integration in governance and unverified AI-assisted advocacy.

Bar Council of India and Advocate Professional Standards

The Bar Council of India (BCI) is the statutory body established under the Advocates Act, 1961, to regulate the legal profession and maintain professional standards. It prescribes duties of advocates toward courts, clients, opponents, and the profession through Part VI of the BCI Rules.

  • BCI Rule 3 (Duty to Court): An advocate shall not influence court decisions through illegal or improper means, including filing false or misleading documents.
  • Violations of BCI standards can lead to suspension or cancellation of the advocate's licence.
  • The Supreme Court issuing notice to the BCI signals that systemic guidance — potentially formal guidelines on AI use in legal practice — may follow.
  • Several international bar associations (UK, USA, Australia) have already issued formal guidelines requiring lawyers to verify AI-generated content before filing.

Connection to this news: The Supreme Court's direction to the BCI underscores that AI governance in legal practice is no longer a technical question but a regulatory one, with the potential for formal rules mandating human verification of AI-assisted filings.


Key Facts & Data

  • The Supreme Court has termed the submission of AI-generated fake citations a form of professional "misconduct" — not merely an error.
  • The Bengaluru bench of the Income Tax Appellate Tribunal recalled an order that cited three non-existent Supreme Court judgments and one non-existent Madras High Court ruling, all generated via ChatGPT.
  • Notice has been served to both the Attorney General and the Bar Council of India.
  • The Bar Council of India Rules (Part VI, Chapter II, Rule 3) prohibit advocates from influencing courts through improper means.
  • SUPACE (Supreme Court Portal for Assistance in Courts' Efficiency), the SC's own AI tool, is judge-facing and retrieval-based — not generative.
  • The e-Courts Mission Mode Project Phase III has a ₹7,210 crore outlay for court digitisation.
  • AI hallucination is categorised under AI governance as a critical risk domain in India's emerging AI regulatory framework (MeitY's IndiaAI Mission, 2024).