What Happened
- The Supreme Court of India voiced serious concern over the growing practice of citing AI-generated judgments that do not exist in any real legal database, calling it an emerging "menace" to the judiciary.
- A case before a High Court involved submissions generated using ChatGPT, including a "judgment" with no citation in the real world; the court noted this in its order and flagged it to the Supreme Court.
- The Supreme Court issued notice to the Attorney General and the Bar Council of India, categorising the use of fabricated AI-generated precedents as professional misconduct with potential legal consequences.
- Justices hearing the matter stressed that while AI-assisted research is permissible, advocates bear absolute and non-delegable responsibility for verifying every citation before submission.
- For the first time in India, the apex court's position elevates AI hallucination in legal practice from a technical oversight to a disciplinary matter.
Static Topic Bridges
Generative AI: How LLMs Work and Why They Hallucinate
Large Language Models (LLMs) are a class of artificial intelligence trained on massive text corpora using the transformer deep-learning architecture. They predict the most probable next token in a sequence, which lets them generate coherent, stylistically convincing text, but they have no mechanism for verifying whether that text is factually true, let alone legally accurate. Asked for a case-law citation, an LLM may splice together fragments from real cases in its training data (real party names, realistic-sounding dates, plausible court identifiers) into a hallucinated but believable citation.
- Hallucination is an inherent limitation of the current generation of LLMs, not a deliberate design flaw.
- Retrieval-Augmented Generation (RAG) is a technique that connects LLMs to live databases to reduce (but not eliminate) hallucination.
- General-purpose chatbots (ChatGPT, Gemini, etc.) lack real-time access to Indian legal databases such as SCC Online, Manupatra, or the Supreme Court's eSCR portal.
- Several international cases of AI hallucination in legal filings have been documented, including a US federal case (Mata v. Avianca, 2023) where a lawyer was sanctioned after submitting six non-existent cases generated by ChatGPT.
- India's Supreme Court operates SUPACE — a judge-facing AI tool that retrieves documents rather than generating new text — which is distinct from commercial chatbots.
Connection to this news: The Supreme Court's concern directly addresses the gap between how LLMs work (statistical pattern generation) and what legal practice demands (verified, citable precedents) — placing the verification burden squarely on the advocate.
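The "statistical pattern generation" described above can be illustrated with a toy next-token model. The sketch below trains a bigram predictor on a tiny, entirely invented corpus of citation-like text (all party names, years, and reporter references are hypothetical, chosen only for illustration); sampling from it produces output that is citation-shaped and fluent, yet may splice fragments of different "cases" into a combination that exists nowhere. Real LLMs are vastly larger, but the core mechanism — choosing a likely next token with no fact-check — is the same.

```python
import random
from collections import defaultdict

# A tiny, invented corpus of citation-like text (all names hypothetical).
corpus = (
    "Sharma v Union of India 2021 3 SCC 145 . "
    "Verma v State of Maharashtra 2019 7 SCC 289 . "
    "Sharma v State of Maharashtra 2023 12 SCC 401 ."
).split()

# Build bigram statistics: which token follows which, and how often.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(0)  # deterministic sampling for the example

def generate(start, max_tokens=10):
    # Repeatedly sample a statistically likely next token.
    # Fluency emerges; factual grounding never does.
    out = [start]
    for _ in range(max_tokens):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# The result reads like a citation but was never checked against any database.
print(generate("Sharma"))
```

The output will always begin plausibly ("Sharma v ..."), but which party, year, and reporter fragment get stitched together depends only on learned co-occurrence, which is precisely why a hallucinated citation can look authentic.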
India's Regulatory Framework for AI Governance
India's approach to AI regulation is currently framework-based rather than statute-based. The IndiaAI Mission (approved 2024, ₹10,372 crore outlay) promotes responsible AI development. The Ministry of Electronics and Information Technology (MEITY) has issued advisory guidelines on AI use, including a March 2024 advisory requiring platforms to seek government approval before deploying under-tested AI models, citing risks to processes such as elections.
- India does not yet have a dedicated AI Act (unlike the EU's AI Act, 2024, which classifies AI in judicial processes as "high risk").
- The Digital Personal Data Protection Act, 2023 (DPDPA) regulates data used to train AI but does not specifically address AI-generated misinformation.
- CERT-In (under MEITY) is mandated to address cybersecurity dimensions of AI misuse, including deepfakes and AI-generated disinformation.
- The Parliamentary Standing Committee on Communications and Information Technology has called for an AI regulation framework to address judicial and electoral risks.
- The Supreme Court's intervention could catalyse MEITY or the BCI to issue formal, binding guidelines on AI use in legal practice.
Connection to this news: The absence of a specific AI governance statute in India means the judiciary itself — through notices to the Bar Council and Attorney General — is now filling a regulatory vacuum, making this case a landmark in India's evolving AI policy landscape.
The e-Courts Mission and Judicial Technology in India
India's e-Courts Mission Mode Project, launched under the National e-Governance Plan, seeks to digitise the entire court system from the Supreme Court to subordinate courts. Phase III (approved 2023) allocates ₹7,210 crore for digital infrastructure, AI-assisted case management, and e-filing.
- The Supreme Court's SUVAS tool translates judgments from English into several scheduled Indian languages, making legal knowledge more accessible.
- eSCR (electronic Supreme Court Reports) is a free, searchable database of Supreme Court judgments — the authoritative source against which AI-generated citations should be cross-checked.
- The National Judicial Data Grid (NJDG) provides real-time data on case pendency across all courts.
- Despite these initiatives, India's courts face a pendency crisis with over 5 crore cases pending across all levels (as of 2025).
- Responsible AI integration (retrieval-based tools vs. generative chatbots) is a key governance challenge for the judiciary.
Connection to this news: The e-Courts Mission's emphasis on verified, structured legal data underscores the institutional risk posed by lawyers bypassing authoritative sources in favour of unverified AI-generated content.
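The verification discipline the court demands — every citation checked against an authoritative source before filing — can be sketched in a few lines. The index below is a stand-in containing invented entries; in practice counsel would search eSCR, SCC Online, or Manupatra for each cited case rather than a local set.

```python
# Hypothetical stand-in for an authoritative citation index (e.g. eSCR).
# Entries are invented for illustration.
VERIFIED_INDEX = {
    "(2019) 7 SCC 289",
    "(2021) 3 SCC 145",
}

def verify_citations(citations):
    """Partition draft citations into verified and unverified lists."""
    verified = [c for c in citations if c in VERIFIED_INDEX]
    unverified = [c for c in citations if c not in VERIFIED_INDEX]
    return verified, unverified

# A draft with one genuine-looking entry and one fabricated one.
draft = ["(2021) 3 SCC 145", "(2023) 99 SCC 999"]
ok, suspect = verify_citations(draft)
print("verified:", ok)              # → verified: ['(2021) 3 SCC 145']
print("flag for review:", suspect)  # → flag for review: ['(2023) 99 SCC 999']
```

The design point is that verification is a lookup against a trusted corpus (the retrieval-based model embodied by SUPACE and eSCR), not a judgment about whether text "sounds right" — exactly the step a generative chatbot omits.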
Key Facts & Data
- The case before the High Court involved submissions entirely generated by ChatGPT, including a judgment with no real-world citation.
- The Supreme Court has classified AI-generated fake precedent citations as professional misconduct, not merely a procedural error.
- Notice issued to: Attorney General of India and Bar Council of India.
- Mata v. Avianca (USA, 2023): A landmark international precedent where US lawyers were sanctioned for submitting six ChatGPT-generated non-existent cases.
- India's IndiaAI Mission (2024): ₹10,372 crore outlay for responsible AI development and governance.
- SUPACE (Supreme Court AI tool): Judge-facing, retrieval-based — does not generate legal content autonomously.
- EU AI Act (2024): Classifies AI systems used in judicial proceedings as "high risk," requiring human oversight and explainability.
- Bar Council of India Rules, Part VI, Chapter II, Rule 3: Advocates' absolute duty of honesty to the court.