What Happened
- Doctors and public health experts have raised alarms about an accelerating trend of patients using AI tools — particularly large language models (LLMs) like ChatGPT — for medical diagnosis and self-prescription, bypassing qualified healthcare professionals.
- The WHO has cautioned that LLMs can generate responses that appear authoritative and plausible but may be medically inaccurate, biased, or misleading, posing risks to individual health, equity, and inclusiveness.
- A major Oxford University study (February 2026) — the largest user study of LLMs for medical decisions — found that AI chatbots present significant risks due to their tendency to provide inaccurate and inconsistent medical information; the study concluded that "AI just isn't ready to take on the role of the physician."
- Risks are compounded because training data for medical AI may carry demographic and geographic biases, leading to differential quality of advice across populations — a health equity concern.
- In the United States, regulatory softening is occurring: the FDA released relaxed guidance in January 2026 for clinical decision support (CDS) tools, potentially allowing AI diagnostic tools to reach clinics without full FDA vetting — a development public health experts view as premature.
- India-specific concerns include low health literacy, absence of a specific AI-in-health regulatory framework, and widespread reliance on informal healthcare channels in rural areas, all of which amplify risks of AI-driven self-medication.
Static Topic Bridges
Large Language Models (LLMs) in Healthcare: Capabilities and Limitations
Large Language Models (LLMs) are AI systems trained on vast text corpora to generate human-like responses to queries. In healthcare, they are being used experimentally for symptom checking, patient education, drug information queries, and clinical decision support. However, LLMs are probabilistic text generators — they produce statistically likely next words, not verified medical facts — making them inherently prone to "hallucinations" (confident but false statements).
- LLMs lack real-time access to patient medical records, physical examination data, or diagnostic test results — fundamentals of clinical diagnosis.
- Biases in training data: the medical literature LLMs learn from disproportionately represents Western demographics; performance on conditions prevalent in South Asia or Africa may be inferior.
- WHO guidance (2023): WHO issued a caution on LLMs in healthcare, urging careful validation before deployment and warning of risks to equity and patient safety.
- A 2026 meta-analysis comparing the diagnostic accuracy of LLMs with that of clinical professionals found LLMs inferior in complex, multi-symptom, and rare-disease scenarios.
- Ethical risks: privacy (symptom data shared with commercial AI services), over-reliance that delays professional care, and drug interactions that AI fails to flag.
Connection to this news: The core concern raised by doctors is that LLMs produce plausible-sounding medical advice that patients act upon without clinical verification; the hallucination failure mode has direct health consequences in this domain. The sketch below illustrates the mechanism behind it.
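To make the "probabilistic text generator" point concrete, here is a minimal Python sketch of next-token sampling. The prompt, tokens, and probabilities are invented toy values, not any real model's output; the point is only that generation ranks continuations by statistical likelihood, with no step that checks medical truth.

```python
import random

# Toy next-token distribution (invented numbers) for the prompt
# "For a persistent fever, take ...". A real LLM produces a similar
# distribution over its vocabulary at every step of generation.
next_token_probs = {
    "paracetamol": 0.55,     # frequent in training text, so highly probable
    "ibuprofen": 0.30,
    "antibiotics": 0.12,     # plausible-sounding but clinically inappropriate
    "a blood culture": 0.03,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one continuation in proportion to its learned probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# On these toy numbers, roughly 1 generation in 8 would confidently
# recommend antibiotics: nothing in the sampling step verifies the
# advice; it only ranks likely-sounding text.
print(sample_next_token(next_token_probs))
```

Techniques such as retrieval grounding can reduce, but not eliminate, this failure mode, which is why the clinical-verification concern persists even for newer models.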
Regulation of Digital Health and AI in India
India's digital health regulatory landscape is still evolving. The National Digital Health Mission (now Ayushman Bharat Digital Mission, ABDM), launched in 2021, creates a health data infrastructure (Health ID, ABHA number) but does not specifically regulate AI diagnostic tools. The Digital Personal Data Protection Act, 2023 governs personal data including health data, but sector-specific AI regulation in healthcare remains absent.
- Ayushman Bharat Digital Mission (ABDM): Established a Health Data Management Policy (2020) and is building a national health data exchange; however, it does not set standards for AI clinical tools.
- The Drugs and Cosmetics Act, 1940, and Medical Devices Rules, 2017, govern physical medical devices; AI software-as-a-medical-device (SaMD) regulation is unclear under current law.
- CDSCO published a discussion paper in 2021 on a regulatory framework for AI/ML-based medical devices; formal rules are still pending.
- India has no equivalent of the EU AI Act's provisions for high-risk AI in healthcare, which mandate conformity assessments and human oversight.
- National Medical Commission (NMC) guidelines require doctors to prescribe generics and maintain professional accountability, but AI-generated prescriptions fall outside this oversight.
Connection to this news: India's lack of a dedicated AI-in-health regulatory framework leaves a critical gap in which AI-driven self-prescription is growing unchecked. Regulatory and legislative action is needed.
Ethics of Technology in Healthcare: Equity and Access Concerns
The deployment of AI in healthcare raises foundational ethical questions beyond individual patient safety: algorithmic bias, data governance, accountability gaps, and widening of health disparities between technologically privileged and disadvantaged populations. The WHO's framework for AI ethics in health (2021) identifies transparency, inclusiveness, accountability, and human oversight as non-negotiable principles.
- Algorithmic bias: AI systems trained predominantly on data from high-income, urban, or specific ethnic groups perform poorly for underrepresented populations — compounding existing health inequities.
- The "digital divide" in healthcare AI: patients with health literacy, internet access, and English proficiency may benefit; marginalised communities may receive inferior AI outputs and lack the critical capacity to question them.
- GS4 angle: AI self-prescription challenges core medical ethics principles — beneficence (acting in patient's best interest), non-maleficence (do no harm), and autonomy (informed consent requires accurate information).
- International frameworks: EU AI Act (2024) classifies medical AI as high-risk, requiring transparency, human oversight, and conformity assessments.
Connection to this news: Beyond individual risk, the proliferation of unregulated medical AI deepens healthcare inequity; better-educated, urban users may cross-check AI advice while vulnerable populations act on it directly. The sketch below shows how such performance disparities can be measured.
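As a concrete illustration of the bias point, here is a minimal sketch of a per-group accuracy audit, the basic measurement behind equity checks of medical AI. The group labels and records are hypothetical, invented purely to show the computation.

```python
from collections import defaultdict

# Hypothetical evaluation records: (patient_group, model_answer_correct).
# Groups and outcomes are invented for illustration only.
records = [
    ("urban_english", True), ("urban_english", True), ("urban_english", False),
    ("rural_hindi", True), ("rural_hindi", False), ("rural_hindi", False),
]

def accuracy_by_group(rows):
    """Per-group accuracy: the core disparity metric in a bias audit."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, correct in rows:
        totals[group] += 1
        hits[group] += correct  # True counts as 1, False as 0
    return {group: hits[group] / totals[group] for group in totals}

# On this toy data the gap is roughly 0.67 vs 0.33; an audit would
# flag a disparity of this size before any clinical deployment.
print(accuracy_by_group(records))
```

Regulatory regimes such as the EU AI Act effectively require this kind of disaggregated evaluation for high-risk medical AI; the concern in this news item is that no comparable check applies to consumer chatbots.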
Self-Medication and Antibiotic Resistance: A Compounding Risk
Self-medication, the use of medicines without a professional prescription, is already a significant problem in India. India is one of the world's largest consumers of antibiotics, and antimicrobial resistance (AMR) is a declared public health emergency. AI-powered self-prescription risks amplifying these patterns, particularly inappropriate antibiotic use, and accelerating AMR.
- India accounts for the largest share of global antibiotic consumption; WHO has listed AMR as one of the top 10 global public health threats.
- Studies show over-the-counter antibiotic sales (without prescription) remain common in India despite legal restrictions.
- National Action Plan on Antimicrobial Resistance (NAP-AMR) 2017-2021 (extended) coordinates India's AMR response across human health, animal health, and environment.
- AI tools that suggest antibiotic regimens based on symptoms — without culture and sensitivity testing — could directly worsen AMR trajectories.
- Pharmacovigilance Programme of India (PvPI) under CDSCO monitors adverse drug reactions but has no mechanism to capture errors arising from AI-generated prescriptions.
Connection to this news: The doctors' warning is most urgent in the context of antibiotic overuse; AI-driven self-prescription of antibiotics without professional oversight has population-level consequences for drug resistance.
Key Facts & Data
- WHO caution: LLMs can generate authoritative-sounding, plausible responses that may nonetheless be medically inaccurate
- Oxford University study (Feb 2026): Largest user study of LLMs for medical decisions — found significant risks of inaccurate, inconsistent advice
- AI training data bias: Medical AI may reflect demographic biases of source literature, disadvantaging underrepresented populations
- India has no specific AI-in-healthcare regulatory framework (CDSCO's 2021 discussion paper has not yet been converted into formal rules)
- EU AI Act (2024): Classifies medical AI as high-risk, mandating human oversight
- India antibiotic consumption: World's largest; AMR is a declared public health emergency
- NAP-AMR: India's National Action Plan on Antimicrobial Resistance (2017, extended)
- Digital Personal Data Protection Act, 2023: Governs health data privacy but not AI diagnostic tools
- ABDM (Ayushman Bharat Digital Mission): Digital health infrastructure — does not regulate AI clinical tools