What Happened
- At India's AI Impact Summit 2026, a leading AI policy expert argued that the global AI governance debate has matured, shifting from the earlier question of "should we use AI?" to the sharper question of "who is accountable when AI causes harm?", and that this progression represents meaningful policy progress.
- India unveiled its AI Governance Guidelines alongside the summit, a framework built on seven guiding principles (sutras): Trust as Foundation, People First, Innovation over Restraint, Fairness and Equity, Accountability, Understandable by Design, and Safety, Resilience and Sustainability.
- Crucially, India's guidelines do not envisage a standalone AI law in the medium term, taking the position that existing sectoral legislation is adequate and that premature hard regulation risks stifling innovation.
- New MeitY rules effective from 14 February 2026 require social media platforms to remove flagged AI-generated content (deepfakes, misinformation) within three hours — one of the most operationally specific AI accountability measures yet implemented in India.
- The expert's central argument: The institutionalisation of the "accountability" question — through regulatory frameworks, liability mechanisms, and audit requirements — is a structural step forward regardless of whether specific rules are perfect.
Static Topic Bridges
Algorithmic Accountability and AI Liability: Conceptual Framework
Algorithmic accountability refers to the requirement that AI systems and their developers and deployers be answerable for the outcomes those systems produce. It encompasses several dimensions: transparency (is the system and its decision-making process disclosed and documented?), explainability (can a user understand why a specific decision was made?), contestability (can affected individuals challenge AI decisions?), and liability (who bears legal responsibility for AI-caused harm?). Under the EU AI Act, high-risk AI systems, meaning those affecting health, employment, education, credit, or public safety, are subject to mandatory conformity assessments, human oversight requirements, and documentation obligations; a minimal illustrative sketch of this risk tiering follows the list below.
- EU AI Act (2024): First comprehensive, binding AI law globally; risk-based tiering
- Unacceptable risk: AI social scoring, real-time facial recognition in public — prohibited
- High risk: Biometrics, critical infrastructure, education, employment, public services — strict rules
- Limited/minimal risk: Chatbots, spam filters — transparency obligations only
- India's approach: No standalone AI law (medium term); relies on IT Act 2000 + Digital Personal Data Protection Act 2023 + sectoral regulators
- MeitY's 3-hour deepfake takedown rule (Feb 2026): Operationalises platform accountability for AI-generated content
- Concept of "meaningful human control": Requirement that AI decisions affecting individuals retain a human review mechanism
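The risk tiering above can be read as a simple lookup from use case to obligation level. Below is a purely illustrative sketch in Python; the category labels and the classify() helper are simplifying assumptions made for this note, not terms from the Act's legal text.

```python
# Illustrative only: a toy mapping of use cases to the EU AI Act's risk tiers.
# Labels and examples are simplified assumptions, not the Act's legal wording.
RISK_TIERS = {
    "unacceptable": {"social scoring", "real-time public facial recognition"},
    "high": {"biometrics", "critical infrastructure", "education",
             "employment", "public services"},
    "limited": {"chatbot", "spam filter"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; anything unlisted defaults to 'minimal'."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "minimal"

print(classify("employment"))   # -> high: conformity assessment, human oversight
print(classify("spam filter"))  # -> limited: transparency obligations only
print(classify("video game"))   # -> minimal: no additional obligations
```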
Connection to this news: The expert's argument that moving to the accountability question is "progress" reflects the transition from philosophical debate to institutional design — which is where UPSC Mains questions on AI governance are increasingly directed.
India's Digital Governance Architecture and AI Regulation
India's approach to AI regulation operates within its existing digital governance architecture. The Information Technology Act, 2000 (amended 2008) regulates intermediaries and cybercrime. The Digital Personal Data Protection (DPDP) Act, 2023 establishes data principal rights and data fiduciary obligations — directly relevant to AI systems that process personal data. The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 govern social media platforms and were the basis for the 3-hour AI content removal rule. SEBI, IRDAI, and RBI are developing sector-specific AI use guidelines for financial services.
- IT Act 2000: Foundational digital law; Section 66A struck down (2015, Shreya Singhal case); Section 69A upheld
- DPDP Act 2023: Notified August 2023; Data Fiduciary obligations, Data Principal rights, Data Protection Board
- IT Rules 2021 (Intermediary Guidelines): Safe harbour for platforms conditional on compliance; due diligence obligations
- AI Safety Institute (India): Proposed institutional mechanism for AI risk evaluation — analogous to UK AI Safety Institute (2023)
- AI Governance Group: Proposed under India's AI Governance Guidelines for cross-ministerial coordination
- Personal Data Protection vs AI governance: Tension between data-hungry AI training and privacy rights
Connection to this news: India's layered regulatory approach — using existing laws rather than a new AI Act — means that the accountability framework is embedded across multiple instruments, making the system harder to navigate but potentially more adaptive.
Deepfakes and AI-Generated Content: Regulatory Challenges
Deepfakes — AI-generated or manipulated audio-visual content that falsely depicts real individuals — represent one of the most immediate harms of generative AI. They raise issues of defamation, electoral manipulation, non-consensual intimate imagery, and financial fraud. India's MeitY took a significant step in February 2026 by requiring platforms to remove deepfake content within 3 hours of reporting — one of the strictest timelines globally. This builds on earlier IT Rules 2021 guidance that platforms must proactively detect and remove morphed images of identifiable individuals.
- Deepfake: AI-generated synthetic media where a person's likeness is superimposed or manipulated
- India's 3-hour removal rule (Feb 2026): Among the world's fastest mandatory AI content takedown timelines
- EU AI Act: Classifies real-time remote biometric identification in public spaces as unacceptable risk (banned, with narrow law-enforcement exceptions)
- Challenges for platforms: Detection of deepfakes at scale requires counter-AI tools; false positive risk
- Electoral deepfakes: Election Commission of India's MCC provisions require factual accuracy; deepfakes a grey zone
- Watermarking mandate: Some jurisdictions require AI-generated content to carry digital watermarks or provenance labels (e.g., the C2PA content-credentials standard)
Connection to this news: The 3-hour deepfake rule is a concrete instantiation of "accountability" in action — it assigns responsibility to platforms, sets a verifiable metric, and creates legal liability for non-compliance, embodying the shift the expert describes.
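Operationally, the 3-hour rule reduces to a timestamp check for each flagged item. A minimal sketch follows, assuming a hypothetical FlaggedContent record and field names; the only figure taken from the rule as reported is the 3-hour window.

```python
# Illustrative only: the FlaggedContent record and its fields are hypothetical;
# the 3-hour window is the figure reported for the MeitY rule.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

TAKEDOWN_WINDOW = timedelta(hours=3)

@dataclass
class FlaggedContent:
    content_id: str
    flagged_at: datetime            # when the platform received the report
    removed_at: Optional[datetime]  # None if the content is still live

    def deadline(self) -> datetime:
        return self.flagged_at + TAKEDOWN_WINDOW

    def is_compliant(self, now: datetime) -> bool:
        """True if removed before the deadline, or if the deadline has not yet passed."""
        if self.removed_at is not None:
            return self.removed_at <= self.deadline()
        return now <= self.deadline()

item = FlaggedContent("df-001",
                      flagged_at=datetime(2026, 2, 14, 10, 0),
                      removed_at=datetime(2026, 2, 14, 12, 30))
print(item.is_compliant(now=datetime(2026, 2, 14, 13, 0)))  # True: removed in 2.5 hours
```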
Key Facts & Data
- India's AI Governance Guidelines: Seven sutras including Accountability, Innovation over Restraint, People First
- India's AI regulatory stance: No standalone AI law in medium term; sectoral approach
- MeitY deepfake rule: Remove AI-generated flagged content within 3 hours (effective 14 February 2026)
- EU AI Act: First comprehensive, binding AI law globally (2024); risk-based classification
- IT Act 2000 + DPDP Act 2023 + IT Rules 2021: India's current AI regulatory foundation
- Proposed institutions: AI Safety Institute (India), AI Governance Group
- OECD AI Principles (2019): First intergovernmental AI standard, adopted by 42 countries; India endorsed the G20 AI Principles drawn from them
- UNESCO Recommendation on AI Ethics (2021): 193 member states; covers bias, privacy, explainability
- India AI Impact Summit Declaration: Principles of safe, inclusive, equitable AI