What Happened
- The IT Ministry notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, introducing binding obligations on intermediaries for handling AI-generated content
- An earlier proposal (October 2025) requiring AI labels to cover at least 10% of content space was diluted — the notified rules require "prominently" visible labels instead, following pushback from tech companies
- The takedown timeline for problematic content has been drastically reduced from 36 hours to 3 hours
- Platforms must embed permanent metadata or provenance markers into AI-generated content so its origin can be traced even when shared across platforms
- Routine editing, accessibility improvements, and good-faith educational or design work are excluded from the definition of regulated synthetic content
- The rules will come into force on 20 February 2026, the final day of the India-AI Impact Summit
- Intermediaries must deploy automated AI filters to block the upload of child abuse material and non-consensual deepfake intimate imagery (a conceptual sketch of such filtering follows this list)
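How such automated filtering works varies by platform, but the core mechanism is matching uploads against curated databases of known prohibited material. The sketch below is a conceptual illustration only, assuming a simple exact-hash blocklist; production systems rely on perceptual hashing (e.g. PhotoDNA) and ML classifiers, and the function and entries here are hypothetical.

```python
import hashlib

# Hypothetical blocklist of digests of known prohibited files. Real systems
# use perceptual hashes (e.g. PhotoDNA) so that re-encoded copies still
# match; exact SHA-256 matching is shown here only for simplicity.
BLOCKED_HASHES = {hashlib.sha256(b"known-bad-example").hexdigest()}

def should_block_upload(file_bytes: bytes) -> bool:
    """Return True if the upload matches an entry on the blocklist."""
    return hashlib.sha256(file_bytes).hexdigest() in BLOCKED_HASHES

print(should_block_upload(b"known-bad-example"))  # True  -> reject the upload
print(should_block_upload(b"harmless photo"))     # False -> allow the upload
```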
Static Topic Bridges
Intermediary Guidelines and Digital Media Ethics Code Rules, 2021 — Regulatory Framework
The IT Rules 2021, notified under Section 87 read with Section 79 of the IT Act 2000, established a comprehensive framework for digital intermediaries in India. They introduced the concept of "significant social media intermediaries" (SSMIs) — platforms with over 5 million registered users in India — which face enhanced obligations compared to ordinary intermediaries.
- Notified: 25 February 2021; replaced the IT (Intermediary Guidelines) Rules, 2011
- Part II: Due diligence by intermediaries — privacy policy, terms of service, grievance mechanism
- Part III: Additional due diligence for SSMIs — compliance officer, nodal contact person, grievance officer (all resident in India), monthly compliance reports, traceability of first originator (for encrypted messaging platforms)
- Grievance Appellate Committee (GAC): Established under 2022 amendment as a second tier of grievance redressal before courts
- Section 79 of IT Act: Safe harbour protection — conditional on compliance with these rules
Connection to this news: The 2026 amendment adds new obligations specifically for AI-generated content, layering them on top of the existing framework — platforms that fail to comply with the 3-hour takedown and labelling requirements risk losing their safe harbour protection under Section 79.
Artificial Intelligence Content Regulation — Global Approaches
Governments worldwide are grappling with regulating AI-generated content. The challenge lies in balancing innovation, free expression, and protection against misinformation and harmful synthetic media, while ensuring regulations are technically feasible for platforms to implement.
- EU AI Act (2024): Classifies AI systems by risk level — unacceptable, high, limited, and minimal risk; deepfakes classified as "limited risk" requiring transparency (disclosure that content is AI-generated)
- US approach: No federal AI content regulation; state-level laws in California (AB 2839 — election deepfakes, 2024) and Texas
- China: Deep Synthesis Provisions (2023) — require labelling and traceability of AI-generated content; platforms must verify user identities
- India's approach: No standalone AI legislation; regulation through IT Act and IT Rules amendments; proposed Digital India Act (to replace IT Act 2000) still pending
- OECD AI Principles (2019, updated 2024): Recommend transparency and responsible stewardship of trustworthy AI
Connection to this news: India's approach of regulating through amendments to existing IT Rules (rather than a standalone AI law) allows for faster implementation but may lack the comprehensive framework needed for rapidly evolving AI technologies.
Content Provenance and Watermarking — Technical Framework
Content provenance refers to the ability to trace the origin, history, and modifications of digital content. Watermarking and metadata embedding are technical mechanisms to establish provenance — ensuring that AI-generated content can be identified even after multiple shares across platforms.
- C2PA (Coalition for Content Provenance and Authenticity): Industry standard backed by Adobe, Microsoft, Intel, and BBC — uses cryptographic signatures to embed provenance data
- Digital watermarking: Marks embedded in the content signal itself, either visible overlays or imperceptible patterns — Google's SynthID and OpenAI's image watermarking are examples
- Metadata standards: EXIF data for images, IPTC standards for news media — can be stripped easily (demonstrated in the sketch after this list), hence cryptographic approaches are preferred
- Challenges: Watermarks can be removed or altered; metadata stripping is common on social media platforms; requires interoperability across platforms
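The stripping problem noted above is easy to demonstrate. The sketch below, assuming the Pillow imaging library and arbitrary file names, tags a JPEG with an EXIF field and shows that a plain re-save, of the kind platforms perform when re-encoding uploads, silently drops it.

```python
from PIL import Image

# Tag a test JPEG with an EXIF "Make" field (tag 0x010F).
exif = Image.Exif()
exif[0x010F] = "AI-Generator-Demo"
Image.new("RGB", (64, 64), "white").save("tagged.jpg", exif=exif)

# A plain re-save, as a platform might do when re-encoding an upload,
# does not carry the EXIF block across.
Image.open("tagged.jpg").save("reshared.jpg")

print(dict(Image.open("tagged.jpg").getexif()))    # {271: 'AI-Generator-Demo'}
print(dict(Image.open("reshared.jpg").getexif()))  # {} -- provenance is gone
```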
Connection to this news: The rules mandate permanent metadata or provenance markers that cannot be removed or suppressed — implementation will require platforms to adopt interoperable technical standards for content provenance.
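As a rough illustration of the cryptographic idea that standards like C2PA build on, the sketch below signs an assertion bound to the content's hash with an Ed25519 key, so any verifier holding the public key can confirm origin and detect tampering. This is a minimal sketch of the underlying concept, assuming the Python cryptography package; it is not the actual C2PA manifest format, and the generator identifier is hypothetical.

```python
import hashlib, json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_provenance_record(content: bytes, key: Ed25519PrivateKey) -> dict:
    """Sign a claim binding an AI-generation assertion to the content hash."""
    claim = json.dumps({
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": "example-ai-model",  # hypothetical identifier
        "ai_generated": True,
    }, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(claim)}

def verify_provenance(content: bytes, record: dict, public_key) -> bool:
    """Check the signature, then check the hash still matches the content."""
    try:
        public_key.verify(record["signature"], record["claim"])
    except InvalidSignature:
        return False
    claimed_hash = json.loads(record["claim"])["content_sha256"]
    return claimed_hash == hashlib.sha256(content).hexdigest()

key = Ed25519PrivateKey.generate()
image_bytes = b"synthetic image bytes"
record = make_provenance_record(image_bytes, key)
print(verify_provenance(image_bytes, record, key.public_key()))       # True
print(verify_provenance(b"tampered bytes", record, key.public_key())) # False
```

Because the signature is bound to the exact content bytes, any edit breaks verification; provenance standards such as C2PA therefore chain new signed manifests onto edited versions rather than relying on a single static signature.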
Key Facts & Data
- AI label requirement: Must be "prominently" visible; earlier 10% space requirement dropped after industry consultation
- Takedown timeline: 3 hours (down from 36 hours)
- Effective date: 20 February 2026
- SSMI threshold: 5 million registered users in India
- Prohibited categories for automated filtering: Child abuse material, non-consensual intimate imagery
- IT Rules 2021: First notified 25 February 2021; key amendments in 2022, 2023, and 2026
- EU AI Act: Adopted 2024; deepfakes classified as "limited risk" requiring transparency
- Scope exclusions: Routine editing, accessibility improvements, and good-faith educational use