What Happened
- The Ministry of Electronics and Information Technology (MeitY) notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
- The takedown window for harmful synthetic media (deepfakes) has been cut from 36 hours to 3 hours, forcing social media intermediaries to respond far faster
- Platforms must now label AI-generated content "prominently"; an earlier draft requirement that the label cover at least 10% of the content area was dropped after industry pushback
- Platforms are also required to embed permanent metadata or provenance markers into AI-generated content for traceability (a minimal sketch of metadata stamping follows this list)
- The amendments narrow the scope of obligations to content "likely to mislead users," reflecting a harm-based approach
- The changes come into force on 20 February 2026, coinciding with the India-AI Impact Summit
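One way a platform could meet a provenance-marker obligation is to stamp machine-readable metadata into media at generation time. The Python sketch below uses Pillow to write custom text chunks into a PNG; the field names `ai-generated` and `provenance` and the helper `stamp_provenance` are illustrative assumptions, not anything prescribed by the Rules, which production systems would more likely satisfy with a cryptographically signed standard such as C2PA.

```python
# Provenance-metadata sketch (requires: pip install Pillow).
# Field names are illustrative, not prescribed by the IT Rules.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def stamp_provenance(src: str, dst: str, generator: str) -> None:
    """Copy an image to a PNG, embedding AI-provenance text chunks."""
    img = Image.open(src)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")
    meta.add_text("provenance", f"generator={generator}")
    img.save(dst, format="PNG", pnginfo=meta)

# Verification: Image.open(dst).text returns the embedded chunks,
# e.g. {'ai-generated': 'true', 'provenance': 'generator=...'}
```

Plain text chunks like these are trivially stripped by re-encoding, which is why the amendment's language of "permanent" markers points toward signed provenance standards rather than bare metadata.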
Static Topic Bridges
Information Technology Act, 2000 — Safe Harbour Under Section 79
Section 79 of the IT Act, 2000 is India's "safe harbour" provision that protects intermediaries (social media platforms, e-commerce websites, internet service providers) from liability for third-party content hosted on their platforms. However, this immunity is conditional — intermediaries must follow due diligence requirements prescribed by the government, and they lose immunity if they fail to remove unlawful content after receiving a court order or government notification.
- Section 79(1): Intermediary not liable for third-party information, data, or communication
- Section 79(2): Conditions — intermediary must not initiate the transmission, select the receiver, or modify the information
- Section 79(3): Immunity lost upon receiving actual knowledge or government/court notification of unlawful content and failing to expeditiously remove it
- The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 prescribe the due diligence framework contemplated by Section 79(2)(c)
- Landmark case: in Shreya Singhal v. Union of India (2015), the Supreme Court struck down Section 66A and read down Section 79(3)(b), holding that loss of safe harbour requires failure to act on a court order or government notification, not a mere private complaint
Connection to this news: The 2026 amendments tighten the due diligence requirements under the IT Rules by mandating faster takedowns (3 hours) and AI content labelling, which intermediaries must comply with to retain their safe harbour protection under Section 79.
IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 — Evolution
The IT Rules 2021 replaced the earlier IT (Intermediary Guidelines) Rules, 2011 and introduced a comprehensive regulatory framework for digital intermediaries. They distinguish between "intermediaries" and "significant social media intermediaries" (SSMIs) — platforms with more than 5 million registered users in India, which face additional obligations.
- SSMIs must appoint a Chief Compliance Officer, Nodal Contact Person, and Resident Grievance Officer — all must be resident in India
- Three-tier grievance redressal: Level 1 (platform grievance officer), Level 2 (Grievance Appellate Committees, created by the October 2022 amendment), Level 3 (courts)
- Rule 3(1)(d): Intermediaries must remove unlawful content within 36 hours of receiving actual knowledge through a court order or a notification from the appropriate government agency
- April 2023 amendment: Government empowered itself to flag content about its own business as "fake or misleading" through a Fact Check Unit; the provision was challenged and struck down by the Bombay High Court in 2024 (Kunal Kamra v. Union of India)
- Late 2023: MeitY issued advisories on AI-generated deepfakes (November and December 2023), but these were non-binding
Connection to this news: The February 2026 amendment converts earlier advisories into binding rules, makes AI content labelling mandatory rather than advisory, and drastically tightens the takedown timeline from 36 to 3 hours.
Deepfakes — Technology, Threats, and Regulation
Deepfakes are synthetic media created using deep learning techniques (primarily Generative Adversarial Networks or GANs, and more recently diffusion models) that can generate realistic but fabricated images, audio, and video. They pose serious threats to individual privacy, democratic processes, and national security through misinformation, impersonation, and non-consensual intimate imagery.
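To make the adversarial training idea concrete, here is a minimal, illustrative PyTorch sketch of a GAN training loop. The toy dimensions, the uniform stand-in "real" data, and all hyperparameters are placeholder assumptions; real deepfake generators are far larger convolutional or diffusion architectures.

```python
# Toy GAN training loop (illustrative only; not a real deepfake model).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # placeholder sizes

# Generator maps random noise to a fake sample; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.rand(32, data_dim)    # stand-in for real media features
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # 1) Discriminator learns to label real samples 1 and fakes 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator learns to make the discriminator label its fakes 1.
    g_loss = bce(D(G(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two-player contest is the key point: every improvement in the discriminator pressures the generator to produce more convincing fakes, which is exactly what makes mature deepfakes hard to detect.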
- Technology: GANs (introduced by Ian Goodfellow, 2014) and diffusion models (Stable Diffusion, DALL-E, Midjourney) are the primary architectures
- Threats: Political misinformation (election manipulation), non-consensual intimate imagery, financial fraud (CEO impersonation), identity theft
- Global regulation: EU AI Act (2024) classifies deepfakes as "limited risk" requiring transparency obligations; US has state-level laws (e.g., California, Texas)
- India's approach: No standalone deepfake legislation; regulation through IT Act and IT Rules amendments
- Sections 66C (identity theft), 66D (cheating by personation using computer resource), and 66E (violation of privacy) of the IT Act are used against deepfake-related offences
Connection to this news: The 2026 amendment specifically targets deepfakes by mandating prominent AI labels, automated filters for prohibited content categories (child abuse material, non-consensual intimate imagery), and a 3-hour takedown window — making India's regulation among the most time-stringent globally.
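Automated filtering for known prohibited material is typically implemented as hash matching against a blocklist of previously identified content. The sketch below uses plain SHA-256 and a hypothetical `KNOWN_BAD_HASHES` set for illustration; deployed systems use perceptual hashes (e.g., PhotoDNA or PDQ) that survive re-encoding and cropping.

```python
# Hash-blocklist upload filter sketch (KNOWN_BAD_HASHES is hypothetical).
import hashlib

# In practice this set would be fed from an industry hash-sharing database.
KNOWN_BAD_HASHES: set[str] = set()

def sha256_of_file(path: str) -> str:
    """Stream a file through SHA-256 so large uploads stay memory-bounded."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def should_block(path: str) -> bool:
    """Reject an upload whose hash matches a known prohibited item."""
    return sha256_of_file(path) in KNOWN_BAD_HASHES
```

Exact cryptographic hashes break on any byte-level change, so the exact-match approach shown here only catches verbatim re-uploads; that trade-off is why perceptual hashing dominates in production filters.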
Key Facts & Data
- Takedown timeline: Reduced from 36 hours to 3 hours for harmful AI-generated content
- AI labelling: Must be "prominently" visible; earlier 10% space requirement dropped
- Effective date: 20 February 2026
- SSMI threshold: 5 million registered users in India
- Section 79 of IT Act: Safe harbour provision — conditional on due diligence compliance
- IT Rules 2021: First notified 25 February 2021; amended multiple times (2022, 2023, 2026)
- Prohibited categories requiring automated filtering: Child abuse material and non-consensual intimate imagery deepfakes