What Happened
- The government notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, bringing AI-generated and synthetic content under a formal regulatory framework
- Social media platforms such as X, Instagram, and YouTube must now take down unlawful content within three hours of receiving a takedown order from a competent authority or court — reduced from the earlier 36-hour window
- All AI-generated and synthetic content must be clearly and prominently labelled so users can identify it as artificial
- Platforms must embed permanent metadata or provenance markers in synthetic content where technically feasible, enabling traceability across platforms; a minimal embedding sketch follows this list
- The amended rules formally define "synthetically generated information" for the first time and will be legally enforceable from 20 February 2026
- Failure to comply with the labelling requirements or takedown timelines results in loss of safe harbour protection under Section 79 of the IT Act
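What an embedded provenance marker looks like in practice is left open: the amended Rules mandate the outcome, not a format. As a minimal sketch, assuming a PNG output and Pillow's standard text-chunk support, a generation pipeline might attach labelling metadata as below; the field names (ai_generated, generator) and file paths are hypothetical, not drawn from the Rules.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical labelling fields; the amended Rules prescribe no schema.
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-diffusion-v1")  # illustrative value
meta.add_text("created", "2026-02-20T00:00:00Z")    # illustrative value

# Re-save the synthetic image with the labelling metadata attached.
img = Image.open("synthetic_output.png")            # hypothetical path
img.save("synthetic_output_labelled.png", pnginfo=meta)
```

Plain text chunks like these are trivially stripped by re-encoding, which is why the Rules hedge with "where technically feasible" and why industry efforts such as C2PA (see below) bind provenance to content through cryptographically signed manifests rather than loose metadata.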
Static Topic Bridges
Section 79 of the IT Act, 2000 — Intermediary Safe Harbour
Section 79 provides conditional legal immunity to intermediaries (social media platforms, e-commerce sites, internet service providers) from liability for third-party content hosted on their platforms. This safe harbour is the foundational legal protection that enables platforms to operate at scale.
- Section 79(1): An intermediary shall not be liable for any third-party information, data, or communication link made available or hosted by it
- Section 79(2): Conditions for safe harbour — the intermediary must not initiate the transmission, select the receiver, or modify the content; and must observe due diligence prescribed by the Central Government
- Section 79(3): Safe harbour is lost if the intermediary fails to remove unlawful content upon receiving "actual knowledge" (interpreted by the Supreme Court in Shreya Singhal v. Union of India (2015) to mean a court order or government notification)
- The IT (Intermediary Guidelines) Rules, 2021 prescribe the due diligence requirements under Section 79(2)(c)
- Significant Social Media Intermediaries (SSMIs): Platforms with 50 lakh+ registered users must appoint a Chief Compliance Officer, Nodal Contact Person, and Grievance Officer — all resident in India
Connection to this news: Under the 2026 amendment, platforms that fail to label AI content or miss the 3-hour takedown window lose safe harbour protection under Section 79, making them liable as if they were the publishers of the unlawful content.
Deepfake Technology and AI-Generated Synthetic Media
Deepfakes are synthetic media created using artificial intelligence techniques, primarily deep learning and generative adversarial networks (GANs), to produce realistic but fabricated audio, video, or images of real people. The technology has advanced rapidly, making detection increasingly difficult.
- Technical basis: Deepfakes typically use autoencoders or GANs to learn facial features and generate realistic face-swaps or voice clones
- Generative AI: Large language models (LLMs) and diffusion models (e.g., Stable Diffusion, DALL-E, Midjourney) can generate text, images, and video that are difficult to distinguish from human-created content
- Threats: Non-consensual intimate imagery, election misinformation, financial fraud (voice-cloned CEO scams), identity theft, and social manipulation
- Detection challenges: As generation improves, detection accuracy declines; current methods include analysis of facial inconsistencies, blink patterns, and metadata examination
- C2PA (Coalition for Content Provenance and Authenticity): Industry standard for embedding provenance metadata in digital content, supported by Adobe, Microsoft, Google, and others; a minimal inspection sketch follows this subsection
- Indian context: Several high-profile deepfake incidents in India in 2023-24 prompted calls for regulation; the IT Minister had warned platforms about the need for self-regulation
Connection to this news: The mandatory labelling and metadata embedding requirements in the 2026 amendment directly address the deepfake threat by ensuring traceability and user awareness of synthetic content.
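On the inspection side, a first-pass check for such markers can be as simple as reading back whatever metadata an image carries. The sketch below uses Pillow; the helper name provenance_hints is hypothetical, and genuine C2PA verification instead validates cryptographically signed manifests with dedicated tooling rather than reading loose metadata.

```python
from PIL import Image

def provenance_hints(path: str) -> dict:
    """Collect text metadata that might indicate synthetic origin.

    Heuristic only: the absence of markers proves nothing, since plain
    metadata rarely survives screenshots, crops, or re-encoding.
    """
    img = Image.open(path)
    # PNG tEXt/iTXt chunks, where a generator may have written labels.
    hints = dict(getattr(img, "text", {}) or {})
    # EXIF tag 0x0131 is the standard "Software" field.
    software = img.getexif().get(0x0131)
    if software:
        hints["exif_software"] = software
    return hints

print(provenance_hints("synthetic_output_labelled.png"))  # hypothetical path
```

Metadata inspection is only the cheap first layer that the labelling mandate makes meaningful; detection proper (facial inconsistencies, blink patterns) requires trained models.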
Evolution of IT Intermediary Guidelines in India (2011-2026)
India's regulation of digital intermediaries has evolved through multiple iterations, reflecting the growing complexity of the digital ecosystem and emerging threats.
- IT (Intermediary Guidelines) Rules, 2011: First set of due diligence rules under Section 79; required takedown within 36 hours of notice
- IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: Major overhaul that introduced the SSMI classification, 24-hour complaint acknowledgement, 15-day grievance resolution, first-originator traceability (Rule 4(2)), and a Digital Media Ethics Code for news publishers and OTT platforms
- 2023 Amendments: Introduced the concept of a government-appointed fact-check unit; added obligations regarding online gaming and misinformation; mandated "reasonable efforts" to prevent prohibited content
- 2026 Amendments: Define synthetic content; mandate 3-hour takedown (reduced from 36 hours); require AI content labelling and metadata embedding; platforms lose safe harbour on non-compliance
- Shreya Singhal v. Union of India (2015): Supreme Court struck down Section 66A (criminalising offensive online speech) and read down Section 79 — "actual knowledge" requires a court order, not mere private complaint
- Proposed Digital India Act: Intended to replace the IT Act, 2000; Bill expected to address AI regulation, platform accountability, and data governance comprehensively
Connection to this news: The 2026 amendment represents the most significant expansion of intermediary obligations since the 2021 Rules, specifically targeting the AI-generated content gap that existing regulations did not address.
Key Facts & Data
- Amended rules notified: 10 February 2026; enforceable from: 20 February 2026
- Takedown timeline: Reduced from 36 hours to 3 hours for content flagged by courts or competent authorities
- SSMI threshold: 50 lakh (5 million) registered users in India
- First formal definition of "synthetically generated information" in Indian law
- Platforms affected: All intermediaries including social media (X, Instagram, YouTube, Facebook), messaging platforms, and content-sharing services
- Consequence of non-compliance: Loss of Section 79 safe harbour — platform becomes liable as publisher
- Shreya Singhal v. Union of India (2015): Struck down Section 66A; interpreted "actual knowledge" under Section 79 as requiring court/government order