What Happened
- The government has proposed amendments to the Information Technology Rules linking compliance with IT Ministry advisories to retention of "safe harbour" protection for tech platforms.
- Under the draft rules, non-compliance with advisories or guidelines issued by the IT Ministry would be treated as a failure to meet the conditions for safe harbour — potentially exposing platforms such as Meta, Google, and X to direct legal liability for user-generated content.
- Previously, IT Ministry advisories on issues ranging from deepfake labelling to content takedown practices functioned as guidance without explicit legal consequences.
- Separately this year, the government has compressed the content takedown window for content flagged by authorities from 36 hours to three hours, and has introduced new obligations around AI-generated content and deepfakes.
- The consultation period for the draft amendments is open until April 14, 2026.
Static Topic Bridges
Section 79 of the IT Act, 2000 — Safe Harbour for Intermediaries
Section 79 of the Information Technology Act, 2000 provides the foundational "safe harbour" protection for intermediaries in India. Under this provision, an intermediary — defined broadly to include search engines, social media platforms, cloud providers, telecom networks, and online marketplaces — is not liable for any third-party information, data, or communication link made available or hosted by it, subject to certain conditions. Safe harbour protections are premised on the intermediary's role as a neutral conduit rather than a publisher or editor of content.
- Section 79 grants liability immunity for intermediaries who comply with prescribed conditions
- Conditions include: not initiating the transmission, not selecting the receiver, not modifying the content
- Intermediaries lose safe harbour if they have actual knowledge of unlawful content and fail to remove it promptly
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 operationalise Section 79 conditions
- Significant Social Media Intermediaries (SSMIs, platforms with 50 lakh, i.e. 5 million, or more registered users) have additional obligations: a Grievance Officer, a Chief Compliance Officer, a Nodal Contact Person, and monthly compliance reports
Connection to this news: The proposed amendment formally expands the conditions for retaining safe harbour to include compliance with government advisories — a significant tightening of the framework beyond what Section 79 currently mandates.
Intermediary Guidelines 2021 and the Regulation of Digital Platforms
The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, replaced the 2011 rules and introduced a three-tier grievance redressal structure for digital platforms. These rules imposed traceability obligations (requiring platforms to identify the first originator of messages upon court or government order), content takedown timelines, and mandatory appointment of compliance officers in India. The rules have been legally contested, with the traceability requirement for messaging apps drawing particular scrutiny on privacy grounds.
- 2021 Rules: Part I (preliminary and definitions), Part II (due diligence by intermediaries), Part III (code of ethics for digital news media and OTT platforms)
- Significant Social Media Intermediaries: must comply within 3 months of being notified
- Grievance Appellate Committee (GAC): a government body to hear appeals against platform content decisions, introduced via 2022 amendment
- Traceability: WhatsApp challenged this provision citing end-to-end encryption; case pending
- 2026 amendment: compresses takedown window to 3 hours; adds AI/deepfake content obligations
Connection to this news: The latest draft amendment continues the trajectory of the 2021 Rules — progressively tightening platform obligations and attaching safe harbour loss as the sanction for non-compliance.
AI-Generated Content, Deepfakes, and Platform Liability
The proposed rules arrive in the context of growing concern about AI-generated synthetic media, particularly deepfakes. The government has already issued advisories requiring platforms to label AI-generated content and to act on deepfake complaints. The new rules institutionalise these advisories as legally binding rather than discretionary. Globally, platform liability frameworks are in flux: the European Union's Digital Services Act (DSA) requires risk assessments and algorithmic transparency from very large online platforms; the United States retains broad Section 230 immunity. India's proposed approach represents a tighter, more state-directed model.
- Deepfake: synthetic media in which a person's likeness is replaced or manipulated using AI; can enable identity fraud and cause reputational harm
- IT Ministry's earlier advisory: platforms must detect and remove deepfakes within 24 hours of user report
- Section 66E IT Act: punishment for violation of privacy through image capture/publication (relevant to deepfake misuse)
- DSA (EU): imposes due diligence obligations on platforms proportionate to systemic risk
- India's model: compliance with executive advisories as a condition of safe harbour — distinct from the legislative/judicial model in the EU and US
Connection to this news: Elevating advisories on deepfake labelling and AI content to legally binding status is a central driver of the proposed amendment, making platforms directly liable if they ignore government guidance on AI-generated misinformation.
Key Facts & Data
- Proposed rule: non-compliance with IT Ministry advisories = failure of safe harbour conditions
- Current safe harbour basis: Section 79, IT Act 2000; operationalised by IT Rules 2021
- Content takedown window: reduced from 36 hours to 3 hours under 2026 rules
- Consultation period for draft amendments: open until April 14, 2026
- Platforms affected: Meta (Facebook, Instagram, WhatsApp), Google (YouTube, Search), X (Twitter)
- Significant Social Media Intermediary (SSMI) threshold: 50 lakh+ registered users in India
- New obligations: AI-generated content labelling, deepfake detection and removal