A tightening of the fist in India’s digital public square
What Happened
- The Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 on 10 February 2026, with the rules entering into force on 20 February 2026.
- The 2026 Amendment Rules significantly expand the state's regulatory reach over digital intermediaries (social media platforms, messaging apps, content-sharing services), raising concerns about free speech, privacy, and press freedom.
- Key provisions require intermediaries to label synthetically generated (AI/deepfake) content, mandate data retention for at least 180 days, and treat social media users who comment on news and current affairs on par with registered news publishers — effectively imposing publisher-level liability on ordinary users.
- Amnesty International, the Internet Freedom Foundation, leading journalist organisations, and human rights groups have called for the immediate withdrawal of the 2026 Amendment Rules, citing threats to free expression, press freedom, and the right to privacy.
Static Topic Bridges
The IT Act 2000 and Intermediary Liability: The Safe Harbour Framework
The legal architecture governing digital speech in India is built on the Information Technology Act, 2000, with intermediary liability being its central pillar.
- Section 79 of the IT Act, 2000: Provides "safe harbour" protection to intermediaries — they are not liable for third-party content hosted on their platforms, provided they act as neutral conduits and comply with due diligence requirements.
- Safe harbour is foundational to the internet economy: without it, platforms would be liable for every user post, making open platforms legally and commercially unviable.
- Section 79's protection is conditional — an intermediary loses safe harbour if it has "actual knowledge" of unlawful content and fails to remove it, or if it "conspires, abets, aids or induces" the unlawful act.
- The Supreme Court in Shreya Singhal v. Union of India (2015) struck down Section 66A of the IT Act (which criminalised "offensive" online speech) as unconstitutional — ruling that online speech enjoys the same Article 19(1)(a) protection as offline speech, and can only be restricted on the grounds listed in Article 19(2).
- Shreya Singhal also held that intermediaries are only required to remove content upon receiving a court order or government notification, not merely upon private complaints.
Connection to this news: The 2026 Amendment Rules create obligations that potentially erode the safe harbour — by making intermediaries' liability contingent on proactive content moderation (labelling, verification, compliance) rather than passive notification-based removal, they transform platforms from passive hosts to active editors, fundamentally altering the safe harbour calculus.
IT Rules 2021: The Foundation for Expanding Regulation
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 replaced the older Intermediary Guidelines Rules of 2011 and established the current regulatory framework.
- Introduced the category of Significant Social Media Intermediaries (SSMIs): platforms with 5 million+ registered users in India face enhanced due diligence obligations (appointing Chief Compliance Officer, Nodal Contact Person, Resident Grievance Officer — all based in India).
- Rule 4(2): SSMIs operating messaging platforms must enable "traceability" — i.e., identify the "first originator" of a message in India when demanded by authorities. This is structurally incompatible with end-to-end encryption (E2EE), raising concerns about privacy and the potential hollowing out of encrypted communication.
- Rule 3(1)(d): Requires intermediaries to act within 36 hours of receiving "actual knowledge" (a court order or government notification) to remove specified categories of content — critics argue this timeline forces automated over-removal without meaningful human review.
- The 2021 Rules also brought digital news media and OTT platforms under a three-tier grievance redressal mechanism with a Code of Ethics — a significant expansion of regulation beyond social media.
Connection to this news: The 2026 Amendment Rules build on the 2021 framework, extending its scope to AI-generated content and ordinary user commentary — with each successive amendment adding to the regulatory burden on platforms and the chilling effect on users.
The Fact Check Unit (FCU) Episode: A Cautionary Precedent
The 2023 IT Rules Amendment introduced a Fact Check Unit (FCU) — a government-designated body whose identification of content as "fake or misleading" would require intermediaries to remove it or face loss of safe harbour. This amendment became a significant free speech controversy.
- The FCU amendment (Rule 3(1)(b)(v), 2023) would have made the government the sole arbiter of online truth — intermediaries faced loss of Section 79 safe harbour if they did not remove content the FCU deemed false about government matters.
- The Bombay High Court struck down the FCU amendment on 20 September 2024 (2:1 majority), holding it violated Article 19(1)(a) and the principles of Shreya Singhal — the government cannot be the judge of its own case on questions of truth.
- The Supreme Court had already stayed the FCU's operationalisation pending judicial review.
- The FCU controversy illustrated the core tension: legitimate concerns about misinformation vs. the risk of executive overreach in determining what counts as misinformation.
Connection to this news: The 2026 Amendment Rules arrive after the FCU was struck down, but civil liberties groups argue the new rules achieve similar chilling effects through different mechanisms — AI content labelling obligations, data retention mandates, and publisher-level liability for ordinary users.
Article 19(1)(a) and Permissible Restrictions on Speech
The right to freedom of speech and expression under Article 19(1)(a) of the Constitution is not absolute — Article 19(2) permits the state to impose "reasonable restrictions" on eight specified grounds.
- The eight grounds under Article 19(2): security of the State, friendly relations with foreign States, public order, decency or morality, contempt of court, defamation, incitement to an offence, and the sovereignty and integrity of India.
- Any restriction must be (a) imposed by law, (b) related to one of the eight grounds, and (c) "reasonable" — proportionate to the objective and not excessive.
- The Supreme Court in S. Rangarajan v. P. Jagjivan Ram (1989) held that freedom of speech cannot be suppressed on the mere possibility of misuse; there must be a proximate causal connection between the speech and the harm.
- Brij Bhushan v. State of Delhi (1950): Struck down a pre-censorship order against a journal, establishing that prior restraint on publication carries a strong presumption of unconstitutionality; its companion case, Romesh Thapar v. State of Madras (1950), extended Article 19(1)(a) to the freedom of circulation.
- Digital regulation critics argue that requirements like proactive content labelling and 36-hour removal windows operate as a form of prior restraint — compelling platforms to self-censor in advance rather than respond to specific harms.
Connection to this news: The core constitutional question raised by the 2026 IT Rules is whether mandatory AI-content labelling, data retention, and publisher-level liability for users meet the "reasonable restriction" test under Article 19(2) — or whether they impose disproportionate burdens that effectively chill protected speech.
AI-Generated Content and the New Regulatory Frontier
The 2026 Amendment Rules introduced obligations specifically targeting AI-generated or "synthetically generated" content — a response to the global phenomenon of deepfakes and AI-generated misinformation.
- Rule 3(3) (2026): Intermediaries must proactively identify, verify, and label synthetically generated content — this goes beyond passive removal to active editorial intervention.
- Critics argue the technical burden of proactive AI content detection is enormous and will inevitably result in over-labelling of genuine content.
- The Internet Freedom Foundation characterised the rules as "digital authoritarianism" — expanding executive control over online speech under the guise of AI safety.
- International precedent: The European Union's AI Act (2024) and Digital Services Act (2022) also regulate AI-generated content and recommender systems, but with independent oversight bodies rather than direct executive control.
- India's approach differs by making government the primary enforcement authority rather than an independent regulator — a structural difference with significant implications for press freedom and opposition speech.
Connection to this news: The compelled labelling of AI content, while a legitimate regulatory concern globally, raises particular concerns in India because the enforcement mechanism places discretion in the executive rather than an independent body, creating potential for selective application.
Key Facts & Data
- IT Act, 2000: Section 79 provides safe harbour protection to intermediaries.
- IT Rules, 2021: Introduced SSMIs (5 million+ user threshold), traceability, 36-hour removal timeline, three-tier grievance redressal.
- IT Amendment Rules, 2023: Introduced Fact Check Unit (FCU) — struck down by Bombay HC (September 2024).
- IT Amendment Rules, 2026: Notified 10 February 2026; effective 20 February 2026. Introduces AI content labelling, 180-day data retention, publisher liability for commentators.
- Shreya Singhal v. Union of India (2015): Struck down Section 66A; established online speech protected under Article 19(1)(a); intermediaries remove content only on court order or government notification, not private complaints.
- Article 19(1)(a): Right to freedom of speech and expression.
- Article 19(2): Permissible grounds for restriction — 8 specified grounds; restrictions must be reasonable and proportionate.
- Significant Social Media Intermediaries (SSMIs): Platforms with 5 million+ registered users in India — includes Twitter/X, Meta (Facebook/Instagram/WhatsApp), YouTube, Telegram, and Snapchat.
- MeitY: Ministry of Electronics and Information Technology — nodal ministry for IT Act and IT Rules.
- Internet Freedom Foundation (IFF), Access Now, Amnesty International, Human Rights Watch, and leading press bodies have all called for withdrawal or substantial revision of the 2026 rules.
- India ranked 151 out of 180 countries in the Reporters Without Borders World Press Freedom Index 2025.