
Govt refuses to dilute AI content rules; meeting attended by Google, Meta ends in 30 mins with a firm no


What Happened

  • The Ministry of Electronics and Information Technology (MeitY) convened a closed-door meeting with major technology platforms including Google and Meta to discuss the newly amended IT Rules on AI-generated content and deepfakes.
  • The meeting lasted barely 30 minutes — MeitY Secretary S. Krishnan firmly declined all requests to dilute or delay the notified provisions.
  • Tech companies had raised concerns primarily about the 3-hour takedown mandate for flagged unlawful content, calling it operationally infeasible at scale.
  • MeitY made clear that no amendments or compliance timeline extensions are under consideration — the rules stand as notified.
  • The new rules were issued as an amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

Static Topic Bridges

IT Rules 2021 and the Intermediary Guidelines Framework

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 were framed under Section 87 of the Information Technology Act, 2000, which empowers the central government to make rules for implementing the Act. The 2021 Rules created a tiered compliance framework distinguishing between ordinary social media intermediaries and "Significant Social Media Intermediaries" (SSMIs) — defined as platforms with over 50 lakh registered users in India.

  • Framed under Section 87 of the IT Act, 2000 (rule-making power of the Central Government)
  • Safe harbor protection for intermediaries under Section 79 of the IT Act — intermediaries are not liable for third-party content if they observe due diligence
  • Failure to comply with due diligence obligations removes safe harbor under Section 79(1), exposing platforms to direct liability
  • Rule 3(1)(b) requires intermediaries to inform users not to post misinformation, impersonate others, or upload obscene material
  • Rule 4(2) requires SSMIs providing services primarily in the nature of messaging to enable identification of the "first originator" of information, pursuant to a judicial order or an order passed under Section 69 (interception/decryption powers)

Connection to this news: The 2026 amendments to these Rules represent the most significant expansion of intermediary obligations since the 2021 framework, specifically targeting AI-generated content and deepfakes.

2026 IT Rules Amendment — Deepfake and AI Content Regulation

The February 2026 amendment introduces comprehensive obligations for platforms that enable creation of AI-generated or "synthetically generated" content. "Synthetically generated information" is defined as content artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that makes it reasonably appear authentic or true.

  • 3-hour takedown mandate: Unlawful content (especially non-consensual intimate imagery or deepfakes of real individuals) must be removed within 3 hours of a complaint (reduced from the earlier 36-hour window)
  • Mandatory labeling: AI-generated content must carry a prominent, permanently visible label covering at least 10% of the screen area for visual content; for audio content, a disclosure must cover at least 10% of the duration
  • Metadata embedding: Platforms must embed permanent, unique identifiers or metadata in AI-generated content to ensure traceability
  • Rule 3(3): Intermediaries enabling AI content creation must deploy technical measures to prevent creation of child sexual exploitative material, non-consensual intimate imagery, content falsely depicting real individuals in deceptive ways, and other unlawful categories

Connection to this news: These are precisely the provisions tech platforms sought to dilute at the MeitY meeting — particularly the 3-hour window, which they argued is technically unworkable given content volume at scale.
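Purely as an illustration of the two quantitative thresholds above, the compliance parameters can be sketched in code. The constant names and functions below are hypothetical, not drawn from the rules' text:

```python
from datetime import datetime, timedelta

# Illustrative constants based on the amendment as summarised above;
# the precise legal text may differ.
TAKEDOWN_WINDOW = timedelta(hours=3)   # reduced from the earlier 36 hours
MIN_LABEL_COVERAGE = 0.10              # 10% of screen area / audio duration

def takedown_deadline(complaint_received_at: datetime) -> datetime:
    """Latest time by which flagged unlawful content must be removed."""
    return complaint_received_at + TAKEDOWN_WINDOW

def label_is_compliant(label_area: float, total_area: float) -> bool:
    """Check whether an AI-content label covers at least 10% of the visual area."""
    return label_area / total_area >= MIN_LABEL_COVERAGE

# A complaint logged at 09:00 must be actioned by 12:00 the same day.
received = datetime(2026, 2, 10, 9, 0)
print(takedown_deadline(received))   # 2026-02-10 12:00:00
print(label_is_compliant(120, 1000)) # True (12% coverage)
```

The sketch makes the platforms' objection concrete: at scale, every complaint starts a hard three-hour clock, which is the operational burden they asked MeitY to relax.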

Section 69A — Government's Content Blocking Power

Section 69A of the IT Act, 2000 empowers the Central Government to direct blocking of any content on the internet in the interest of sovereignty and integrity of India, defence of India, security of the State, friendly relations with foreign States, or public order, or for preventing incitement to the commission of any cognizable offence. The procedure is detailed in the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

  • Blocking orders under Section 69A are issued through a Designated Officer (not below the rank of Joint Secretary) with the approval of the Secretary, MeitY; in emergency cases the Secretary may approve interim blocking, subject to committee review
  • Intermediaries that fail to comply with a blocking order face criminal liability under Section 69A(3) (imprisonment up to seven years and fine), in addition to the risk of losing safe harbor protection
  • Section 69A was upheld by the Supreme Court in Shreya Singhal v. Union of India (2015), which struck down Section 66A but upheld 69A with the condition that blocking orders must be reasoned and subject to judicial review
  • The provision is distinct from Section 69 (interception/decryption of information) and Section 69B (monitoring of traffic data)

Connection to this news: The new 3-hour takedown rule for AI/deepfake content operates alongside Section 69A powers, creating a dual-track removal system — platform-driven takedowns on receipt of complaints and government-ordered blocks under 69A.

Intermediary Safe Harbor and Platform Liability — Comparative Context

India's Section 79 safe harbor is analogous to Section 230 of the US Communications Decency Act (CDA) and Article 14 of the EU's e-Commerce Directive (now superseded by the EU Digital Services Act, 2022). The global trend is toward platform liability expansion — the EU's DSA 2022 introduced risk-based obligations for Very Large Online Platforms (VLOPs), including systemic risk assessment and algorithmic audits.

  • India's Section 79 safe harbor: Conditional immunity if intermediary (a) has no knowledge of unlawful content, (b) does not initiate or modify content, and (c) acts expeditiously to remove upon notice
  • EU Digital Services Act (2022): Risk-based approach — VLOPs (>45 million EU users) face obligations including algorithm audits, independent auditing, crisis response protocols
  • US CDA Section 230: Broader immunity — no takedown-to-retain-immunity rule (unlike India)
  • India's approach aligns with a "regulated intermediary" model rather than pure platform neutrality

Connection to this news: The tech platforms' pushback at the MeitY meeting reflects a global tension between government demands for content accountability and platforms' preference for minimal liability exposure.

Key Facts & Data

  • Meeting duration: approximately 30 minutes; MeitY Secretary declined all requests to dilute provisions
  • Takedown window: reduced from 36 hours to 3 hours for unlawful AI-generated/deepfake content
  • AI content label size: minimum 10% of screen area (visual) or 10% of duration (audio)
  • Legal basis: IT Act, 2000 (Section 87) → IT Rules, 2021 → 2026 Amendment
  • Safe harbor provision: Section 79, IT Act, 2000
  • Content blocking power: Section 69A, IT Act, 2000
  • Supreme Court precedent on 69A: Shreya Singhal v. Union of India (2015)
  • SSMI threshold: 50 lakh (5 million) registered users in India