
Many nations have lauded India’s move to mandate AI labelling, says Vaishnaw, as new IT rules take effect


What Happened

  • The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, came into force on February 20, 2026, mandating clear labelling of all AI-generated and synthetically generated content.
  • IT Minister Ashwini Vaishnaw stated that several countries have lauded India's move, with three countries explicitly expressing interest in adopting a similar framework.
  • The rules require platforms to deploy automated tools to verify whether uploaded content is synthetically generated and to embed permanent metadata or unique identifiers for origin tracing.
  • Takedown timelines have been significantly tightened: court-ordered or law enforcement-directed removals must be completed within 3 hours (down from 36 hours), while non-consensual deepfake nudity must be removed within 2 hours (down from 24 hours).

Static Topic Bridges

IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021

The IT Rules, 2021, were framed under Section 87(2) of the Information Technology Act, 2000, and replaced the earlier IT (Intermediary Guidelines) Rules, 2011. They provide the primary regulatory framework for social media intermediaries, significant social media intermediaries (SSMIs), and digital media platforms in India.

  • Section 79 of the IT Act, 2000: Provides "safe harbour" protection to intermediaries — they are not liable for third-party content if they follow due diligence requirements
  • SSMI classification: Platforms with 50 lakh or more registered users in India must comply with additional obligations (grievance officer, chief compliance officer, nodal contact person)
  • Three-tier grievance redressal: Platform-level grievance officer → Grievance Appellate Committee (GAC) → Courts
  • Digital Media Ethics Code: Part III of the Rules governs digital news media and OTT platforms
  • Key amendments: 2022 (Grievance Appellate Committee), 2023 (fact-check unit provisions, stayed by courts), 2026 (synthetic content labelling)

Connection to this news: The 2026 amendment adds a new layer of obligations specifically targeting AI-generated content, building on the existing intermediary liability framework by requiring platforms to actively verify and label synthetic content rather than merely responding to complaints.

Synthetically Generated Information (SGI) — Definition and Regulation

Under the 2026 amendment, Synthetically Generated Information (SGI) is defined as any audio, visual, or audio-visual content that has been algorithmically created or altered so that it appears authentic, i.e., is indistinguishable from a real person or real-world event. This includes deepfakes, AI-generated images, synthetic voice clones, and algorithmically altered videos.

  • Mandatory labelling: AI-generated video must carry a visible watermark; AI-generated audio must begin with a spoken disclaimer
  • Provenance markers: Platforms must embed metadata that stays with the file even when shared across platforms, allowing investigators to trace the origin back to the specific AI tool used
  • User declaration: Uploaders must declare if content was made with AI; platforms must deploy automated tools to verify such declarations
  • Loss of safe harbour: Failure to label AI content or missing a takedown window results in loss of Section 79 safe harbour protection, making the platform liable as if it created the content
  • Global first: India is being described as having the world's first binding synthetic content provenance mandate

Connection to this news: The SGI provisions represent a shift from reactive content moderation (respond to complaints) to proactive content governance (verify and label at upload), placing India at the forefront of global AI content regulation.
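The provenance-marker idea described above, a record that travels with a file and binds it to the tool that produced it, can be sketched minimally. The JSON manifest, the `tool_id` field name, and the hash-based binding below are illustrative assumptions for explanation only; the Rules do not prescribe this specific format.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_manifest(content: bytes, tool_id: str) -> dict:
    """Build a hypothetical provenance record binding content to its source tool.

    The SHA-256 digest ties the manifest to the exact bytes of the file, so
    any later alteration of the content breaks the binding.
    """
    return {
        "tool_id": tool_id,  # illustrative identifier of the generating AI tool
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content still matches the hash recorded in the manifest."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

# Example: mark a synthetic payload, then confirm tampering is detectable.
video_bytes = b"...synthetic video payload..."
manifest = make_provenance_manifest(video_bytes, tool_id="example-genai-tool")
print(json.dumps(manifest, indent=2))
assert verify_manifest(video_bytes, manifest)            # untampered content passes
assert not verify_manifest(video_bytes + b"x", manifest)  # any edit breaks the binding
```

Real-world schemes (such as the C2PA standard) additionally sign the manifest cryptographically and embed it inside the media container itself, so it survives re-sharing across platforms rather than travelling as a separate file.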

Global AI Regulation Landscape

AI regulation is evolving rapidly across jurisdictions, with different approaches ranging from comprehensive legislation to sector-specific guidelines. India has adopted a "techno-legal" approach — using existing legal frameworks (IT Act) with technology mandates (metadata, automated verification) rather than creating a standalone AI law.

  • EU AI Act (2024): World's first comprehensive AI legislation; risk-based classification (unacceptable, high, limited, minimal risk); requires transparency for AI-generated content
  • China: Interim Measures for the Management of Generative Artificial Intelligence Services (2023); requires AI content to be labelled and traceable
  • US: No comprehensive federal AI legislation; relies on sector-specific guidance and executive orders
  • India's approach: No standalone AI law; regulation through IT Act amendments, advisories, and sectoral guidelines
  • Global engagement: India is in talks with approximately 30 nations on AI regulation frameworks, according to Minister Vaishnaw

Connection to this news: India's binding provenance mandate goes further than most jurisdictions by requiring embedded metadata for tracing, not just visible labelling, positioning it as a potential model for developing countries seeking to regulate AI-generated content.

Key Facts & Data

  • Amendment notified: February 10, 2026; came into force: February 20, 2026
  • Takedown timeline: 3 hours for court-ordered content (previously 36 hours); 2 hours for non-consensual deepfake nudity (previously 24 hours)
  • SSMI threshold: 50 lakh registered users in India
  • Safe harbour provision: Section 79, IT Act, 2000
  • Countries expressing interest in India's model: 3 countries (as stated by IT Minister)
  • India in discussions with approximately 30 nations on AI regulation
  • Parent legislation: Information Technology Act, 2000 (Section 87(2))