What Happened
- Union Minister for Electronics and Information Technology Ashwini Vaishnaw, speaking at the DNPA Conclave 2026, declared that synthetic content (deepfakes and other AI-generated media) must not be created without the explicit consent of the person whose face, voice, or personality is used.
- He stated that platform accountability is now mandatory — digital platforms can no longer act as passive intermediaries and must take responsibility for hosted content, including children's online safety.
- Vaishnaw cited the 2025 Amendment to the IT Rules (in force from November 15, 2025), which mandates that platforms deploy tools to verify synthetically generated content and enforce transparency through watermarking.
- He warned that several countries have already enacted platform accountability laws and India could pursue legislation if voluntary compliance fails.
- The minister also called on digital platforms to rethink revenue-sharing with news publishers and to compensate content creators fairly.
- Age-based restrictions on social media access for children were also flagged as a policy area under consideration.
Static Topic Bridges
IT Rules 2021 and the 2025 Amendments — Regulatory Framework for Synthetic Content
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules 2021) were notified under Section 87 read with Section 79 of the Information Technology Act, 2000. They created a tiered framework covering regular intermediaries, Significant Social Media Intermediaries (SSMIs) with over 50 lakh registered users, and publishers of news and current affairs content. The 2025 Amendment (in force from November 15, 2025) introduced India's first legislative definition of synthetically generated content: "information artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears reasonably authentic or true."
- Under the 2025 Amendment, SSMIs must make "reasonable efforts" to remove synthetic content that is false or misleading, without waiting for a court order — failure can result in loss of safe harbour protection under Section 79 of the IT Act, 2000.
- Section 79, IT Act, 2000: Safe harbour provision — intermediaries not liable for third-party content if they do not initiate, select, or modify the content and observe due diligence.
- The 2025 Amendment mandates watermarking or labelling of synthetic content — similar to the EU AI Act's transparency requirements for AI-generated content.
- SSMIs must publish compliance reports and appoint a Grievance Officer, Nodal Officer, and Chief Compliance Officer.
- The IT Rules 2021 were challenged in several High Courts — the Madras and Bombay High Courts stayed specific provisions; the Supreme Court is hearing consolidated appeals.
Connection to this news: Vaishnaw's call for mandatory consent is a signal that India may extend the 2025 Amendment further — moving from watermarking requirements to explicit prior consent frameworks, which would place India among the most stringent deepfake regulators globally.
Existing IT Act Provisions on Deepfakes and Digital Identity
The Information Technology Act, 2000, contains several provisions applicable to deepfakes, though none explicitly use the term. Section 66C covers identity theft (fraudulent use of another person's electronic signature, password, or unique identification feature) with up to 3 years imprisonment and Rs 1 lakh fine. Section 66D addresses cheating by impersonation using computer resources. Section 66E prohibits violation of privacy through capturing, publishing, or transmitting images of a private area of a person without consent — applicable to non-consensual intimate deepfakes — carrying up to 3 years imprisonment and Rs 2 lakh fine.
- Section 67: Transmission of obscene material electronically — 3 years imprisonment and Rs 5 lakh fine (first offence).
- Section 67A: Transmission of sexually explicit material — 5 years imprisonment and Rs 10 lakh fine (first offence).
- Bharatiya Nyaya Sanhita (BNS), 2023: Section 78 (stalking, including cyberstalking) and Section 79 (word, gesture or act intended to insult the modesty of a woman) — applicable in deepfake harassment contexts.
- These fragmented provisions do not constitute a comprehensive deepfake law — there is no unified statute specifically addressing AI-generated synthetic content.
Connection to this news: The minister's warning about mandatory consent signals an intent to bridge this legislative gap — either through standalone deepfake legislation or through further amendments to the IT Act/IT Rules, creating an explicit consent-based framework.
Platform Liability and the Safe Harbour Debate — Global and Indian Context
The "safe harbour" doctrine (Section 79, IT Act, 2000; modelled on Section 230, US Communications Decency Act, 1996) has shielded digital platforms from liability for user-generated content as long as they act expeditiously on takedown notices. The emergence of AI-generated synthetic content — which platforms may actually facilitate through their own AI tools — challenges the passive intermediary model that underpins safe harbour. The EU's Digital Services Act (DSA, 2022) and the AI Act (2024) represent the most comprehensive regulatory shift toward platform accountability and mandatory AI transparency.
- Section 79, IT Act, 2000: Safe harbour available only if the intermediary (a) does not initiate or select content, (b) observes due diligence, and (c) on actual knowledge of unlawful content, expeditiously removes it.
- EU Digital Services Act (2022): Places risk-based obligations on Very Large Online Platforms (VLOPs) — systemic risk assessments, independent audits, crisis response protocols.
- EU AI Act (2024): First comprehensive AI regulation globally — classifies AI systems by risk level (unacceptable/high/limited/minimal); requires mandatory transparency labelling for AI-generated content.
- India's proposed Digital India Act (DIA) — meant to replace the IT Act, 2000 — has been delayed; its provisions are expected to address platform accountability and AI regulation.
Connection to this news: Vaishnaw's framing of "mandatory accountability" for platforms tracks the global move from voluntary content moderation to legally mandated due diligence — a shift that will require amendments to India's IT Act framework.
Key Facts & Data
- IT Act, 2000: Section 66C (identity theft, 3 years + Rs 1 lakh fine), Section 66D (impersonation), Section 66E (privacy violation, 3 years + Rs 2 lakh fine), Sections 67/67A (obscene/sexually explicit content).
- IT Rules 2021: SSMI threshold — 50 lakh registered users.
- 2025 Amendment to IT Rules: In force November 15, 2025 — first legislative definition of synthetic content in India.
- Safe harbour: Section 79, IT Act, 2000.
- EU AI Act: Adopted 2024 — first comprehensive AI regulation globally.
- EU Digital Services Act: 2022 — risk-based obligations on Very Large Online Platforms.
- India's Digital India Act: Proposed replacement for IT Act, 2000 — drafting ongoing as of 2026.
- Vaishnaw's statement venue: DNPA (Digital News Publishers Association) Conclave 2026.