What Happened
- Union Minister for Electronics and Information Technology Ashwini Vaishnaw stated that India needs "much stronger regulation" to address the growing threat posed by deepfakes — AI-generated synthetic media that realistically impersonate real individuals.
- The minister's remarks reflect the government's recognition that existing legal frameworks (the IT Act, 2000, and even the newer DPDP Act, 2023) are not fully equipped to address the specific harms caused by deepfake technology.
- India has witnessed high-profile deepfake incidents involving public figures, politicians, and celebrities, raising concerns about electoral integrity, reputational harm, and non-consensual intimate imagery.
- The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules were amended in November 2025 to define "synthetically generated information" and to require significant social media intermediaries to make "reasonable efforts" to remove such content, without awaiting a court order.
- The minister's call signals potential further legislative or regulatory action in 2026.
Static Topic Bridges
Information Technology Act 2000 and Existing Cyber Law Framework
The Information Technology Act, 2000 is the primary legislation governing cyberspace in India. It defines offences related to computer systems, data, and online content, and has been amended over the years to address emerging challenges. While it covers identity theft, impersonation, and obscene content, it does not specifically address AI-generated deepfakes.
- Section 66C (IT Act): Identity theft — dishonestly using another person's electronic signature, password, or unique identification — punishable with up to 3 years imprisonment and ₹1 lakh fine.
- Section 66D: Cheating by personation using computer resources — up to 3 years and ₹1 lakh fine.
- Section 67: Publishing or transmitting obscene material in electronic form — up to 3 years and ₹5 lakh fine (first offence); up to 5 years and ₹10 lakh (repeat).
- Section 67A: Publishing sexually explicit material — up to 5 years and ₹10 lakh fine.
- These provisions can be applied to deepfake content but were not drafted with AI-generated synthetic media in mind — leading to enforcement gaps.
- IT (Amendment) Rules, November 2025: First formal definition of "synthetically generated information" in Indian law; social media intermediaries must make "reasonable efforts" to remove such content without awaiting court orders.
Connection to this news: Vaishnaw's call for "much stronger regulation" acknowledges that piecemeal application of existing IT Act provisions is insufficient — a dedicated regulatory framework or amendment is needed to specifically criminalise malicious deepfake creation and deployment.
Digital Personal Data Protection Act 2023 (DPDPA)
The Digital Personal Data Protection Act, 2023 was enacted to regulate the processing of digital personal data in India, establishing rights for data principals (individuals) and obligations for data fiduciaries (entities processing data). It is relevant to deepfakes because using a person's likeness, voice, or biometric data to create synthetic content without consent constitutes processing of personal data.
- DPDPA requires "consent" for processing personal data — using someone's face/voice in a deepfake without permission violates the consent requirement.
- Data Principal Rights under DPDPA: Right to access information, right to correction and erasure, right to grievance redressal, right to nominate a representative.
- Penalties under DPDPA: Up to ₹250 crore per breach for significant violations.
- The Data Protection Board established under DPDPA can adjudicate complaints — including cases of non-consensual synthetic media creation.
- Limitation: DPDPA focuses on data processing rather than content harms specifically, creating a gap in addressing the full spectrum of deepfake harms (e.g., political disinformation, non-intimate but reputationally damaging content).
- India's AI governance framework (as of 2026) is still evolving — DPDPA and IT Rules together provide a partial framework, but no dedicated AI regulation law exists yet.
Connection to this news: The DPDPA 2023 provides a partial remedy for consent violations in deepfake creation, but its penalties and focus on data processing rather than content harm mean that a dedicated deepfake-specific law — as Vaishnaw suggests — would close significant gaps.
Artificial Intelligence Governance — Global and India Context
Deepfakes are a product of generative AI, chiefly Generative Adversarial Networks (GANs) and diffusion models. Their governance raises questions about platform liability, creator accountability, and the balance between innovation and harm prevention.
- Generative AI techniques: GANs (two neural networks, a generator and a discriminator, trained adversarially) and diffusion models (which learn to reverse a gradual noising process); both can produce photorealistic fake images, audio, and video.
- Global regulatory approaches: the EU AI Act (2024) imposes transparency obligations on deepfakes, requiring AI-generated content to be labelled, and prohibits certain manipulative AI practices outright. US: the Deepfakes Accountability Act (proposed) targets non-consensual intimate deepfakes. China: regulations since 2022 require labelling of AI-generated ("deep synthesis") content and platform accountability.
- India's approach: Currently relies on IT Act + DPDPA + intermediary guidelines; no standalone AI law; November 2025 IT Rules amendment is the most specific step so far.
- Electoral deepfakes: A special concern for India given the scale of elections; the Election Commission has flagged AI-generated content as an emerging threat to free and fair elections.
- The IT Rules 2025 amendment placed obligations on "significant social media intermediaries" (SSMIs) — platforms with over 50 lakh registered users in India.
Connection to this news: The minister's call for stronger regulation reflects India's lag behind the EU AI Act and China's deepfake rules, and signals that a more comprehensive statutory framework may be forthcoming, likely building on the November 2025 IT Rules amendment.
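The adversarial training idea behind GANs (generator vs. discriminator, mentioned above) can be illustrated with a deliberately tiny sketch. This is a toy model, not production code: the "generator" is a single shift parameter `theta` that turns noise into fake samples, the "discriminator" is a one-variable logistic classifier `(w, b)`, and the generator uses the standard non-saturating loss. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0          # generator parameter: fake sample = z + theta
w, b = 0.1, 0.0      # discriminator: D(x) = sigmoid(w*x + b)
lr = 0.05

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for step in range(3000):
    real = rng.normal(4.0, 1.0, 64)   # real data: N(4, 1)
    z = rng.normal(0.0, 1.0, 64)      # noise input
    fake = z + theta                  # generator output: N(theta, 1)

    # Discriminator step: ascend on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    grad_w = np.mean((1 - d_real) * real) + np.mean(-d_fake * fake)
    grad_b = np.mean(1 - d_real) + np.mean(-d_fake)
    w += lr * grad_w
    b += lr * grad_b

    # Generator step: ascend on log D(fake) (non-saturating loss)
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - d_fake) * w)

# After training, theta has drifted toward the real data mean,
# i.e. the fake distribution N(theta, 1) now mimics N(4, 1).
```

The two updates pull in opposite directions: the discriminator learns to separate real from fake, and the generator exploits the discriminator's gradient to make its fakes harder to distinguish. Scaled up to deep networks over images, audio, and video, this same adversarial loop is what makes deepfakes photorealistic, which is why regulators focus on labelling outputs rather than banning the underlying technique.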
Key Facts & Data
- November 2025: IT (Intermediary Guidelines) Amendment Rules — first definition of "synthetically generated information" in Indian law.
- ₹250 crore: Maximum penalty under DPDP Act 2023 for significant data protection violations.
- Section 66C & 66D (IT Act 2000): Existing provisions on identity theft and cheating by personation that can be applied to deepfakes.
- EU AI Act (2024): Requires labelling of AI-generated content, including deepfakes; prohibits certain manipulative AI practices.
- SSMIs (Significant Social Media Intermediaries): Platforms with 50+ lakh registered users in India — bear enhanced obligations under IT Rules.
- Ashwini Vaishnaw: Minister of Electronics & Information Technology (MeitY), also holds the Railways portfolio.