What Happened
- Speaking at the Digital News Publishers Association (DNPA) Conclave 2026 in New Delhi, the Union Minister for Information and Broadcasting identified deepfakes and the spread of disinformation through synthetically generated content as major threats to institutional trust.
- The Minister urged digital platforms to adopt a "fair share of revenue" model for content creators — including journalists, legacy newsrooms, independent creators, influencers, and academics — warning that the government would not hesitate to bring a legal framework if platforms did not act voluntarily.
- The Minister called for stronger regulation of deepfakes, noting: "We need a stronger regulation on deepfakes. It is a problem growing day by day."
- The remarks signal that revenue fairness, content accountability, and online safety may converge into a larger regulatory reset for India's digital ecosystem.
- India is already in talks with 30 nations on an AI regulation framework, with the Minister highlighting the need for international coordination to address synthetic media threats.
- The government had earlier notified amendments to the IT Rules, 2021, mandating a 3-hour takedown timeline for deepfake content and requiring AI-generated content to be labelled.
Static Topic Bridges
Deepfakes and Synthetic Media: Technology and Governance Challenge
Deepfakes are AI-generated or algorithmically altered media — video, audio, images, or text — designed to realistically depict events or statements that never occurred. They are typically created using generative adversarial networks (GANs) or diffusion models. The threat landscape includes: political disinformation (fabricated speeches by leaders), non-consensual intimate imagery, financial fraud (voice-cloned impersonation), and erosion of trust in authentic documentation. India's IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 establish the world's first official framework specifically governing deepfakes.
- Deepfake technology: Primarily GAN (Generative Adversarial Network) and diffusion model-based
- IT Rules 2026 amendments: 3-hour takedown for deepfake content (down from 36 hours in earlier rules)
- Labelling mandate: AI-generated videos must carry prominent "AI-Generated" label covering one-tenth of frame area throughout duration
- Audio deepfakes: Mandatory audible disclosure in opening seconds
- Traceability: Platforms must proactively identify and verify synthetic content
- Safe harbour risk: Failure to comply can result in loss of Section 79 protection under the IT Act, 2000
Connection to this news: The minister's call for stronger deepfake regulation was followed by a regulatory instrument — the 2026 IT Rules amendment — demonstrating a shift from voluntary compliance expectations to binding obligations for platforms operating in India.
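The one-tenth frame-area labelling mandate above is a concrete arithmetic rule, and it can be sketched as a simple compliance check. This is a minimal illustrative sketch, not text from the Rules themselves: the function names (`min_label_area`, `label_compliant`) and the sample resolutions are assumptions chosen for illustration.

```python
def min_label_area(frame_width: int, frame_height: int) -> int:
    """Minimum pixel area an "AI-Generated" label must cover,
    under the one-tenth-of-frame-area mandate (illustrative reading)."""
    return (frame_width * frame_height) // 10

def label_compliant(frame_w: int, frame_h: int,
                    label_w: int, label_h: int) -> bool:
    """True if the label's bounding box meets the one-tenth threshold."""
    return label_w * label_h >= min_label_area(frame_w, frame_h)

# A 1920x1080 frame has 2,073,600 pixels, so the label must cover
# at least 207,360 pixels in every frame for the full duration.
print(min_label_area(1920, 1080))             # → 207360
print(label_compliant(1920, 1080, 640, 360))  # 640*360 = 230,400 → True
print(label_compliant(1920, 1080, 480, 270))  # 480*270 = 129,600 → False
```

The point of the sketch is that the mandate is mechanically checkable per frame, which is what makes "proactive identification" by platforms operationally plausible.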
Platform Liability and Safe Harbour: Section 79 of IT Act, 2000
Section 79 of the Information Technology Act, 2000 provides "safe harbour" protection to internet intermediaries — shielding them from liability for third-party content they host, provided they meet due diligence requirements. This framework drew on international precedents such as the US Digital Millennium Copyright Act (DMCA) and Section 230 of the Communications Decency Act. The IT (Intermediary Guidelines) Rules define what "due diligence" means for different categories of platforms. Significant Social Media Intermediaries (SSMIs) — those with over 5 million registered users — face heightened obligations including grievance officers, content monitoring, and monthly compliance reports.
- Section 79, IT Act 2000: Intermediary not liable for third-party content if it exercises due diligence
- IT Rules, 2021: Established three-tier grievance redressal, Grievance Appellate Committee (GAC), SSMI obligations
- SSMI threshold: >5 million registered users in India — applies to WhatsApp, Facebook, YouTube, Twitter/X, Instagram
- Appointment requirements under IT Rules 2021: Chief Compliance Officer, Nodal Contact Person, Resident Grievance Officer (all India-based)
- Loss of safe harbour: If a platform promotes or permits unlawful content, or fails to act against it despite knowledge
Connection to this news: The minister's warning that the government would use legal mechanisms if platforms do not adopt fair revenue sharing and deepfake controls directly invokes the safe harbour framework — the government can enforce compliance by amending the IT Act conditions on which safe harbour protection depends.
Press and Media Regulation: Revenue Sharing in the Digital Age
The disruption of news publisher revenues by digital platforms — particularly Google (search + news aggregation) and Meta (Facebook) — has prompted legislative responses globally. Australia's News Media Bargaining Code (2021) was the first law mandating negotiations between platforms and news organisations over revenue sharing, resulting in significant payments. Canada (Online News Act, 2023) and the European Union (Copyright Directive Article 15 — "neighbouring rights" for publishers) have enacted similar frameworks. India's I&B Minister's reference to a potential legal route echoes this global trend.
- Australia News Media Bargaining Code (2021): Pioneered mandatory arbitration between platforms and news publishers; resulted in Google and Meta signing deals totalling hundreds of millions of dollars
- EU Copyright Directive (2019), Article 15: Publishers' neighbouring rights — platforms must compensate for use of news snippets
- India's Digital News Publishers Association (DNPA): Represents major Indian news publishers on platform revenue disputes
- Advertising market shift: Digital advertising in India is now dominated by Google and Meta; news publishers' share has declined sharply
- Press Council of India (PCI): Statutory media regulatory body under Press Council Act, 1978 — advises on print media; digital media regulation is under MIB through IT Rules
Connection to this news: The minister's framing of revenue sharing as a fairness imperative that may require legislation places India potentially in line with Australia and the EU — though India has not yet proposed a specific statutory mechanism comparable to the Australian News Media Bargaining Code.
Key Facts & Data
- Venue: DNPA Conclave 2026, New Delhi
- IT Rules 2026 deepfake amendment: 3-hour takedown timeline; AI-generated content labelling mandate
- Section 79, IT Act 2000: Safe harbour for intermediaries — conditional on due diligence
- SSMI threshold: >5 million registered users
- Australia News Media Bargaining Code: 2021 — first law mandating platform-publisher revenue negotiations
- EU Copyright Directive: 2019 — Article 15 creates publishers' neighbouring rights
- India in talks: With 30 nations on AI regulation framework (as of Feb 2026)
- Threat identified: Deepfakes, synthetically generated disinformation as threats to institutional trust