
Meta AI’s ‘Vibes’ feature floods platform with sexual videos of children and explicit Bollywood deepfakes


What Happened

  • Meta AI's 'Vibes' feature — a short-form video generation tool launched in September 2025 — was exploited to produce sexually explicit content including children's faces morphed onto adult bodies and deepfakes of Bollywood celebrities
  • Users bypassed child-safety guardrails by describing underage subjects as adults in prompts, resulting in sexually explicit AI-generated videos of minors
  • The content violated India's Information Technology rules; the incident is part of a broader global surge in AI-generated child sexual abuse material (CSAM), with the Internet Watch Foundation documenting a 260-fold year-on-year increase in AI-generated CSAM
  • The episode has intensified calls for platform accountability and dedicated deepfake regulation across multiple jurisdictions including India

Static Topic Bridges

Section 67B of the Information Technology Act, 2000 — CSAM Provisions

Section 67B of the IT Act, 2000 criminalises the publication, transmission, creation, collection, and possession of child sexual abuse material in electronic form. It prescribes up to five years' imprisonment and a fine of up to ₹10 lakh on first conviction, rising to up to seven years' imprisonment with the same fine for subsequent convictions. It is a non-bailable offence.

  • In September 2024, the Supreme Court (Just Rights for Children Alliance v. S. Harish) clarified that merely downloading or storing CSAM is itself an offence — active sharing or transmission need not be proved
  • Section 67B overlaps with Section 15 of the Protection of Children from Sexual Offences (POCSO) Act, 2012, which criminalises possession of child pornography
  • POCSO 2012 (amended 2019) defines a child as any person below 18 years and applies equally to digital/AI-generated content depicting minors

Connection to this news: Meta's Vibes feature generated AI content that would squarely attract Section 67B and POCSO liability; the case highlights that existing law covers AI-generated CSAM, but enforcement mechanisms lag behind technological capabilities.

Deepfake Regulation and India's Legislative Gap

India currently has no standalone deepfake legislation. Sections 67, 67A, and 67B of the IT Act provide the closest legal basis for prosecuting deepfake-based obscene content, but these were drafted before generative AI existed at scale. The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 require platforms to remove non-consensual intimate imagery within 24 hours of receiving a complaint from the affected individual or someone on their behalf (Rule 3(2)(b)).

  • Rule 3(1)(b) of the IT Rules, 2021 requires intermediaries to make reasonable efforts to ensure users do not host content that is obscene or pornographic, "harmful to child," or "patently false and untrue"
  • The Ministry of Electronics and Information Technology issued advisories in 2023 and 2024 directing AI platforms to label synthetic content and prevent deepfake misuse
  • UNICEF estimates over 1.2 million children have been victimised globally through deepfake-generated sexual content [Unverified: precise figure]
  • The UK's Online Safety Act (2023) explicitly brings AI-generated CSAM within its illegal-content duties, while the EU's AI Act (2024) imposes transparency and risk-management obligations on generative-AI providers; both offer comparative models for India

Connection to this news: The Meta Vibes incident exposes the regulatory vacuum in India — while the IT Act catches the content, it does not impose obligations on AI model developers the way the EU AI Act does.

Platform Accountability and Intermediary Liability

Section 79 of the IT Act grants safe harbour (immunity from liability) to intermediaries for third-party content, provided they act as passive conduits and comply with takedown orders. When an intermediary has "actual knowledge" of unlawful content or fails to act on government notices, this immunity is forfeited.

  • The Information Technology (Intermediary Guidelines) Rules, 2021 impose due diligence obligations: platforms with over 5 million registered users are classified as "significant social media intermediaries" and must appoint a Chief Compliance Officer, a Nodal Contact Person (for 24x7 coordination with law enforcement), and a Resident Grievance Officer, all resident in India
  • Significant social media intermediaries must take down unlawful content within 36 hours of a court order or government notification, and must provide information to authorised government agencies within 72 hours of a lawful request
  • The Shreya Singhal v. Union of India (2015) Supreme Court judgment struck down Section 66A and read down Section 79(3)(b), holding that "actual knowledge" arises only from a court order or a government notification — not from private complaints

Connection to this news: Meta's failure to prevent its own generative AI tool from producing CSAM tests the limits of intermediary immunity — unlike user-generated content, the harmful output originated directly from Meta's AI model, potentially implicating the company beyond the safe-harbour shield.

Key Facts & Data

  • The Internet Watch Foundation recorded a 260-fold increase in AI-generated CSAM in a single year (2024–2025)
  • Section 67B IT Act: first offence — up to 5 years + ₹10 lakh fine; second offence — up to 7 years + ₹10 lakh fine
  • Supreme Court (2024): downloading or storing CSAM is itself an offence under Section 67B and Section 15 of POCSO
  • India's IT Rules, 2021 require removal of non-consensual intimate imagery within 24 hours of a complaint from the affected individual (Rule 3(2)(b))
  • Meta's Vibes was launched as a TikTok-style short AI-video feature in September 2025
  • The EU AI Act (2024) bans "unacceptable risk" AI practices outright — the highest tier of its risk-based framework; AI-generated CSAM itself remains criminalised under the EU's separate child sexual abuse legal framework