
Europe takes first step toward banning AI-generated child sexual abuse images


What Happened

  • European Union member states reached a common position in mid-March 2026 to formally ban AI-generated child sexual abuse material (CSAM) and non-consensual intimate deepfakes through an amendment to the EU AI Act.
  • The push was directly catalysed by a scandal involving Elon Musk's AI chatbot Grok: users exploited the tool to generate sexualised images of real women and girls, with researchers estimating at least 6,700 such images were produced in a 48-hour window in January 2026.
  • The European Commission confirmed that the existing EU AI Act — as written — did not explicitly prohibit AI systems capable of generating CSAM or non-consensual intimate deepfakes, creating a legal gap that urgently needed closing.
  • The proposed amendment would add "AI-generated non-consensual sexual and intimate content and child sexual abuse material" to the list of prohibited AI practices under Article 5 of the AI Act.
  • EU tech regulators and national data protection authorities are separately investigating Grok for potential violations of GDPR and the Digital Services Act.

Static Topic Bridges

The EU AI Act: A Risk-Based Regulatory Framework

The EU Artificial Intelligence Act, formally adopted in 2024, is the world's first comprehensive binding legal framework for AI. It takes a risk-based approach, classifying AI applications into four tiers: the highest-risk applications are outright prohibited, with progressively lighter obligations for each lower-risk tier.

  • Prohibited practices (Article 5): Originally comprised eight categories, including biometric mass surveillance in public spaces, social scoring by governments, subliminal manipulation techniques, exploitation of vulnerable groups, and real-time remote biometric identification by law enforcement (with narrow exceptions). The proposed amendment adds AI-generated CSAM and non-consensual deepfakes.
  • High-risk AI: Applications in critical infrastructure, education, employment, essential services, law enforcement, border management, biometric identification — subject to strict conformity assessments, transparency obligations, and human oversight requirements.
  • Limited risk: Chatbots and generative AI tools — must disclose that content is AI-generated.
  • Minimal risk: Most AI applications — no obligations.
  • The AI Act prohibition provisions became effective in February 2025; the full Act is being phased in through 2026–2027.

Connection to this news: The Grok scandal exposed a gap in the original prohibited practices list: generative AI tools used to produce CSAM occupied a legal grey zone. The amendment closes this gap by explicitly adding AI-generated CSAM to Article 5's absolute prohibitions.

Deep Fakes, AI-Generated CSAM, and the Governance Challenge

A deepfake is a synthetic media product — image, video, or audio — in which a person's likeness is replaced or manipulated using AI (typically generative adversarial networks or diffusion models). AI-generated CSAM is an especially severe category: it creates exploitative imagery without requiring an actual victim to be physically harmed during production, but causes significant psychological harm to real individuals whose likenesses are used, and normalises abuse.

  • Nudification tools (apps that strip clothing from photographs using AI) are widely available and largely unregulated — the EU's proposed amendment includes a ban on such tools
  • The Internet Watch Foundation (IWF) reported a 17× increase in AI-generated CSAM detected online between 2023 and 2025
  • Existing international law on CSAM (e.g., Optional Protocol to the Convention on the Rights of the Child on the Sale of Children) predates AI-generated content and does not automatically cover it in many jurisdictions
  • India's IT Act, 2000 (Section 67B) prohibits publication or transmission of child pornographic material; the definition has been interpreted to include AI-generated CSAM by some legal scholars, but no explicit provision exists yet
  • India's draft Digital India Act (expected to replace the IT Act) is expected to include explicit deepfake regulation and AI content accountability obligations — though the Digital India Act remains pending as of early 2026

Connection to this news: The EU's move to amend the AI Act sets a precedent for how democracies can close regulatory gaps created by rapidly advancing AI capabilities. India, in developing its own AI governance framework, faces analogous challenges.

India's AI Governance Approach: Guidelines Over Legislation

India has taken a notably different approach to AI regulation than the EU. Rather than comprehensive binding legislation, the government has opted for soft governance: the Ministry of Electronics and Information Technology (MeitY) released the India AI Governance Guidelines in November 2025 under the IndiaAI Mission, explicitly framing them as non-binding, innovation-enabling principles rather than enforceable regulations.

  • India's AI governance rests on three pillars: existing laws (IT Act, DPDP Act 2023, IP laws), sector-specific regulations (RBI, SEBI, IRDAI for their respective domains), and the non-binding AI Governance Guidelines
  • The Digital India Act, which would introduce risk-based classification of digital services and AI-specific provisions (including deepfake regulation), remains in draft stage
  • India's IndiaAI Mission (budget: ₹10,372 crore) focuses primarily on compute infrastructure, foundational model development, and AI skilling — governance is secondary to capability building
  • The EU AI Act's extraterritorial reach means Indian AI companies offering services in Europe must comply — creating de facto regulatory pressure

Connection to this news: As the EU tightens AI regulation — including mandatory prohibitions on harmful AI applications — India's soft-touch approach may face growing pressure to evolve, particularly on issues like deepfakes and AI-generated CSAM where harms are clear and severe.

Key Facts & Data

  • EU AI Act: adopted 2024, prohibited practices effective February 2025
  • Grok incident: ~6,700 sexualised AI-generated images produced in 48 hours (January 2026)
  • EU common position (March 2026): adds AI-generated CSAM and non-consensual deepfakes to prohibited practices under Article 5
  • IWF: 17× increase in AI-generated CSAM detected online between 2023 and 2025
  • India's IT Act Section 67B: prohibits child pornographic content (includes AI-generated content under broad interpretation)
  • India's Digital India Act: pending — expected to include deepfake and AI content regulation
  • EU AI Act's four risk tiers: Prohibited, High-risk, Limited risk, Minimal risk
  • Nudification tools: targeted for ban under the proposed EU AI Act amendment
  • GDPR + Digital Services Act: separate regulatory tools under which Grok is being investigated in Europe