
Spanish government seeks probe into X, Meta, TikTok over AI child abuse content


What Happened

  • The Spanish government asked its public prosecutor on February 17, 2026 to investigate TikTok, X (formerly Twitter), and Meta for allegedly spreading AI-generated child sexual abuse material (CSAM) on their platforms.
  • A government report found that one in five Spanish children report having already been affected by AI-sexualized deepfakes, underscoring the scale of the problem.
  • Spain is pursuing the platforms both for crimes of digital sexual violence and for their potential liability as distributors of such content.
  • The investigation runs parallel to European Commission proceedings against the same platforms under the Digital Services Act (DSA) for transparency violations.
  • The California Attorney General simultaneously issued a cease-and-desist to xAI (developer of Grok) for similar conduct in the United States — suggesting coordinated global regulatory pressure on AI platforms.

Static Topic Bridges

Digital Services Act (EU): Platform Accountability for AI Harms

The EU's Digital Services Act (DSA, Regulation 2022/2065) is the world's most comprehensive legislative framework for holding online platforms accountable for illegal and harmful content. It classifies platforms by size — Very Large Online Platforms (VLOPs) with 45 million+ monthly EU users face the strictest obligations. DSA Article 34 requires VLOPs to conduct systemic risk assessments covering illegal content (including CSAM), threats to fundamental rights, and content harmful to minors. Article 35 requires mitigation measures; Article 40 grants regulators access to platform data; and penalties for violations can reach 6% of global annual turnover.

  • DSA (Regulation (EU) 2022/2065): Adopted October 2022; VLOP obligations applicable from February 2024.
  • Designated VLOPs: X, Meta (Facebook, Instagram), TikTok, YouTube, Amazon Store, Snapchat, Pinterest, Wikipedia, Zalando, AliExpress, and others; Google Search was designated as a Very Large Online Search Engine (VLOSE). 19 services were initially designated.
  • DSA Article 34: Systemic risk assessment — covers illegal content, fundamental rights threats, electoral risks, and protection of minors.
  • Enforcement: European Commission has exclusive jurisdiction over VLOPs; national Digital Services Coordinators handle smaller platforms.
  • Non-compliance fines: Up to 6% of global annual turnover; repeated violations can trigger temporary bans from EU market.
  • The European Commission opened formal DSA proceedings against X (December 2023), TikTok (February 2024), and Meta (April–May 2024).

Connection to this news: Spain's criminal prosecution of X, Meta, and TikTok operates alongside — not instead of — the EU Commission's DSA enforcement. The parallel tracks reflect how platform accountability for AI-generated CSAM can be pursued through criminal law (national) and administrative law (EU) simultaneously.


Child Protection Frameworks: UN CRC, OPSC, and Indian Law

International and domestic frameworks protecting children from online sexual exploitation include: (i) the UN Convention on the Rights of the Child (CRC, 1989) — ratified by 196 countries including India — which requires States to protect children from all forms of sexual exploitation; (ii) the Optional Protocol on the Sale of Children (OPSC, 2000) — which obliges signatory States to criminalize CSAM; and (iii) national legislation implementing these obligations. In India, the primary instruments are the POCSO Act, 2012 and the IT Act, 2000.

  • UN CRC (1989): Article 34 — States must protect children from sexual exploitation and abuse; Article 17 — States must ensure children have access to appropriate information and be protected from harmful material.
  • Optional Protocol on Sale of Children, Child Prostitution and Child Pornography (OPSC, 2000): India ratified in 2005; requires criminalization of CSAM production, distribution, and possession.
  • POCSO Act, 2012 (India): Section 13 — using a child for pornographic purposes is a cognizable, non-bailable offence; Section 14 — punishment for using a child for pornographic purposes: imprisonment of not less than 5 years (after the 2019 amendment); Section 15 — punishment for storage of CSAM for distribution.
  • IT Act, 2000, Section 67B (India): Publishing, transmitting, or browsing sexually explicit material depicting children online — up to 5 years imprisonment and ₹10 lakh fine on first conviction; up to 7 years on subsequent conviction.
  • POCSO Amendment (2019): Introduced aggravated offences and enhanced penalties; death penalty provided for aggravated penetrative sexual assault on children under 12.

Connection to this news: AI-generated CSAM (where no real child is directly photographed but a synthetic image is created) challenges existing legal definitions — some jurisdictions have expanded "CSAM" to include realistic synthetic images; India's POCSO and IT Act provisions need interpretation to clearly cover AI-generated content.


AI Ethics and Algorithmic Accountability: Governance Challenges

AI-generated CSAM illustrates a specific category of AI ethics failures: systems designed (or modified through "jailbreaking") to generate content that is illegal, harmful, or violates human dignity at industrial scale. The core governance challenge is that generative AI (Large Language Models and image-generation models) can produce such content on demand, faster than platforms can moderate it, and users can circumvent safeguards through prompting techniques. Effective governance requires a combination of: (i) platform obligations (risk assessment, content moderation, transparency); (ii) technical standards (model safety evaluations, mandatory safety filters); and (iii) criminal liability for developers who knowingly enable illegal use.

  • EU AI Act (2024): Classifies AI systems generating CSAM as "unacceptable risk" — absolutely prohibited (Article 5); systems generating synthetic CSAM are banned, not merely regulated.
  • NIST AI Risk Management Framework (US, 2023): Voluntary framework; built around the "Govern, Map, Measure, Manage" core functions — platforms are expected to map and mitigate risks including CSAM generation.
  • India's IndiaAI Safety Institute (2025): Tasked with developing AI safety standards; CSAM prevention would be among the highest-priority safety requirements.
  • Technical safeguards: State-of-the-art models use content filters, classifier models, and RLHF (Reinforcement Learning from Human Feedback) to prevent harmful generation — but these can be circumvented.
  • The Internet Watch Foundation (IWF) reported a 380% increase in AI-generated CSAM between 2023 and 2024.
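The layered-safeguard pattern described above (rule-based input filters backed by classifier models) can be sketched in a few lines. This is a minimal illustration only — the blocklist, threshold, and scoring stub are hypothetical placeholders; production systems use trained classifier models, not keyword matching.

```python
# Minimal sketch of a layered safety gate for a generative system.
# BLOCKED_TERMS and RISK_THRESHOLD are hypothetical placeholders.

BLOCKED_TERMS = {"example_blocked_term"}  # placeholder rule-based blocklist
RISK_THRESHOLD = 0.5                      # placeholder classifier cutoff

def risk_score(prompt: str) -> float:
    """Stub for a trained safety classifier: returns a risk score in [0, 1].
    Real systems return a model-predicted probability, not a keyword count."""
    hits = sum(term in prompt.lower() for term in BLOCKED_TERMS)
    return min(1.0, float(hits))

def safe_generate(prompt: str) -> str:
    # Layer 1: cheap rule-based input filter.
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "REFUSED: prompt matched input filter"
    # Layer 2: classifier on the prompt (a trained model in practice).
    if risk_score(prompt) >= RISK_THRESHOLD:
        return "REFUSED: classifier flagged prompt"
    # Layer 3 (not shown): output-side classification before release.
    return "GENERATED: " + prompt
```

The governance point the bullets make is visible even in this toy version: each layer can be bypassed individually (e.g., by rephrasing past the blocklist), which is why regulators increasingly treat safeguards as obligations to be audited rather than one-time design features.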

Connection to this news: The xAI/Grok "Spicy Mode" case — where a company's deliberate design choices enabled mass CSAM generation — represents the worst-case AI governance failure, and illustrates why both EU AI Act prohibitions and California-style enforcement actions are necessary.


India's Approach to Platform Regulation: IT Rules 2021

India regulates social media platforms primarily through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (amended in 2023). "Significant Social Media Intermediaries" (SSMIs) — platforms with 50 lakh+ registered users — must appoint a Grievance Officer, Nodal Contact Person, and Chief Compliance Officer (all residents of India); acknowledge grievance complaints within 24 hours and resolve within 15 days; take down illegal content within 36 hours of government order; and enable identification of the "first originator" of a message when required for criminal investigations.

  • IT Rules 2021, Rule 4: SSMI-specific obligations including monthly compliance reports to MeitY.
  • IT Rules 2021, Rule 3(1)(b): Intermediaries must not host specified categories of prohibited content — including child sexual abuse material.
  • Amendment (2023): Added obligation to label AI-generated content (deepfakes) and take down deepfakes within 36 hours of complaint.
  • Safe harbour (Section 79, IT Act): Intermediaries are not liable for third-party content if they comply with the Rules — failure to comply forfeits safe harbour protection.
  • MeitY deepfake advisories (2023–24): MeitY issued multiple advisories against platforms hosting deepfake content of public figures, triggering IT Rules compliance reviews.

Connection to this news: Spain and California's enforcement actions signal a global shift: platforms can no longer rely on "safe harbour" defences when they have designed features that foreseeably enable mass illegal content generation. India's IT Rules 2021 amendments (deepfake labeling, 36-hour takedown) reflect the same regulatory direction.


Key Facts & Data

  • Spain referred X, Meta, and TikTok to public prosecutors on February 17, 2026, for AI-generated CSAM.
  • One in five Spanish children report being affected by AI-sexualized deepfakes (government report, 2026).
  • EU DSA: Penalties up to 6% of global annual turnover; exclusive enforcement over VLOPs by European Commission.
  • California cease-and-desist to xAI (Grok): Issued January 16, 2026; Grok generated 3 million+ sexualized images in 11 days.
  • Internet Watch Foundation (IWF): 380% increase in AI-generated CSAM reported between 2023 and 2024.
  • EU AI Act (2024): AI systems generating CSAM classified as "unacceptable risk" — absolutely prohibited.
  • India's IT Act, Section 67B: Up to 5 years imprisonment (first conviction) for online CSAM offences; up to 7 years on subsequent conviction.
  • POCSO Act, 2012, Section 15: Punishment for storing CSAM for distribution.
  • India's IT Rules 2021 (amended 2023): Deepfake content must be labeled; platforms must remove deepfakes within 36 hours of complaint.
  • UN CRC (1989): India ratified; Article 34 obligates States to protect children from all forms of sexual exploitation.