What Happened
- California Attorney General Rob Bonta announced the creation of an AI oversight, accountability, and regulation program within his office — one of the first such formal government AI oversight units at the sub-national level globally.
- Simultaneously, the office pressed its investigation into Elon Musk's AI company xAI, which develops the Grok chatbot, over the generation of non-consensual sexually explicit deepfakes of adults and children.
- A cease-and-desist letter was issued to xAI on January 16, 2026, after findings that Grok's "Spicy Mode" feature had been used to generate more than 3 million sexualized images over an 11-day window (December 2025–January 2026).
- The California move reflects a broader trend of sub-national governments filling the regulatory vacuum left by delayed federal AI legislation in the United States.
- The development has implications for India's own AI regulatory approach, which is evolving through guidelines-based frameworks rather than legislation.
Static Topic Bridges
AI Governance: Global Regulatory Approaches
Globally, AI regulation has taken three broad approaches: (i) horizontal regulation (EU AI Act — risk-based, comprehensive); (ii) sector-specific regulation (financial sector AI guidelines by central banks and market regulators); and (iii) soft-law approaches (guidelines, principles, voluntary frameworks — India's current model). The EU AI Act (2024) is the world's first comprehensive horizontal AI legislation, classifying AI systems by risk (unacceptable, high, limited, minimal) and imposing obligations accordingly. The US lacks a federal AI law as of 2026, creating a patchwork of state-level initiatives — California being the most active.
- EU AI Act (Regulation 2024/1689): Adopted June 2024; entered into force August 2024; phased implementation — prohibitions applied from February 2025, high-risk rules from August 2026.
- Unacceptable risk AI systems (banned under EU AI Act): Social scoring, real-time remote biometric identification in publicly accessible spaces (for law enforcement, subject to narrow exceptions), emotion recognition in workplaces/schools, AI that exploits vulnerable persons.
- High-risk systems: Critical infrastructure, biometrics, employment screening — subject to conformity assessments.
- California AI Transparency Act (SB 942, 2024): Requires AI-generated content to be labeled; applies to large AI providers.
- No federal US AI Act as of early 2026; the principal federal executive order on AI (EO 14110, 2023) was revoked in January 2025.
Connection to this news: California's dedicated AI oversight unit, combined with the xAI investigation, represents a model of proactive enforcement — using existing consumer protection and civil rights laws to regulate AI in the absence of a dedicated AI statute.
India's AI Governance Framework
India does not have dedicated AI legislation as of 2026. Its regulatory approach is built around soft-law instruments — guidelines, policies, and sector-specific directions. Key pillars include: (i) NITI Aayog's National Strategy for Artificial Intelligence (2018); (ii) NITI Aayog's Responsible AI for All approach papers (2021); (iii) the IndiaAI Mission (2024, ₹10,371.92 crore outlay); and (iv) the IndiaAI Safety Institute (announced January 2025). India's stated philosophy is "innovation-first, regulation-second" — preferring to foster AI adoption before introducing binding regulation.
- NITI Aayog's National Strategy for AI (June 2018): First comprehensive government AI policy; identified five focus sectors — healthcare, agriculture, education, smart cities/infrastructure, smart mobility.
- NITI Aayog's Responsible AI principles (2021): Seven principles — safety and reliability, equality, inclusivity and non-discrimination, accountability, privacy and security, transparency, protection and reinforcement of positive human values.
- IndiaAI Mission (March 2024): ₹10,371.92 crore approved; components — IndiaAI Compute Capacity (10,000+ GPU cluster), IndiaAI Innovation Centre, IndiaAI Datasets Platform, IndiaAI Application Development Initiative, IndiaAI Skilling Program.
- IndiaAI Safety Institute (January 2025): To establish AI safety standards; modeled partly on the UK AI Safety Institute (AISI, 2023).
- India's approach: Sector-specific regulators (RBI, SEBI, IRDAI, TRAI) are expected to handle AI in their domains; no overarching AI law proposed yet.
Connection to this news: California's experience — where the absence of federal law forced sub-national action — offers India a cautionary tale: without a clear national AI regulatory framework, enforcement gaps may emerge, and AI harms (such as deepfake CSAM) can proliferate before regulators respond.
Deepfakes and AI-Generated Harmful Content: Legal and Regulatory Dimensions
Deepfakes — AI-generated hyper-realistic fake audio, video, or images — pose acute challenges for legal systems designed for human-generated content. The harm is most acute in three domains: (i) non-consensual intimate imagery (NCII) and deepfake pornography; (ii) political disinformation (fake speeches, fabricated events); and (iii) financial fraud (voice cloning for scams). In India, AI-generated CSAM (Child Sexual Abuse Material) would attract liability under the Protection of Children from Sexual Offences Act (POCSO), 2012 and the Information Technology Act, 2000 (Section 67B — punishment for sexually explicit content involving children online).
- IT Act, 2000, Section 67B: Publishing or transmitting sexually explicit material depicting children online is punishable with imprisonment up to 5 years and fine (first conviction); up to 7 years (subsequent conviction).
- POCSO Act, 2012: Section 13 — use of a child for pornographic purposes is a cognizable offence; Section 14 (as amended in 2019) — punishment of not less than 5 years (first conviction).
- IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (amended 2023): Significant social media intermediaries (50 lakh+ registered users) must enable identification of the first originator of messages, take down content notified by government or court order within 36 hours, and remove content depicting nudity, sexual activity, or morphed imagery of a person within 24 hours of a complaint.
- Synthetic media disclosure: A MeitY advisory (March 2024) directed platforms to label AI-generated synthetic content, including deepfakes.
Connection to this news: The xAI/Grok case illustrates the enforcement challenge: AI-generated CSAM is created programmatically at scale, overwhelms traditional content moderation, and crosses national boundaries — requiring both platform accountability (DSA model) and state-level enforcement tools.
Digital Services Act (EU) and Platform Accountability
The EU's Digital Services Act (DSA, Regulation 2022/2065) — applicable since February 2024 — is the most comprehensive platform accountability framework globally. It imposes obligations on online platforms proportional to their size: Very Large Online Platforms (VLOPs) with 45 million+ monthly active users in the EU must conduct annual risk assessments, implement mitigation measures, allow independent audits, and share data with researchers. Failure to comply can result in fines up to 6% of global annual turnover.
- DSA (Regulation (EU) 2022/2065): Applicable from late August 2023 for designated VLOPs; from February 17, 2024 for all other providers.
- VLOPs designated: X (formerly Twitter), Facebook and Instagram (Meta), TikTok, YouTube, Google Search, Amazon Store, Wikipedia, among others.
- DSA Article 34: VLOPs must identify and assess systemic risks — including illegal content, fundamental rights impacts, and harmful content affecting minors.
- Enforcement: European Commission has exclusive enforcement authority over VLOPs; national authorities handle smaller platforms.
- Penalties: Up to 6% of global annual turnover for violations; up to 1% for providing incorrect information.
- The European Commission opened DSA investigations into X over AI deepfakes and into Meta and TikTok over transparency violations in 2024–2025.
Connection to this news: Spain's referral of X, Meta, and TikTok to prosecutors for AI-generated CSAM operates parallel to the European Commission's DSA enforcement — illustrating how platform liability can be pursued simultaneously through criminal law (national) and regulatory law (EU DSA).
Key Facts & Data
- California AG issued cease-and-desist to xAI (Grok) on January 16, 2026, over non-consensual AI-generated sexual images.
- Center for Countering Digital Hate (CCDH) audit: Grok generated over 3 million sexualized images in 11 days (December 2025–January 2026).
- IndiaAI Mission approved by Cabinet: March 2024; outlay ₹10,371.92 crore.
- IndiaAI Safety Institute announced: January 2025.
- EU AI Act: World's first comprehensive horizontal AI regulation; adopted June 2024.
- DSA (EU): Applicable February 2024; penalties up to 6% of global turnover.
- India's IT Act, Section 67B: Up to 5 years imprisonment for online CSAM (first conviction); up to 7 years on subsequent conviction.
- POCSO Act, 2012, Section 14 (as amended 2019): Imprisonment of not less than 5 years for using children for pornographic purposes.
- India's IT Rules 2021 (amended 2023): 36-hour takedown window for prohibited content notified by government.