Current Affairs Topics Archive

AI fakes about Iran-U.S. war swirl on X despite policy crackdown


What Happened

  • Amid the ongoing Iran-US military conflict, AI-generated fake images and videos are circulating widely on X (formerly Twitter), depicting fabricated events such as American soldiers captured by Iran, an Israeli city in ruins, and US embassies ablaze — none of which occurred.
  • These synthetic media pieces are accumulating tens of millions of views, with users frequently unable to distinguish AI fabrications from real footage; one AI-generated video of Iranian ballistic missiles striking central Tel Aviv was widely shared before being debunked.
  • X's policy response has been limited: users who post AI war-related content without a disclosure label face a 90-day suspension from the Creator Revenue Sharing (monetisation) programme, with repeat violations leading to permanent suspension.
  • A critical structural flaw undermines the policy: the vast majority of accounts spreading AI-generated content are not enrolled in X's revenue sharing programme, so the demonetisation penalty has no deterrent effect on them.
  • X's alternative mechanism — Community Notes (crowd-sourced fact-checking) — has been repeatedly questioned by researchers for its speed, coverage, and susceptibility to manipulation by coordinated networks.

Static Topic Bridges

Information Warfare and the AI Deepfake Threat

Information warfare — the strategic use of information to influence perceptions, decision-making, and behaviour — has evolved dramatically with AI. Generative AI tools can now produce photorealistic fake images and videos at near-zero cost, at scale, enabling state and non-state actors to flood information environments with fabricated content during crises.

  • AI-generated fakes in conflict zones serve multiple strategic purposes: degrading public trust in real information, stoking domestic political opposition in adversary states, inflaming international public opinion, and causing confusion in military decision-making
  • The Iran-US conflict in 2026 represents what researchers describe as the first major military conflict where AI-generated fakes have "dwarfed anything seen in previous conflicts" in volume and sophistication
  • Earlier cases include AI-generated fake images of the Russia-Ukraine conflict (2022–2024) and fake videos during the 2024 Bangladesh political crisis
  • Key AI tools used for such content: text-to-image diffusion models (Midjourney, DALL-E, Stable Diffusion), AI video synthesis tools, and voice cloning
  • The OECD's AI Incident Database has catalogued over 200 AI-generated political and conflict misinformation incidents since 2022

Connection to this news: The Iran-US war scenario demonstrates that existing platform policies — designed primarily for commercial misuse of AI — are structurally inadequate to counter weaponised AI misinformation at the scale and speed of modern conflict.

Platform Governance and the Limits of Self-Regulation

Social media platforms are primarily governed by their own terms of service, supplemented by national laws (IT Act in India, GDPR + DSA in Europe, Section 230 in the US). The Iran-US conflict deepfake wave illustrates how self-regulatory mechanisms — demonetisation, Community Notes — fail when adversarial actors have no commercial interest in the platform.

  • X's Community Notes (formerly Birdwatch) is a crowd-sourced fact-checking system where contributors add context notes to misleading tweets; a note becomes visible only when contributors with diverse viewpoints agree — this consensus requirement slows correction speed significantly
  • In contrast, Meta has third-party fact-checkers and automated AI detection tools (though these too have limitations)
  • EU's Digital Services Act (DSA) requires very large online platforms to conduct risk assessments for systemic risks, including information manipulation, and implement mitigation measures — X is subject to DSA as a designated Very Large Online Platform (VLOP)
  • India's IT (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 require significant social media intermediaries to appoint a Grievance Officer, Chief Compliance Officer, and Nodal Contact Person for government coordination; there is no specific provision for AI-generated synthetic media
  • India's proposed Digital India Act is expected to address algorithmic amplification and deepfake accountability — but remains pending

Connection to this news: The failure of X's demonetisation policy to deter non-monetised accounts spreading AI war fakes illustrates why self-regulation alone is insufficient. The gap between commercial incentive structures and information security needs requires regulatory intervention.
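The "diverse viewpoints must agree" rule behind Community Notes is a bridging-based ranking algorithm. The toy sketch below illustrates only the core idea — a note surfaces only when raters from different viewpoint clusters independently find it helpful; the cluster labels, threshold, and function names are hypothetical simplifications, not X's actual scoring model.

```python
# Toy sketch of "bridging" consensus: a note becomes visible only when
# raters from *different* viewpoint clusters both rate it helpful.
# Cluster labels and the 0.8 threshold are hypothetical simplifications.

def note_visible(ratings, threshold=0.8):
    """ratings: list of (viewpoint_cluster, helpful: bool) tuples."""
    clusters = {}
    for cluster, helpful in ratings:
        clusters.setdefault(cluster, []).append(helpful)
    if len(clusters) < 2:          # no cross-viewpoint agreement possible
        return False
    # Every represented cluster must independently rate the note helpful.
    return all(sum(votes) / len(votes) >= threshold
               for votes in clusters.values())

# One-sided praise from a single cluster is not enough...
print(note_visible([("A", True), ("A", True), ("A", True)]))   # False
# ...but agreement across clusters is.
print(note_visible([("A", True), ("A", True), ("B", True)]))   # True
```

The consensus requirement this models is also why correction is slow: a note stays hidden until enough raters from opposing clusters have weighed in, which during a fast-moving conflict can take hours or days.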

AI Governance and Synthetic Media Regulation

The governance of AI-generated synthetic media (deepfakes) is an emerging global challenge. Regulatory approaches range from mandatory disclosure (labelling AI content) to outright prohibition in specific contexts (electoral deepfakes, CSAM) to broader platform liability frameworks.

  • China was among the first countries to regulate deepfakes: the Provisions on the Management of Deep Synthesis Internet Information Services (effective January 2023) require labelling of AI-generated content and prohibit deepfakes that endanger national security or damage reputations
  • The EU AI Act (2024) imposes transparency obligations on providers of general-purpose and generative AI systems, requiring AI-generated content to be marked in a machine-readable format (e.g., through watermarking or provenance metadata)
  • C2PA (Coalition for Content Provenance and Authenticity) has developed open technical standards for content credentials — metadata attached to images/videos that records their origin and any AI involvement
  • India's MeitY released an advisory in 2024 requiring AI platforms operating in India to label AI-generated content and prohibiting tools that could be used to create deepfakes targeting women or electoral processes — enforceable under IT Act provisions, but narrowly targeted
  • UNESCO's Recommendation on the Ethics of AI (2021) — adopted by 193 member states including India — calls for transparency, accountability, and non-maleficence as core principles for AI systems
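The C2PA content-credential idea listed above — provenance metadata cryptographically bound to an asset so tampering is detectable — can be sketched minimally as follows. This is not the real C2PA format (which uses JUMBF containers and X.509 certificate chains, not a shared HMAC key); the key, manifest fields, and function names here are illustrative assumptions only.

```python
# Minimal illustration of the content-credential idea behind C2PA:
# a manifest recording an asset's origin (including any AI involvement)
# is cryptographically bound to the asset, so tampering with either the
# bytes or the manifest is detectable on receipt.
# Real C2PA uses JUMBF containers and X.509 signatures; the shared
# HMAC key and manifest fields below are hypothetical simplifications.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a publisher's signing key

def attach_credentials(asset: bytes, manifest: dict) -> dict:
    """Bind a provenance manifest to the asset's hash and sign it."""
    manifest = dict(manifest, asset_sha256=hashlib.sha256(asset).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest,
            "signature": hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()}

def verify_credentials(asset: bytes, cred: dict) -> bool:
    """Check both the manifest signature and the asset hash."""
    payload = json.dumps(cred["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, cred["signature"])
            and cred["manifest"]["asset_sha256"]
                == hashlib.sha256(asset).hexdigest())

image = b"...image bytes..."
cred = attach_credentials(image, {"generator": "ai-model", "ai_generated": True})
print(verify_credentials(image, cred))              # True: intact
print(verify_credentials(b"tampered bytes", cred))  # False: hash mismatch
```

The design point is that provenance travels with the content itself rather than depending on the goodwill of whoever reposts it — which is precisely what platform-level labelling policies like X's cannot guarantee.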

Connection to this news: The Iran-US conflict deepfake wave accelerates pressure on governments worldwide to move beyond advisory guidelines toward binding obligations for AI content labelling, detection, and platform liability — India's pending Digital India Act is the key domestic vehicle for this.

Key Facts & Data

  • AI-generated fakes about Iran-US war: tens of millions of views on X in days
  • X's policy: 90-day demonetisation suspension for unlabelled AI war content; permanent suspension for repeat violations
  • Policy gap: majority of accounts spreading AI fakes are not enrolled in X's Creator Revenue Sharing program
  • Community Notes: crowd-sourced fact-check system on X — consensus-based, slow correction speed
  • EU DSA: X classified as a Very Large Online Platform (VLOP) — subject to systemic risk assessment obligations
  • China deepfake regulation: Provisions on Deep Synthesis Information Services (effective January 2023)
  • C2PA: technical standard for AI content provenance — supported by Adobe, Microsoft, Google, Intel
  • UNESCO AI Ethics Recommendation (2021): adopted by 193 states including India
  • India: IT Rules 2021 contain no specific synthetic-media provision; MeitY's 2024 AI labelling advisory is not binding law — Digital India Act pending
  • OECD AI Incident Database: 200+ AI-generated political misinformation incidents catalogued since 2022