What Happened
- During the June 2025 US-Iran conflict, AI-generated fake satellite imagery depicting military installations and strike damage circulated widely on social media as disinformation, amassing over 100 million views of false material.
- Iran's state-aligned media (Tehran Times) posted fabricated "before vs. after" satellite images purporting to show destroyed US radar equipment at a base in Qatar. The image was an AI-manipulated version of a Google Earth image of a US base in Bahrain, containing gibberish coordinates and an invisible SynthID watermark identifying it as AI-generated.
- The New York Times identified over 110 unique AI deepfakes conveying pro-Iran messages through battlefield images, missile strike depictions, and war footage.
- This conflict is being studied as the first large-scale, coordinated wartime deployment of generative AI deepfakes in cognitive warfare — a "laboratory study" in AI-powered disinformation at scale.
- Detection relies on tools like Google's SynthID (watermarking), OSINT (Open Source Intelligence) verification techniques, and reverse image analysis.
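The core idea behind invisible watermarking tools like SynthID — a mark imperceptible to the eye but recoverable algorithmically — can be illustrated with a deliberately simplified sketch. The toy scheme below hides bits in the least significant bit (LSB) of pixel values; SynthID's actual embedding is a proprietary deep-learning method, and all function names here are illustrative, not from any real API.

```python
import numpy as np

def embed_watermark(img: np.ndarray, bits: list) -> np.ndarray:
    """Hide a bit string in the least significant bit of the first len(bits) pixels."""
    out = img.copy()
    flat = out.ravel()               # view into the copy, so writes modify `out`
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b   # overwrite only the LSB: change <= 1/255
    return out

def extract_watermark(img: np.ndarray, n: int) -> list:
    """Read back the first n hidden bits."""
    return [int(p & 1) for p in img.ravel()[:n]]

# demo on a tiny synthetic 8x8 grayscale "image"
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(img, mark)

print(extract_watermark(stamped, 8))   # recovers the hidden bit string exactly
```

The point of the sketch: the stamped image is visually identical to the original (no pixel moves by more than one intensity level), yet a detector that knows where to look recovers the mark — which is why the Tehran Times image could be flagged algorithmically even though it looked plausible.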
Static Topic Bridges
Cognitive Warfare and Information Operations
Cognitive warfare refers to the deliberate manipulation of an adversary's perception, understanding, and decision-making through targeted information operations. It is distinct from conventional kinetic warfare and has become a critical dimension of modern conflict.
- Definition: Cognitive warfare aims to shape beliefs, erode trust, manufacture consent, and paralyze decision-making — without physical combat.
- Historical roots: Psychological operations (PSYOP) have been used since World War II; the digital age has dramatically scaled their reach and speed.
- Modern tools: Social media amplification, AI-generated content (deepfakes, fake images, synthetic voices), bot networks, coordinated inauthentic behaviour.
- Fog of War in the digital age: Clausewitz's concept of "fog of war" (uncertainty in battle) is amplified by AI disinformation — both sides, allied nations, and civilians struggle to distinguish reality from fabrication.
- India's relevance: India has experienced information operations along its borders (Pakistan-sponsored OSINT manipulation, China's coordinated social media campaigns) — particularly during the Galwan crisis (2020) and the 2023 Manipur violence.
Connection to this news: The US-Iran conflict demonstrates that generative AI has crossed the threshold from a hypothetical disinformation risk to an active weapon in contemporary warfare — making AI governance and media literacy strategic imperatives.
Deepfakes and Generative AI — Technical Dimensions
Generative AI creates synthetic content (images, video, audio, text) that is often indistinguishable from authentic content. The rapid democratisation of these tools has lowered the barrier to producing disinformation.
- Generative Adversarial Networks (GANs): Two neural networks (generator and discriminator) compete to produce and detect synthetic content — the basis of early deepfakes.
- Diffusion Models (2022 onwards): Current state of the art for image generation (Stable Diffusion, DALL-E, Midjourney) — they produce photorealistic images from text prompts; fabricated satellite imagery is a newer category of misuse.
- SynthID: Google DeepMind's invisible watermarking technology that embeds imperceptible signals in AI-generated images and audio — detectable algorithmically, not visually.
- OSINT (Open Source Intelligence): The use of publicly available information (satellite imagery, social media, geolocation data) to verify or debunk claims. Groups like Bellingcat pioneered OSINT verification; fake "OSINT accounts" now mimic credible investigators to spread disinformation.
- Detection challenge: AI-generated images are increasingly indistinguishable from authentic ones — watermarking and provenance standards (C2PA — Coalition for Content Provenance and Authenticity) are emerging as countermeasures.
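Reverse image analysis of the kind OSINT researchers use often starts with perceptual hashing: visually similar images hash to nearby bit strings even after recompression or mild editing, while unrelated images do not. A minimal sketch of an "average hash" on synthetic arrays — the sizes, noise level, and thresholds here are illustrative, not taken from any specific tool:

```python
import numpy as np

def average_hash(img: np.ndarray, size: int = 8) -> int:
    """Perceptual hash: downscale to size x size block means, threshold at the mean, pack bits."""
    h, w = img.shape
    small = (img[: h - h % size, : w - w % size]
             .reshape(size, h // size, size, w // size)
             .mean(axis=(1, 3)))                       # block-average downsample
    bits = (small > small.mean()).astype(int).ravel()  # 1 bit per block
    return int("".join(map(str, bits)), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(64, 64)).astype(float)
recompressed = original + rng.normal(0, 2, original.shape)  # mild noise ~ re-encoding
unrelated = rng.integers(0, 256, size=(64, 64)).astype(float)

print(hamming(average_hash(original), average_hash(recompressed)))  # small: near-duplicate
print(hamming(average_hash(original), average_hash(unrelated)))     # large: different image
```

This is why reposting a Google Earth frame with AI edits is detectable: the manipulated image still hashes close to the original source imagery, letting investigators trace it back (the Qatar "strike photo" traced to a Bahrain base image this way, per the source).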
Connection to this news: The fake satellite imagery in the US-Iran conflict exploited the credibility people associate with satellite data and OSINT — using AI to fabricate evidence that appears objective and verifiable.
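Provenance standards like C2PA work by binding claims (who captured an image, when) to the exact content bytes with cryptographic signatures, so any post-signing edit breaks verification. The sketch below conveys only the shape of the idea: real C2PA manifests use X.509 certificate chains and embedded JUMBF metadata, not the shared HMAC key used here, and every name and value below is hypothetical.

```python
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"   # stand-in; real C2PA uses certificate-based signatures

def make_manifest(image_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims to the exact image bytes via a keyed signature."""
    payload = {"sha256": hashlib.sha256(image_bytes).hexdigest(), **claims}
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()}

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check both that the pixels are unchanged and that the claims are unforged."""
    payload = dict(manifest["payload"])
    if payload["sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False   # image was altered after signing
    body = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(manifest["sig"], expected)

img = b"\x89PNG...original satellite frame"   # stand-in bytes, not a real file
m = make_manifest(img, {"source": "Example Sat Co", "captured": "2025-06-22"})

print(verify(img, m))               # True: untouched since signing
print(verify(img + b"x", m))        # False: any byte change breaks the binding
```

Under such a scheme, an AI-manipulated "before/after" image would simply carry no valid manifest from any credible satellite operator — absence of provenance becomes a signal in itself.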
India's Regulatory and Strategic Response to Disinformation
India has been developing its institutional and legal response to online disinformation, though gaps remain in addressing AI-generated synthetic media specifically.
- IT Rules 2021 (Rule 3(1)(b)(v)): Significant Social Media Intermediaries (SSMIs) must not host content that "deceives or misleads" users about its origin or nature.
- IT Amendment Rules 2023: Proposed a government-run Fact Check Unit (FCU) to flag "fake or false or misleading" content about government activities — struck down by the Bombay High Court (September 2024) as violative of Article 19(1)(a).
- PIB Fact Check Unit: Ministry of Information & Broadcasting operates a fact-checking arm; not empowered to compel takedowns independently.
- Deepfakes Guidelines (Nov 2023): MeitY issued advisory to social media platforms to detect and remove deepfakes; platforms warned of criminal liability under Section 66D (impersonation) and Section 66E (privacy violation) of IT Act 2000.
- Election Commission of India: Issued Model Code of Conduct guidelines covering social media and disinformation ahead of elections; increasing use of AI-generated political content in campaigns.
- National Cyber Security Policy 2013: Provides the overarching framework; a revised policy is under development.
Connection to this news: India's current legal framework is not specifically equipped to handle AI-generated synthetic satellite imagery used in conflict disinformation — the gap highlights why the upcoming Digital India Act (replacing IT Act 2000) needs explicit deepfake and synthetic media provisions.
Satellite Imagery as Intelligence — GEOINT and its Dual-Use Nature
Satellite imagery has transitioned from an exclusive state intelligence asset to a commercially available resource. This dual-use nature creates new risks when AI can fabricate imagery that mimics credible sources.
- GEOINT (Geospatial Intelligence): The exploitation of satellite and aerial imagery for national security purposes — traditionally the domain of agencies such as the NGA and NRO (US) and NTRO (India), drawing on ISRO-built satellites.
- Commercial satellite operators: Planet Labs, Maxar Technologies, Airbus Defence — provide near-daily imagery of any location globally, now accessible to journalists, NGOs, and the public.
- Resolution and capabilities: Commercial imagery now achieves 30 cm resolution — sufficient to identify individual vehicles and infrastructure changes.
- India's reconnaissance satellites: RISAT (Radar Imaging Satellite) series, Cartosat series — used for both civilian mapping and strategic surveillance.
- Strategic lesson from Kargil (1999): India was denied commercial satellite imagery by the US during the conflict; today India has independent capability through ISRO and commercial partnerships.
Connection to this news: The weaponisation of fake satellite imagery exploits the institutional trust built around GEOINT — making verification skills and watermarking standards as strategically important as the imagery itself.
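The "30 cm resolution" figure above translates directly into detectability: ground sample distance (GSD) is metres of ground per pixel, so an object's size divided by the GSD gives roughly how many pixels it spans. A quick sanity check (the object sizes chosen here are illustrative):

```python
def pixels_across(object_m: float, gsd_m: float) -> float:
    """How many image pixels an object spans at a given ground sample distance (GSD)."""
    return object_m / gsd_m

# a ~4.5 m car at 30 cm commercial resolution vs. a 10 m-class public sensor
print(pixels_across(4.5, 0.30))   # ~15 pixels: individually identifiable
print(pixels_across(4.5, 10.0))   # under half a pixel: effectively invisible
```

This arithmetic is also a verification lever: a purported "strike damage" image showing detail finer than the claimed sensor's GSD could possibly resolve is itself evidence of fabrication.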
Key Facts & Data
- Conflict context: US-Iran conflict, June 2025 onwards (first large-scale wartime generative AI disinformation campaign)
- Fake imagery identified: 110+ unique AI-generated deepfakes (pro-Iran content) identified by New York Times
- Total reach: 100+ million views of false material on social media
- Example: Tehran Times posted fake "before/after" of US Qatar base — actually AI-manipulated Bahrain base image with gibberish coordinates and SynthID watermark
- Detection tool: SynthID (Google DeepMind) — invisible watermark in AI-generated images
- India's deepfakes advisory: MeitY, November 2023 (IT Act Sections 66D, 66E applicable)
- IT Rules 2021 Amendment Fact Check Unit: Struck down by Bombay HC, September 2024
- India's satellite assets: RISAT series (SAR), Cartosat series (optical) under ISRO
- C2PA: Coalition for Content Provenance and Authenticity (industry standard for verifiable media provenance)
- Cognitive warfare relevance: NATO began formalising cognitive warfare in its concept-development work (2021); Indian security doctrine increasingly treats information warfare as a core domain