What Happened
- A high-profile deepfake attack in India demonstrated how AI-generated video and voice impersonation during live video calls nearly defrauded multiple corporate targets, with investigators noting "many people could have been cheated."
- Cybercriminals used publicly available videos, social media content, and voice recordings to construct real-time deepfakes of senior corporate executives, then deployed them during video conference calls to pressure staff into approving fraudulent financial transactions.
- The attack follows a pattern seen globally: India reported cyber fraud losses of Rs 22,000 crore in 2024, and AI-generated impersonation scams have surged 148% year-on-year.
- The Union Government notified the IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules 2026 in February 2026, specifically targeting deepfakes with a 3-hour takedown mandate and mandatory AI content labelling.
Static Topic Bridges
Legal Framework for Deepfake Fraud in India
India currently addresses deepfake-related fraud through a patchwork of existing IT Act provisions and IPC sections, without a standalone deepfake law. The primary provision is Section 66D of the IT Act 2000, which penalises "cheating by personation by using computer resource" — directly applicable to deepfake impersonation fraud.
- IT Act Section 66D: Cheating by personation using computer resources — punishable with imprisonment up to 3 years and fine up to ₹1 lakh.
- IT Act Section 66C: Identity theft using electronic signature, password, or biometric — punishable with imprisonment up to 3 years and fine up to ₹1 lakh.
- IT Act Section 67: Publishing obscene material in electronic form — relevant when deepfakes involve non-consensual intimate imagery.
- IPC Section 419 (cheating by personation; now covered by BNS Section 319): applicable alongside IT Act provisions.
- IT (Intermediary Guidelines) Amendment Rules 2026: 3-hour window for takedown of deepfakes on social media platforms; mandatory labelling of AI-generated content.
Connection to this news: The deepfake attack in question likely falls under IT Act Section 66D — the victim was cheated via personation of a known executive using computer-generated imagery during a live call.
CERT-In and India's Cybersecurity Architecture
The Indian Computer Emergency Response Team (CERT-In), under the Ministry of Electronics and Information Technology (MeitY), is India's national nodal agency for cybersecurity. It was established under Section 70B of the IT Act 2000. CERT-In issues advisories, coordinates incident response, and maintains the National Cyber Coordination Centre (NCCC) for real-time threat monitoring.
- CERT-In was established under IT Act Section 70B; it has powers to direct organisations to report cybersecurity incidents within 6 hours (amended rules, 2022).
- The 6-hour mandatory incident reporting rule (April 2022) applies to service providers, data centres, government entities, and VPN providers.
- National Cyber Security Policy 2013: India's foundational cybersecurity policy framework — articulates goals of securing cyberspace, protecting critical infrastructure, and building a cybersecurity workforce.
- NCCC (National Cyber Coordination Centre): operates 24x7 for threat intelligence sharing across government departments.
- Data Protection aspect: The Digital Personal Data Protection (DPDP) Act 2023 — notified in August 2023 — will add a layer of accountability for misuse of personal biometric data used in deepfakes.
Connection to this news: Deepfake attacks that use stolen biometric data (face, voice) constitute both a cybersecurity incident (CERT-In domain) and a potential data protection violation (DPDP Act domain), placing them at the intersection of India's evolving digital governance framework.
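The 6-hour reporting window can be made concrete with a small sketch. This is an illustration only, not an official CERT-In tool; the detection timestamp is hypothetical:

```python
from datetime import datetime, timedelta, timezone

# CERT-In directions of April 2022: incidents must be reported within
# 6 hours of being noticed.
REPORTING_WINDOW = timedelta(hours=6)

def reporting_deadline(detected_at: datetime) -> datetime:
    """Latest time an incident may be reported, counted from detection."""
    return detected_at + REPORTING_WINDOW

# Hypothetical detection time in IST (UTC+5:30).
ist = timezone(timedelta(hours=5, minutes=30))
detected = datetime(2026, 2, 15, 10, 0, tzinfo=ist)
deadline = reporting_deadline(detected)
print(deadline.isoformat())  # 2026-02-15T16:00:00+05:30
```

In practice the clock starts when the incident is *noticed*, not when it occurred, which is why logging the detection timestamp itself matters.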
Deepfake Technology: How It Works and Why It Matters
Deepfakes use Generative Adversarial Networks (GANs) or diffusion model-based AI to synthesise hyper-realistic video and audio of real individuals. Real-time deepfake tools — enabling live manipulation during video calls — represent a qualitative escalation from pre-recorded synthetic media. These tools are increasingly accessible via commercial "Deepfake-as-a-Service" platforms.
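The adversarial idea behind GANs can be shown with a deliberately tiny, pure-Python sketch: a one-parameter "generator" learns to imitate a one-dimensional "real" distribution because a logistic "discriminator" keeps penalising samples that look fake. All names and hyperparameters here are illustrative, not from any production system:

```python
import math
import random

random.seed(0)

REAL_MEAN = 4.0   # the "real data" the generator must learn to imitate
NOISE_STD = 0.5
BATCH = 16

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Discriminator: D(x) = sigmoid(a*x + b), a tiny logistic classifier.
a, b = 0.1, 0.0
# Generator: produces samples g + noise; g is its only parameter.
g = 0.0

lr_d, lr_g = 0.05, 0.05
history = []

for step in range(5000):
    reals = [random.gauss(REAL_MEAN, NOISE_STD) for _ in range(BATCH)]
    fakes = [g + random.gauss(0.0, NOISE_STD) for _ in range(BATCH)]

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    grad_a = grad_b = 0.0
    for x in reals:
        s = sigmoid(a * x + b)
        grad_a += (1 - s) * x
        grad_b += (1 - s)
    for x in fakes:
        s = sigmoid(a * x + b)
        grad_a -= s * x
        grad_b -= s
    a += lr_d * grad_a / (2 * BATCH)
    b += lr_d * grad_b / (2 * BATCH)

    # Generator ascent on log D(fake): shift g until fakes score as real.
    grad_g = 0.0
    for x in fakes:
        s = sigmoid(a * x + b)
        grad_g += (1 - s) * a
    g += lr_g * grad_g / BATCH
    history.append(g)

settled = sum(history[-500:]) / 500
print(f"generator settled near {settled:.2f} (target {REAL_MEAN})")
```

The two networks in a real deepfake GAN are deep convolutional models rather than single scalars, but the loop is the same: each side's improvement is the other side's training signal.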
- GANs consist of two neural networks: a generator (creates fake content) and a discriminator (evaluates realism) — they improve iteratively against each other.
- Real-time deepfake tools can now overlay a different face and voice onto a live video feed with minimal latency using a consumer GPU.
- No AI model can currently reliably detect deepfakes in real-time — detection lags behind generation capability.
- AI-powered deepfakes were involved in over 30% of high-impact corporate impersonation attacks globally in 2025.
- Typical fraud script: urgent email from "boss" → video call to reinforce trust → instruction to transfer funds or change bank details.
Connection to this news: The attack described in the article exploited real-time deepfake capability during a video call, a threat vector that existing verification practices (visual confirmation, recognising a familiar voice) cannot reliably counter.
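One commonly suggested countermeasure is out-of-band verification that depends on nothing seen or heard on the call. A minimal challenge-response sketch using Python's standard `hmac` module is below; the secret and function names are hypothetical, and the pre-shared key is assumed to have been exchanged in person beforehand:

```python
import hashlib
import hmac
import secrets

# Hypothetical pre-shared secret, exchanged in person during onboarding,
# never spoken aloud or typed into the call itself.
SHARED_SECRET = b"exchanged-in-person-not-over-the-call"

def make_challenge() -> str:
    """Random nonce the finance officer reads out on the call."""
    return secrets.token_hex(8)

def expected_response(challenge: str) -> str:
    """Code only a device holding the shared secret can compute."""
    digest = hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]

def verify(challenge: str, response: str) -> bool:
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected_response(challenge), response)

challenge = make_challenge()
genuine = expected_response(challenge)  # computed on the executive's own device
print(verify(challenge, genuine))       # True
print(verify(challenge, "00000000"))    # almost certainly False
```

A deepfake can clone a face and a voice, but it cannot answer a fresh challenge without the secret, which is why challenge-response survives this attack class where "I can see it's him" does not.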
Key Facts & Data
- India cyber fraud losses in 2024: Rs 22,000 crore across the country.
- AI-generated impersonation scam surge: 148% year-on-year (2024-25).
- IT (Intermediary Guidelines) Amendment Rules 2026: 3-hour deepfake takedown mandate, notified February 10, 2026.
- IT Act Section 66D penalty: imprisonment up to 3 years + fine up to ₹1 lakh.
- CERT-In mandatory incident reporting window: 6 hours from detection (amended rules, April 2022).
- DPDP Act 2023: India's data protection law — provides framework for biometric data misuse accountability.
- Deepfake-as-a-Service platforms: now commercially available; some operating from outside India's legal jurisdiction.
- National Cyber Security Policy 2013: India's foundational policy document — currently under revision toward a 2025 update.