India orders social media platforms to take down deepfakes faster
Source: TechCrunch
India has ordered social media platforms to step up policing of deepfakes and other AI‑generated impersonations, while sharply shortening the time they have to comply with takedown orders. It’s a move that could reshape how global tech firms moderate content in one of the world’s largest and fastest‑growing markets for internet services.
The changes, published (PDF) on Tuesday as amendments to India’s 2021 IT Rules, bring deepfakes under a formal regulatory framework, mandating the labelling and traceability of synthetic audio and visual content, while also slashing compliance timelines for platforms—including a three‑hour deadline for official takedown orders and a two‑hour window for certain urgent user complaints.
Why It Matters
- Market size – Over a billion internet users, predominantly young, make India a critical market for platforms like Meta and YouTube.
- Global ripple effect – Compliance measures adopted in India are likely to influence product and moderation practices worldwide.
Key Provisions
- Disclosure & Labelling – Platforms that allow users to upload or share audio‑visual content must require disclosures on whether material is synthetically generated, deploy tools to verify those claims, and ensure deepfakes are clearly labelled with traceable provenance data.
- Prohibited Content – Certain categories of synthetic content are barred outright, including:
  - Deceptive impersonations
  - Non‑consensual intimate imagery
  - Material linked to serious crimes
- Liability – Non‑compliance—especially when flagged by authorities or users—can jeopardise a company’s safe‑harbour protections under Indian law.
- Automation Requirement – Platforms are expected to use technical tools to verify disclosures, identify and label deepfakes, and prevent the creation or sharing of prohibited synthetic content.
“The amended IT Rules mark a more calibrated approach to regulating AI‑generated deepfakes,” said Rohit Kumar, founding partner at New Delhi‑based policy consulting firm The Quantum Hub.
“The significantly compressed grievance timelines — such as the two‑ to three‑hour takedown windows — will materially raise compliance burdens and merit close scrutiny, particularly given that non‑compliance is linked to the loss of safe harbour protections.”
Reactions
Legal Perspective
“The law, however, continues to require intermediaries to remove content upon being aware or receiving actual knowledge, that too within three hours,” said Aprajita Rana, partner at AZB & Partners. “The labelling requirements would apply across formats to curb the spread of child sexual abuse material and deceptive content.”
Civil‑Society Concerns
The New Delhi‑based digital advocacy group Internet Freedom Foundation said the rules risk accelerating censorship by drastically compressing takedown timelines, leaving little scope for human review and pushing platforms toward automated over‑removal. In a statement posted on X, the group also raised concerns about:
- Expansion of prohibited content categories
- Provisions allowing platforms to disclose user identities to private complainants without judicial oversight
“These impossibly short timelines eliminate any meaningful human review,” the group warned, adding that the changes could undermine free‑speech protections and due process.
Industry Insight
Two industry sources told TechCrunch that the amendments followed a limited consultation process, with only a narrow set of suggestions reflected in the final rules. While the government narrowed the scope to AI‑generated audio‑visual content, many other recommendations were not adopted. The sources argued that the scale of changes warranted another round of consultation to give companies clearer guidance on compliance expectations.
Historical Context
Government takedown powers have already been a point of contention in India. Social media platforms and civil‑society groups have long criticized the breadth and opacity of content removal orders.
- Elon Musk’s X challenged New Delhi in court over directives to block or remove posts, arguing that they amounted to overreach and lacked adequate safeguards.
Companies’ Stance
Meta, Google, Snap, X, and the Indian IT ministry did not respond to requests for comment.
The latest changes come just months after the Indian government, in October 2025, reduced the number of officials authorized to order content removals from the internet, in response to a [legal challenge by X](https://techcrunch.com/2025/09/29/x-says-will-fight-indian-court-ruling-on-content-takedown-system/) over the scope and transparency of takedown powers.
The amended rules will come into effect on February 20, giving platforms little time to adjust compliance systems. The rollout coincides with India’s hosting of the [AI Impact Summit in New Delhi](https://impact.indiaai.gov.in/) from February 16 to 20, which is expected to draw [senior global technology executives](https://techcrunch.com/2026/01/23/openai-chief-sam-altman-plans-india-visit-as-ai-leaders-converge-in-new-delhi-sources/) and policymakers to the country.