India has implemented new regulations requiring social media and online platforms to remove artificially generated, or “deepfake,” content within a matter of hours. The rules, which came into effect on February 20, significantly tighten oversight of synthetic media and shorten the legal window for its takedown.
Stricter Timelines for Content Moderation
The updated guidelines, issued by the Ministry of Electronics and Information Technology, mandate that platforms act on user complaints concerning deepfakes within 24 hours. More critically, in cases where content is flagged as violating specific rules, such as those pertaining to impersonation or explicit material, the removal window shrinks to as little as two hours from the time of the complaint.
This represents a substantial acceleration of previous requirements. The move is a direct response to growing domestic and global concerns about the misuse of artificial intelligence to create non-consensual intimate imagery, spread political misinformation, and conduct financial fraud through impersonation.
Expanding the Definition of Prohibited Content
The regulations formally integrate deepfakes into India’s existing Information Technology Rules. This legal classification means that any digitally altered media designed to impersonate an individual or spread falsehoods is now treated with the same urgency as other legally prohibited content categories.
Platforms that fail to comply with the expedited takedown orders risk losing the crucial safe-harbor protections afforded to them as intermediaries under Indian law, exposing them to direct legal liability for user-posted content and to potential penalties.
Global Context and Industry Response
India’s action places it among a growing number of governments seeking to legislate against the risks posed by advanced AI media synthesis tools. The European Union’s Digital Services Act and several state-level laws in the United States represent similar, though varied, regulatory approaches to the same challenge.
The policy shift will require major technology firms, including Meta, Google, and X, to further streamline their internal content moderation and grievance redressal mechanisms specifically for the Indian market. Industry groups have previously cautioned that overly aggressive takedown timelines could strain automated moderation systems and lead to over-removal of legitimate content.
Implementation and Future Steps
With the rules now active, the focus shifts to enforcement and practical implementation. The Indian government has indicated it will monitor platform compliance closely. Analysts expect the first test cases and potential enforcement actions to emerge in the coming months as the new system is put into practice.
Further official guidance on the technical standards for identifying and reporting deepfakes may follow. The development is also likely to influence ongoing global discussions at forums like the Global Partnership on Artificial Intelligence regarding international cooperation on AI governance and content standards.
Source: GeekWire