India’s Ministry of Electronics and Information Technology (MEITY) on Wednesday proposed some of the world’s toughest regulations for content generated by artificial intelligence (AI), including “visible labelling, metadata traceability, and transparency for all public-facing AI-generated media.”
Among other measures, the rules would require AI-generated content to be clearly labeled as such, with markers that cover at least ten percent of the display area for visual media, or ten percent of the duration of an audio clip.
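The draft rules do not spell out how the ten-percent threshold is to be measured, but a minimal sketch of what compliance arithmetic might look like, assuming a full-width banner for images and an audible disclosure for audio (both assumptions, not anything the rules prescribe), is:

```python
# Illustrative sketch only: the draft rules require an AI label covering at
# least ten percent of the display area (visual media) or ten percent of the
# duration (audio), but do not specify the measurement method. We assume a
# full-width banner, so the banner's area fraction equals its height fraction.

def banner_height(width_px: int, height_px: int, coverage_pct: int = 10) -> int:
    """Minimum height in pixels of a full-width label banner."""
    # Ceiling division in integer arithmetic to avoid float rounding.
    return -(-height_px * coverage_pct // 100)

def audio_marker_seconds(duration_s: float, coverage: float = 0.10) -> float:
    """Minimum duration of an audible AI disclosure for an audio clip."""
    return duration_s * coverage

print(banner_height(1920, 1080))    # -> 108 (pixels, on a 1080p frame)
print(audio_marker_seconds(30.0))   # roughly 3 seconds of a 30-second clip
```

In practice a regulator or platform would also have to decide questions this sketch ignores, such as whether the label must be legible at typical viewing sizes and where in the frame it may appear.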
Social media platforms would also be required to obtain clear declarations from users when they upload AI-generated content and make a serious effort to verify those declarations. Platforms would also gain broader latitude to label or remove content they believe was created using AI but was not uploaded with the proper declarations.
MEITY’s proposed rules have one of the broadest definitions of AI content to date, including all information that is “artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears reasonably authentic or true.”
“Recent incidents of deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create convincing falsehoods — depicting individuals in acts or statements they never made,” the ministry said in a statement accompanying the proposed rules.
“Such content can be weaponized to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud,” the statement said.
Information technology minister Ashwini Vaishnaw told reporters on Wednesday that the goal of the proposed amendments to India’s 2021 Information Technology Rules is to “raise the level of accountability for both Internet companies and individual users.”
Minister of Railways, Information and Broadcasting, and Minister of Electronics and Information Technology Ashwini Vaishnaw briefs the media on cabinet decisions, at the National Media Centre on October 7, 2025 in New Delhi, India. (Sonu Mehta/Hindustan Times via Getty Images)
Vaishnaw said his ministry has “already consulted with top AI companies, who have indicated that using metadata to identify AI-altered content is possible.” He said that while AI companies and their users will have the first responsibility to label AI-generated content before they upload it, social media companies will have the final duty of ensuring that such content is properly labeled and meets community guidelines.
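The ministry has not said which metadata scheme it has in mind. Existing provenance standards such as C2PA attach cryptographically signed manifests to media files; the toy sketch below (hypothetical field names throughout, and a bare hash in place of a real signature) shows only the basic shape of tying a content hash to a declaration of AI origin:

```python
# Minimal sketch of metadata-based provenance. Real systems (e.g. C2PA, which
# the ministry did not name) embed cryptographically signed manifests; this toy
# version only pairs a SHA-256 content hash with a declaration record.
import hashlib
import json

def make_provenance_record(media_bytes: bytes, generator: str) -> str:
    """Generator side: produce a JSON record declaring AI origin."""
    record = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "ai_generated": True,
        "generator": generator,  # hypothetical field names, not a real schema
    }
    return json.dumps(record)

def matches(media_bytes: bytes, record_json: str) -> bool:
    """Platform side: does this file match its declared record?"""
    record = json.loads(record_json)
    return hashlib.sha256(media_bytes).hexdigest() == record["content_sha256"]

rec = make_provenance_record(b"fake-image-bytes", "example-model")
print(matches(b"fake-image-bytes", rec))  # True: file matches declaration
print(matches(b"edited-bytes", rec))      # False: file was altered after tagging
```

The obvious limitation, which is one reason real provenance schemes embed signed manifests inside the file rather than relying on detached hashes, is that any re-encoding or crop breaks the hash match even when the content is unchanged to the eye.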
As MEITY’s statement indicated, concerns about increasingly realistic and easily produced AI deepfakes have gone beyond social media tomfoolery.
In September, research and advisory firm Gartner published a survey that found 62 percent of responding organizations had experienced a cyberattack using AI deepfake technology. Deepfakes are a powerful tool for hackers using “social engineering” tactics to trick users into giving up passwords and other secure information.
Most high-profile cyberattacks in recent years have involved social engineering techniques such as phishing, which involves sending victims realistic-looking emails that appear to come from trusted sources, tricking them into giving away their security information or installing malware on their systems. AI can both improve the quality of phishing emails and make it easier for hackers to produce them in huge quantities, increasing their chances of hitting a victim who will respond.
Gartner’s survey also found that 32 percent of respondents had experienced an attack on their artificial intelligence systems, frequently involving hackers using cleverly worded prompts to trick chatbots into doing things their owners would disapprove of.
“As adoption accelerates, attacks leveraging GenAI for phishing, deepfakes and social engineering have become mainstream, while other threats — such as attacks on GenAI application infrastructure and prompt-based manipulations — are emerging and gaining traction,” warned Gartner analyst Akif Khan.
Figures in India’s massive entertainment industry have filed several complaints against third parties for using AI to duplicate and distort their intellectual property. Two of India’s most popular actors, a married couple named Abhishek Bachchan and Aishwarya Rai, filed a $450,000 lawsuit in October against Google and YouTube for permitting AI-generated content that depicted them in “fictitious” and “sexually explicit” contexts.
An interesting component of the actors’ lawsuit is their claim that by allowing deepfake content to be posted on their platforms, social media companies are helping to train the next generation of AI to become even better at creating deepfakes. As this misinformation spreads through thousands of user accounts in countless new configurations, the “untrue” image of celebrities like Bachchan and Rai could eclipse their legitimate work.