The Indian government has issued an advisory requiring tech companies to obtain government approval before releasing new artificial intelligence (AI) tools to the public. The directive, issued by the Indian IT ministry on March 1, applies in particular to AI tools deemed “unreliable” or still in experimental phases: such tools must be clearly labeled, warning users that they may return inaccurate responses, and may be deployed on the Indian internet only with the explicit consent of the Government of India.

The advisory also urges platforms to ensure that their tools do not threaten the integrity of the electoral process, with general elections slated for this summer. It follows recent criticism of Google and its AI tool Gemini by a prominent Indian minister, after the tool was accused of giving biased or inaccurate responses, including one instance in which it allegedly characterized Indian Prime Minister Narendra Modi as a “fascist.” In response, Google acknowledged Gemini’s limitations, particularly on contemporary social topics.
Rajeev Chandrasekhar, India’s deputy IT minister, stressed that platforms have a legal obligation to ensure safety and trust, and that reliability issues do not exempt them from compliance with the law. While India has moved quickly to introduce rules against AI-generated deepfakes ahead of the elections, the latest advisory has drawn pushback from the tech community, with some arguing that excessive regulation could stifle India’s leadership in the sector. Chandrasekhar responded that platforms which facilitate or generate unlawful content must face legal consequences.
He reiterated India’s commitment to AI, describing the advisory as guidance for those deploying experimental AI platforms online and a means of ensuring compliance with Indian law while fostering a safe and trusted digital environment. Despite the regulatory debate, India continues to invest in AI: Microsoft’s partnership with Indian AI startup Sarvam brings an Indic-voice large language model (LLM) to its Azure AI infrastructure, aiming to expand AI accessibility across the Indian subcontinent.
India’s latest advisory highlights the government’s proactive stance on regulating AI, seeking to balance innovation with accountability and to ensure a secure digital landscape for its citizens. By fostering a conducive environment for AI development, India aims to harness the technology’s potential for societal benefit while addressing its risks.
