Tech giant Google on Saturday said it has worked ‘quickly’ to address an issue with its Gemini AI tool, which drew the ire of the Indian government over an allegedly “biased” response to a question about Prime Minister Narendra Modi.

“We’ve worked quickly to address this issue. Gemini is built as a creativity and productivity tool and may not always be reliable, especially when it comes to responding to some prompts about current events, political topics, or evolving news. This is something that we’re constantly working on improving,” a Google spokesperson said.

The company said Gemini is built in line with its AI Principles, and has safeguards to anticipate and test for a wide range of safety risks. Google also prioritises identifying and preventing harmful or policy-violating responses from showing in Gemini, it said.

On Friday, a post on social media platform X triggered a debate on the programming of chatbots. The Centre also indicated it would take action against the company.

When asked whether Prime Minister Modi was a fascist, the AI tool said he was “accused of implementing policies some experts have characterised as fascist.” The AI tool also added that “these accusations are based on a number of factors, including the BJP’s Hindu nationalist ideology, its crackdown on dissent, and its use of violence against religious minorities.”

By contrast, when a similar question was asked about former US President Donald Trump and Ukrainian President Volodymyr Zelensky, the tool gave no clear answer.

Reacting to a post from a journalist’s verified account alleging bias in Google Gemini, Rajeev Chandrasekhar, Minister of State for Electronics and IT, took cognizance of the issue.

“These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code,” he said on social media platform X tagging Google AI, Google India and the Ministry of Electronics and IT (MeitY). The journalist had shared a screenshot of the question and answer.

On Saturday, Chandrasekhar again made it clear to Google that explanations about the unreliability of AI models do not absolve or exempt platforms from the law, and warned that India’s digital ‘nagriks’ “are not to be experimented on” with unreliable platforms and algorithms.

“Government has said this before - I repeat for attention of @GoogleIndia...Our DigitalNagriks are NOT to be experimented on with “unreliable” platforms/algos/model...`Sorry Unreliable’ does not exempt from law,” Chandrasekhar posted on X.

A senior official had also told businessline that MeitY was in the process of issuing a notice to Google. However, no such notice had been issued as of now.

On Thursday, Google had temporarily stopped its Gemini AI chatbot from generating images of people, a day after apologising for “inaccuracies” in the historical depictions it was creating.

According to Google, the company takes information quality seriously across its products, and has developed protections against low-quality information along with tools to help people learn more about the information they see online.

“In the event of a low-quality/ outdated response, we quickly implement improvements. We also offer people easy ways to verify information with our double-check feature, which evaluates whether there’s content on the web to substantiate Gemini’s responses,” it added.