The recently concluded Global Partnership on Artificial Intelligence (GPAI) summit in Delhi marked a significant milestone. The meeting — attended by representatives from diverse nations, industry leaders, and experts — recognised the importance of establishing ethical guidelines and standards for AI technologies. It also rightly stressed the need for international collaboration in navigating the evolving landscape of artificial intelligence.

It is clear that there is a global race for AI dominance. Nations across the world are investing heavily in AI research, development, and deployment. While healthy competition can drive innovation, an unregulated race can lead to a fragmented landscape where standards, ethics, and accountability are overlooked. In this race, countries may prioritise advancements without considering the potential risks and societal impact. To prevent a dystopian future and ensure a collaborative approach, it is essential to establish international agreements and standards for AI development and use. The summit participants agreed that AI systems should be developed in a manner that upholds human rights, fairness, and accountability.

Recent global initiatives have taken significant steps towards establishing a regulatory framework that balances innovation with ethical considerations. Notable among these are the Bletchley Declaration, the US White House Executive Order, and the legislative efforts of the European Parliament and Council. Private players, too, are working on frameworks to limit the negative impact of AI. For example, tech giants including Amazon, Microsoft, Meta, Google and OpenAI have signed a voluntary agreement to emphasise safety, security and trust when developing AI technologies. In India, industry body Nasscom has released a framework listing the obligations of all stakeholders in the development of AI.

But the biggest challenge is that no one knows for certain what's next for AI. The technology is evolving so fast that stakeholders are struggling to put safeguards in place. The success of these regulatory efforts therefore hinges on their implementation. International cooperation that prevents regulatory fragmentation and ensures a level playing field for AI developers and users worldwide is crucial.

The global community has also acknowledged the need to bridge the digital divide. Recognising AI's potential to exacerbate existing inequalities, summit participants committed to promoting diversity in AI research and development, and to addressing the socio-economic impacts of automation. Implementing the principles and agreements reached at the summit will require sustained effort and coordination among participating nations. The global community must remain vigilant in monitoring developments in AI and adapt regulatory frameworks to address emerging challenges. The GPAI summit has laid the groundwork for such a collaborative approach.