As the chair of the Global Partnership on Artificial Intelligence (GPAI), India has an opportunity to contribute progressively to the international AI governance discourse.

However, for Indian ideas to resonate globally, India should shift away from traditional notions of command-and-control regulation premised on prescriptive compliance and liability. Since technologies like AI evolve at exponential rates, there is an inordinate risk of widespread non-compliance; enforcement becomes challenging, and regulations quickly become redundant. This creates widespread uncertainty and undue liability risks. Ultimately, prescriptive regulation can inhibit competition, since only those market participants with an adequate risk appetite will continue to innovate.

Instead, India should advocate for partnerships that pursue flexible safeguards, transparency, knowledge sharing, accountability, economic growth, and development.

India’s GPAI stewardship could echo contemporary international developments like the US President’s Executive Order (EO) on safe, secure and trustworthy AI, the G7 Hiroshima AI Process, and the voluntary commitments made by tech majors in earlier interactions with governments.

AI stewardship

Here are five ideas for India’s AI stewardship.

First, governments must raise their capacity to engage with AI’s wide applicability across domains like healthcare, climate change, financial services, education, agriculture, housing, and urban development. Such broad applicability requires knowledge exchange. The Ministry of Electronics and Information Technology (MeitY), under its IndiaAI initiative, should facilitate a whole-of-government approach in which different sectoral authorities collaborate with stakeholders to develop a publicly accessible repository of AI deployments and use cases. Sectoral authorities can then initiate dialogues on sector-specific codes of practice for responsible AI development.

Second, robust standards development will assist with quality assurance. Technical institutions like the Bureau of Indian Standards (BIS) and the Standardisation Testing and Quality Certification (STQC) Directorate must be equipped with adequate resources to pursue such objectives. India should also pursue government-to-government MoUs through which these institutions can collaborate with international counterparts like the US Department of Commerce’s National Institute of Standards and Technology.

Third, India should launch an international project to explore solutions that mitigate the negative impact of deepfake technologies. India’s current criminal and intermediary liability frameworks offer only post facto remedies, yet the damage from malicious deepfakes begins the moment they are created and distributed. The US EO earmarks digital watermarking as a possible solution. India should open a dialogue with international initiatives like the Coalition for Content Provenance and Authenticity (C2PA) to understand the capabilities, limitations and scalability of such systems.
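To illustrate why provenance-based approaches act earlier in the content lifecycle than post facto remedies, consider the following minimal sketch of hash-and-sign content authentication in Python. It is a simplified illustration of the general technique only, not the C2PA specification or its API; the key handling, placeholder content, and workflow are assumptions made for demonstration.

# Minimal sketch of hash-and-sign content provenance (illustrative only;
# not the C2PA specification or API). Requires the 'cryptography' package.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At creation: a capture device or editing tool signs a digest of the media;
# the signature travels with the file as a provenance manifest.
signing_key = Ed25519PrivateKey.generate()   # assumed device/tool key pair
verify_key = signing_key.public_key()

media = b"...raw image bytes..."             # placeholder for actual content
manifest_signature = signing_key.sign(hashlib.sha256(media).digest())

# At distribution: a platform can check the manifest *before* amplifying the
# file, instead of relying on takedowns after the harm has occurred.
received = media                             # swap in altered bytes to see failure
try:
    verify_key.verify(manifest_signature, hashlib.sha256(received).digest())
    print("Provenance intact: content matches its signed manifest.")
except InvalidSignature:
    print("Provenance check failed: content altered or manifest invalid.")

The design point the sketch captures is that verification can happen at the point of distribution rather than after harm is done; the open questions India should probe with initiatives like C2PA are key management, stripping of manifests, and scalability across platforms.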

Fourth, the US and the UK have announced plans to establish national AI Safety Institutes to oversee ‘frontier’ AI models. The aim is to manage the unintended consequences of powerful AI models, which can be misused for cyber-enabled attacks against critical information infrastructure. India should set up a similar AI safety institute that works closely with industry and with cybersecurity institutions like CERT-In and the NCIIPC.

Finally, AI’s risks are well documented across criminal justice and policing, housing, financial services, and healthcare. These risks intersect with issues of accuracy, bias, discrimination, exclusion, and citizen privacy. As governments explore how AI can improve public service delivery and other government functions, public trust will be imperative for long-run sustainability. India should enact legislation that safeguards citizens’ rights against the risks of government AI deployments.

The writer is Public Policy Manager at The Quantum Hub
