Artificial Intelligence (AI) is constantly evolving and has strong potential to deliver societal benefits, economic growth and global competitiveness. However, AI also carries risks, such as privacy violations, data bias, data security threats, discrimination, lack of explainability and accountability, and unethical use.

To address these risks, nations are formulating policy, following a horizontal approach, a vertical approach, or a mix of both. In the horizontal approach, regulators create one comprehensive regulation covering the many impacts AI can have. In the vertical approach, policymakers take a bespoke route, creating separate regulations to target specific applications or types of AI.

Recently, the European Union (EU) approved the AI Act, which tilts towards the horizontal approach. Risk is the cornerstone of the AI Act: applications of AI are sorted into four risk categories — unacceptable risk, high risk, limited risk, and minimal or no risk. Applications posing unacceptable risk are banned outright. Developers of high-risk AI will have to comply with rigorous risk assessments and make their data available to authorities for scrutiny.

Interestingly, shortly before the approval of the AI Act, generative AI products were launched and became hugely popular among users. EU lawmakers therefore introduced another category — General-Purpose AI Systems — to cover AI, such as ChatGPT, that has more than one application with varying degrees of risk.

Indian initiatives

Realising the immense potential that AI holds, NITI Aayog issued the National Strategy for Artificial Intelligence in 2018, which had a chapter dedicated to responsible AI. In 2021, NITI Aayog issued a paper, ‘Principles for Responsible AI’, enumerating seven broad principles — safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and protection and reinforcement of positive human values.

In the absence of an overarching regulatory framework for the use of AI systems in India, certain sector-specific frameworks have been issued: in June 2023, the Indian Council of Medical Research issued ethical guidelines for AI in biomedical research and healthcare; and in January 2019, SEBI issued a circular on creating an inventory of AI systems in the capital market to guide future policies. The National Education Policy 2020 recommends including AI awareness in school courses.

Initially, there was hesitation to regulate AI. In April, the Minister for Railways, IT and Telecom told Parliament that the government was not considering any law to regulate the growth of AI in India, while recognising the risks associated with it. Subsequently, in July, TRAI issued a comprehensive consultation paper recommending, among other things, the setting up of a domestic statutory authority to regulate AI through the lens of a “risk-based framework” and the constitution of an advisory body with members from multiple government departments, academia and industry experts.

During the B20 meeting in August, Prime Minister Narendra Modi emphasised the need for a global framework on the expansion of “ethical” AI. This implies the establishment of a regulatory body to oversee the responsible use of AI, akin to international bodies for nuclear non-proliferation. At the recently concluded G20 meeting, too, the Prime Minister suggested international collaboration to develop a framework for responsible, human-centric AI.

As India progresses on the path of AI-driven growth, it needs to strike the right balance between regulation and cutting-edge innovation. India should have AI guardrails that empower different stakeholders to collaborate and arrive at principles which promote innovation while addressing ethical considerations, privacy concerns and biases.

The writer is Senior Consultant, Shardul Amarchand Mangaldas & Co