With the forest-fire-like spread of AI into our lives, and possibly our jobs, there is rising fear and scepticism about whether AI will actually improve our lives. The fear comes from two directions: the fear of missing out (FOMO), and the fear of new technology and loss of livelihood.

As more and more applications of AI are discovered, there is a growing discomfort, even among the proponents of AI, about how much control we unknowingly cede to AI and to those who own it. As always with a rapidly spreading new technology, there is a race between controlling it and utilising its potential.

The European Union was one of the first to act in this area, publishing its “White Paper on Artificial Intelligence: A European Approach to Excellence and Trust” in February 2020, long before all the melee around generative and general-purpose AI began. The same is true of the EU’s groundbreaking AI Act, the first draft of which was proposed in April 2021. The European Parliament adopted its negotiating position on the Act on June 14, 2023, and its provisions are expected to come into effect in phases from 2025.

Meanwhile, the US’ approach to regulating AI has been somewhat different from that of the EU. While there is no specific Act or law around AI regulation, the Biden administration has set new standards for AI safety with an ‘Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence’ on October 30, 2023. The order aims to exercise soft authority over AI and leverages the powers of federal agencies, particularly those around consumer protection. Earlier, on October 4, 2022, the Biden administration had unveiled the ‘Blueprint for an AI Bill of Rights’, which outlined five protections that Americans should have in the AI age: safe and effective systems; protection from algorithmic discrimination; data privacy; notice and explanation; and human alternatives, consideration and fallback.

The AI Act: A breakdown

Given the EU’s path-breaking legislation and enforcement around the digital economy, including the famous General Data Protection Regulation (GDPR), and the fact that much of the world uses EU legislation as a model, it is interesting to see how the EU’s AI Act regulates AI.

The AI Act focuses on the kinds of risk arising from AI applications and classifies them into four tiers: unacceptable, high, limited and minimal.

AI systems posing unacceptable risk are those considered a threat to people, and they will be banned. These include: cognitive behavioural manipulation of people or specific vulnerable groups; social scoring; biometric identification and categorisation of people; and real-time, remote biometric identification systems such as facial recognition. Certain exceptions are available for the maintenance of law and order, subject to specific controls.

AI systems that negatively affect safety or fundamental rights are considered high risk and fall into two categories. The first covers AI systems used in products governed by the EU’s product safety legislation, such as toys, aviation, cars, medical devices and lifts. The second covers AI systems in specific areas that will have to be registered in an EU database, including management and operation of critical infrastructure; education and vocational training; employment, worker management and access to self-employment; access to essential private and public services and benefits; law enforcement; migration, asylum and border control management; and assistance in legal interpretation and application of the law. All high-risk AI systems will be assessed before being put on the market.

Limited-risk AI systems will have to comply with minimal transparency requirements that allow users to make informed decisions. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio or video content, such as deepfakes.

The EU also lays down transparency requirements for general-purpose and generative AI: disclosing that content was generated by AI; designing the model to prevent it from generating illegal content; and publishing summaries of the copyrighted data used for training.

In addition, high-impact models would have to undergo thorough evaluations, and serious incidents would have to be reported to the European Commission.

What about India?

On home turf, India passed the Digital Personal Data Protection Act in 2023, which addresses data protection, privacy and consumer protection, including concerns arising from AI. The proposed Digital India Bill, 2023, can also be expected to carry more specific rules and regulations around AI and its applications. India is also participating in global AI policymaking as a member of the Global Partnership on Artificial Intelligence (GPAI).

So, how do we expect policies on AI to evolve? A recent EY report highlights six regulatory trends:

1. Core principles such as respect for human rights, sustainability, transparency and strong risk management.

2. Following a risk-based approach where regulations are tailored to the perceived risks of AI to values like privacy, non-discrimination, transparency and security.

3. Sector-agnostic regulation and sector-specific rules.

4. AI-related rulemaking within the context of other digital policy priorities such as cybersecurity, data privacy and intellectual property protection.

5. Private-sector collaboration, with the core objective of promoting safe and ethical AI, as well as considering the implications of higher-risk AI innovation where closer oversight may be appropriate.

6. International collaboration, driven by a shared concern about the risks to safety and security posed by powerful new AI systems.

The EU’s AI Act spans all six of these trends, making it the most comprehensive, if heavily critiqued, approach to regulation. Recent announcements, events and government releases indicate that India is headed down a similar path.
