Artificial Intelligence has found wide applicability in India — from developing better products and services to improving public policy and governance. Stanford University’s AI Index Report pegged the rate of AI skill penetration in India at 3.09 times the global average from 2015 to 2021.

This is mirrored in the increasing adoption of AI solutions by the Central government and various State governments, such as Telangana's use of AI in public schools and in identifying farmers ineligible for PM Kisan aid. However, AI deployment is not free from human rights concerns. As AI interacts with consumers and workers on a daily basis, it can expose them to risks, and marginalised and vulnerable communities are often the most affected.

In a recent study, Bengaluru-based tech think tank Aapti Institute, in collaboration with the Business and Human Rights (Asia) programme at UNDP India, examined the impact of AI deployment on the human rights of consumers in finance and healthcare, and of workers in gig work and retail. This work builds on existing research, which has found that a human rights-respecting approach by businesses can enhance individual and community well-being and drive sustainable economic growth.

Unpacking human rights risks

In the healthcare sector, prominent risks include inaccurate diagnoses stemming from biased datasets. Doctors in India typically diagnose heart attacks based on symptoms most often experienced by men. An AI system trained on such data inherits that bias and is likely to under-diagnose heart attacks in Indian women. Further, AI predictions on health conditions can contradict the clinician's diagnosis, raising concerns about clinicians' ability to provide satisfactory care.

Digital lending apps, an emerging source of credit, rely on AI-based credit scoring. This method scores borrowers using non-financial data collected from social media profiles and online purchase history. While digital lending apps open up access to credit for those excluded from traditional modes, they have also exposed these populations to risks such as inaccurate credit scoring, discrimination, harassment, and financial exclusion.
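To make the mechanism concrete, the following is a minimal, purely hypothetical sketch of how alternative-data credit scoring might work. The signal names and weights are invented for illustration and do not describe any specific lender's model.

```python
# Hypothetical sketch: scoring a borrower from non-financial behavioural
# signals. All feature names and weights are assumptions for illustration.

def alt_data_credit_score(profile: dict) -> float:
    """Return a score in [0, 1] from non-financial signals."""
    weights = {
        "monthly_online_spend": 0.4,   # treated as a proxy for repayment capacity
        "social_connections": 0.3,     # treated as a proxy for "stability"
        "night_time_app_usage": -0.3,  # penalised as "risky" behaviour
    }
    raw = sum(weights[k] * profile.get(k, 0.0) for k in weights)
    return max(0.0, min(1.0, 0.5 + raw))

# A borrower with a thin digital footprint scores low even if creditworthy,
# which is one way inaccurate scoring and financial exclusion can creep in.
print(alt_data_credit_score({"monthly_online_spend": 0.1, "social_connections": 0.05}))
```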

In retail, AI-based automation affects workers on two fronts: first, the replacement of workers by AI systems; second, the use of workforce management software for tasks like attendance tracking and employee scheduling. Such software operates within fixed parameters and does not recognise contextual issues, such as a delay caused by a traffic jam or an internet outage that prevents attendance from being recorded.

Without adequate human intervention in the operation of the software, workers are bound to rigid systems that leave no room for individual circumstances and undermine worker agency.

AI intermediation in gig work can also lead to poor working conditions. Allocation of tasks is tied to workers' in-app ratings, which are determined by AI based on factors like customer ratings, job rejection rate, and timely task completion. Workers therefore have to endure problematic customer behaviour, since complaining is likely to draw a bad rating from the customer. Many delivery riders also break traffic laws to complete tasks on time, a trend noted by police forces across cities.
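As an illustration of how such a rating might be computed, the sketch below blends the factors named above into a single score. The weights and formula are assumptions for illustration only, not any platform's actual algorithm.

```python
# Hypothetical in-app worker rating combining customer feedback, job
# rejections and on-time completion. Weights are assumed, not real.

def worker_rating(customer_rating: float, rejection_rate: float,
                  on_time_rate: float) -> float:
    """Blend a customer rating (1-5), a job rejection rate (0-1) and an
    on-time completion rate (0-1) into a single 1-5 rating."""
    score = (0.6 * customer_rating          # customer feedback dominates
             + 0.2 * 5 * on_time_rate       # reward timely completion
             - 0.2 * 5 * rejection_rate)    # penalise rejecting jobs
    return max(1.0, min(5.0, score))

# One bad customer rating or a few rejected jobs pulls the score down,
# which is why workers tend to absorb problematic behaviour rather than complain.
print(worker_rating(customer_rating=4.8, rejection_rate=0.3, on_time_rate=0.9))
```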

Poor working conditions have triggered numerous protests by gig workers. The absence of adequate grievance redress and social security protections can exacerbate the impact.

In addition to these sector-specific risks, there is an overarching risk to privacy that cuts across sectors: sensitive data is collected from consumers and workers without adequate limits on whom it can be shared with or for what purposes.

Company policy, regulations

While technology is often held responsible for these risks, it does not act in isolation; the risks it presents are rooted in business policies and overarching regulatory frameworks. In many instances, the AI is designed to function as per the requirements of the business.

The risks that arise therefore stem more from company policy than from the technology itself. For instance, in the gig sector, incentives to workers are based on ratings allocated by the AI algorithm, which is itself developed in accordance with subjective company policies.

Still, businesses can benefit from respecting human rights through reduced headline risks, better products and services, and workforce well-being. This is supported by research: a 2019 London School of Economics and Political Science study found a clear connection between employee well-being and increased returns for companies.

Regulatory frameworks also play a critical role. The lack of privacy and data protection regulation in India amplifies risks across sectors, leaving consumers and workers with little or no remedy for misuse of their data. Legal and regulatory frameworks can steer AI deployment in a more human rights-oriented direction, as the European Union has noted in its proposed regulatory framework on AI.

Government and businesses

It is evident that in mitigating human rights risks from AI deployment, the State and businesses play a critical role, albeit with differing responsibilities.

Governments can incentivise businesses through policy, ensure compliance, and establish capacity-building measures. For instance, the government can build capacity by ensuring the availability of resources such as unbiased and representative datasets. It can also support the creation of evidence on the positive economic gains from human rights compliance.

Businesses can strengthen human rights protection in their corporate governance by making AI systems and their decisions more explainable and by ensuring adequate human intervention and oversight. Initiatives like Uber's Driver Advisory Council, which enable participatory models of governance, can also help mitigate human rights risks.

As the use of AI grows, its impact on society cannot be neglected. Our research argues that increased profits, expanded market bases, and reduced headline risk are reasons for businesses to start talking about human rights. A collaborative effort by businesses, AI developers, civil society organisations and the State would go a long way in realising the true economic and social potential of AI.

Vinay is a Senior Research Associate at Aapti Institute, and Nusrat is the Business and Human Rights national specialist at UNDP India
