All faces are not considered equal by Artificial Intelligence (AI) systems! A typical commercial AI face recognition system is most accurate on fair-skinned men. For dark-skinned women, accuracy can be lower by over 30 percentage points, says Joy Buolamwini, Founder, Algorithmic Justice League.
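
Such gaps are measured by computing the model's accuracy separately for each demographic group and comparing the best- and worst-served groups. A minimal sketch of that kind of audit, using entirely hypothetical predictions rather than Buolamwini's actual benchmark data:

```python
# Illustrative sketch: measuring a per-group accuracy gap.
# The data here is synthetic; real audits such as Gender Shades use
# curated benchmark images labelled by gender and skin type.
from collections import defaultdict

# (true_label, predicted_label, demographic_group) triples -- hypothetical
predictions = [
    ("male", "male", "lighter-skinned male"),
    ("female", "male", "darker-skinned female"),
    ("female", "female", "darker-skinned female"),
    ("male", "male", "lighter-skinned male"),
    # ... many more examples in a real audit
]

correct = defaultdict(int)
total = defaultdict(int)
for true_label, predicted_label, group in predictions:
    total[group] += 1
    correct[group] += int(true_label == predicted_label)

accuracy = {g: correct[g] / total[g] for g in total}
for group, acc in accuracy.items():
    print(f"{group}: {acc:.1%} accuracy")

# The "bias gap" reported in such audits is simply the spread
# between the best- and worst-served groups.
print(f"Gap: {max(accuracy.values()) - min(accuracy.values()):.1%}")
```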

AI computer vision systems will correctly classify and label the image of a bride in a typical Western gown. However, when asked to classify the image of an Indian bride wearing a red sari, they have labelled it as an event, a costume or performing art. All brides are not equal for an AI!

In 2017, an AI-based automatic soap dispenser did not recognise dark-skinned hands and dispensed soap only to fair-skinned people. An AI CV-shortlisting engine built by tech giant Amazon gave lower priority to women for many job roles, reflecting the historical male dominance of such roles. The engine shortlisted candidates based on existing demographics, so underrepresented genders, ethnicities and sub-groups may continue to be denied even an interview call. A biased AI engine can steamroll years of affirmative action.

Bias in AI may have a broad societal impact, with decision-makers often unaware of the risks. You may be wondering why such situations occur.

Reasons for AI bias

There are multiple reasons for bias in AI. The primary reason is existing bias in the data. Google Brain researchers note that while India and China together constitute over a third of the global population, they account for a mere 3 per cent of the images in ImageNet, a widely used dataset. The US, with only 4 per cent of the world's population, accounts for 45 per cent of the images. AI systems learn correlations from the data they are trained on and produce results accordingly. As a result, regional views such as India's are often underrepresented, overshadowed by the dominant Western viewpoint. A search for “beautiful”, “handsome” or “cute baby” in any search engine will show a portfolio of fair-skinned humans rather than embracing our global diversity.
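
A simple audit of where a dataset's images come from makes this kind of skew visible. The counts below are hypothetical placeholders, not actual ImageNet metadata:

```python
# Illustrative sketch: auditing geographic representation in a dataset.
# Counts are hypothetical; a real audit would read them from dataset metadata.
image_counts_by_country = {
    "United States": 540_000,
    "United Kingdom": 120_000,
    "India": 25_000,
    "China": 12_000,
    "Other": 503_000,
}

population_share = {           # approximate share of world population
    "United States": 0.04,
    "United Kingdom": 0.01,
    "India": 0.18,
    "China": 0.18,
    "Other": 0.59,
}

total_images = sum(image_counts_by_country.values())
for country, count in image_counts_by_country.items():
    dataset_share = count / total_images
    print(f"{country}: {dataset_share:.1%} of images "
          f"vs {population_share[country]:.0%} of world population")
```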

AI also suffers from bias due to over-generalisation of existing data, automation bias, in which human judgment is set aside in favour of AI model outputs, and AI's limited ability to generalise to data it has not been exposed to in training. Most AI experts agree that reducing bias in AI is a complex problem that will take significant time and effort to address.

Often, companies do not pay enough attention to these biases or make enough effort to correct them. In 2015, Google Photos classified Black men as gorillas. Rather than fix the error at its root, reports suggest the company applied a quick fix by removing gorillas as a label. As a result, six years later, in 2021, an AI recommendation engine at Facebook ran into a similar problem: the automated engine asked users who had watched a video featuring Black men whether they would like to "keep seeing videos about Primates."

Remedies to reduce bias in AI

Bias in AI may push marginalised and underrepresented demographics into a vicious loop unless proactive steps are taken to remedy it. We recommend that AI focus on “augmented intelligence” rather than “artificial intelligence”, with a layer of human oversight that should help mitigate obvious bias in AI.
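
One common way to build in that layer of oversight is to let the model decide automatically only when it is confident, and route everything else to a human reviewer. A minimal sketch, assuming a hypothetical model object with a scikit-learn-style predict_proba interface and an illustrative confidence threshold:

```python
# Illustrative human-in-the-loop sketch: the model decides automatically only
# when it is confident; everything else is escalated to a human reviewer.
CONFIDENCE_THRESHOLD = 0.90  # hypothetical cut-off, tuned per application

def augmented_decision(model, candidate_features, human_review_queue):
    """Return the model's decision only when its confidence is high,
    otherwise defer to a human reviewer."""
    probabilities = model.predict_proba([candidate_features])[0]
    confidence = max(probabilities)
    predicted_class = int(probabilities.argmax())

    if confidence >= CONFIDENCE_THRESHOLD:
        return {"decision": predicted_class, "decided_by": "model"}

    # Low confidence: queue for human judgment rather than auto-rejecting.
    human_review_queue.append(candidate_features)
    return {"decision": None, "decided_by": "human (pending review)"}
```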

Developing more India-centric datasets and an Indic AI ecosystem that incorporates India's local views is imperative. This will help create datasets that are more balanced in terms of nationality, colour, gender and global diversity.
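
Until such datasets exist, a partial stop-gap is to reweight or oversample the groups that are underrepresented in training data. A sketch using scikit-learn's class weighting on an entirely synthetic dataset:

```python
# Illustrative sketch: compensating for an imbalanced training set by
# weighting the underrepresented class more heavily. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # hypothetical features
y = (rng.random(1000) < 0.05).astype(int)      # minority class is ~5% of samples

weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y), y=y)
print(dict(zip(np.unique(y), weights)))        # minority class gets a larger weight

# 'balanced' reweights each class inversely to its frequency, so errors on
# the underrepresented group are not drowned out during training.
model = LogisticRegression(class_weight="balanced").fit(X, y)
```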

We also need the right policies to incentivise ethical AI systems. The Government of India is making efforts in this direction: NITI Aayog and State governments such as Tamil Nadu's have issued guidance on the ethical deployment of AI.

Also, earlier AI models were black boxes focused on accuracy alone, with limited ability to review the underlying decision parameters. Over the past few years, the evolution of Explainable AI has offered a means to identify the underlying logic of a model's decisions. We hope a focus on the underlying assumptions in AI models, rather than pure accuracy, will also help mitigate AI bias.
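
As one concrete example of such tooling (a sketch on a synthetic hiring dataset, not any specific production system), permutation importance from scikit-learn reveals which input features a trained model actually relies on, including sensitive attributes it arguably should not:

```python
# Illustrative explainability sketch: checking how much a model's predictions
# depend on each input feature, including a sensitive attribute. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
n = 2000
experience = rng.normal(5, 2, n)               # hypothetical feature
test_score = rng.normal(70, 10, n)             # hypothetical feature
gender = rng.integers(0, 2, n)                 # sensitive attribute

# Biased historical labels: the outcome is partly driven by the sensitive attribute.
hired = ((0.4 * test_score + 3 * experience + 15 * gender
          + rng.normal(0, 5, n)) > 55).astype(int)

X = np.column_stack([experience, test_score, gender])
model = RandomForestClassifier(random_state=0).fit(X, hired)

result = permutation_importance(model, X, hired, n_repeats=10, random_state=0)
for name, importance in zip(["experience", "test_score", "gender"],
                            result.importances_mean):
    print(f"{name}: {importance:.3f}")   # a large value for 'gender' is a red flag
```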

Efforts to mitigate bias in AI are the need of the hour. We hope the concerted efforts of the government, corporates, the start-up community and academia help make AI more neutral.

Kamal Das is Dean, Wadhwani Institute of Technology and Policy at Wadhwani Foundation.
