Jinoy Jose P

In his seminal book The Selfish Gene, evolutionary biologist Richard Dawkins discusses, among many other things, why humans have to be taught altruism. If there is a human moral to be drawn, Dawkins writes, it is that we must teach our children altruism, for we cannot expect it to be part of their biological nature. Why? Because evolutionary biology tells us that we are programmed genetically to survive by ‘cheating’ or, in other words, by finding the smartest way to replicate. Hence it is important to teach ourselves to be kind to one another, to have empathy, follow rules and not discriminate against the poor and vulnerable. Only humans have this ability to go against the tide.

Artificial intelligence faces a similar scenario today. Does machine intelligence, which bears all the biases and prejudices of the humans who code and create it, have to be programmed to be altruistic? Amnesty International, the global human rights watchdog and one of many agencies worried about the ethical glitches in artificial intelligence, thinks machine learning systems have to be trained to be rational and humane, and should respect human rights. The NGO’s recent Toronto Declaration poses some interesting and important questions on this.

Inherent biases

Amnesty International says there is a “substantive and growing body” of evidence showing that AI systems, which can be “opaque and include unexplainable processes”, can easily contribute to discrimination if left unchecked.

To pick an early example, in May 2016, the investigative journalism outlet ProPublica found that software the US administration was using across the country to predict future criminals was biased against blacks. And as Virginia Eubanks has discussed in her recent book, Automating Inequality, new-age technologies such as Big Data and AI are used to tamper with justice.

To give a fresh example, just this week the BBC reported the story of Ibrahim Diallo, a man who was fired by a machine. The error of judgment behind it might be justified in the world of automation and AI, but it was an easily resolved problem in a human universe: his entry pass had failed to work and he relied on the security guard to let him into the office, something the artificial intelligence failed to figure out.

Such incidents are the reason why agencies like Amnesty say machines have to be taught to respect rights such as privacy, freedom of expression, participation in cultural life, equality before the law, and meaningful access to remedy.

This is important for many reasons. For one, big tech companies, despite the occasional rhetoric about ethical AI, are in no mood to regulate their AI experiments. The argument is that such controls will set back innovation in the sector. But that doesn’t hold up to logic, considering the direct damage AI can cause. Consider the development of AI weapons (killer robots, in other words). Many companies, including Google, are undertaking or participating in AI experiments in military domains. Just recently, Google drew flak for its involvement in the so-called Project Maven, which reports suggest is a drone programme commissioned by the US military.

The programme is shrouded in secrecy, but reports say it uses image-recognition algorithms that help drones scan people in real time and establish their identity. Human rights organisations have time and again criticised US drone programmes for their rights violations and, under pressure, Google said it would not renew its contract for Project Maven when it expires next year. Google also released a set of principles for its AI plans and hinted it would stay away from AI in weaponry.

India, which is keenly watching this space and has recently produced a set of official reports (from NITI Aayog and the AI Task Force) on the potential of AI in the country, has a lot to learn from these AI mishaps abroad, and should apply the lessons here to ensure India’s AI sector respects human rights, ethics, privacy and freedom of speech.

India is looking to harness AI and Big Data in areas such as smart cities, healthcare, education, governance, labour, banking and many allied and emerging fields. Considering the size and breadth of these sectors, it is advisable that we formulate meaningful guidelines to ensure AI doesn’t inherit the biases of its creators, to acknowledge its fallibility, and to create alternative plans in case of failure. AI, to be sure, should be just one of the solutions.

Of course, it is too early to worry about the kind of AI catastrophe that the likes of Stephen Hawking and Elon Musk have been paranoid about, but the sooner we make AI egalitarian and humane, the better. Being the most important element in the current wave of technologies, AI should be allowed to experiment and expand its horizons. But it is equally important to make it learn altruism and not go the social Darwinist way, which would be the wrong kind of evolution for machines. The way humans have evolved does indeed hold some lessons for the machines.
