Google now does what confidantes once did, lending advice on personal and professional matters. Seeking out friends to navigate life's troubles seems passé when algorithm-powered machine intelligence offers a staggering number of options. Think of a question, and Google has the answer. Smart gadgets have eased life by taking over the mundane task of picking the best on offer, be it a piece of clothing or a restaurant, and have thus led a vast majority to believe that algorithms will soon outdo human intelligence and run every aspect of our lives.

The jury is still out on whether machine intelligence will succeed in mimicking the human brain. While there is no denying that algorithms have eased life by managing the information explosion, the fact that they do so at the cost of reducing rich diversity to a world of niches has often gone unnoticed. What they do with vast arrays of large datasets, and how they manipulate them to generate preferences, smacks of dystopian possibilities. Issues of data security and citizens' privacy are proving contentious, and not without reason. Machine-driven artificial intelligence has its share of both intended and unintended consequences, warns Kartik Hosanagar. Drawing upon his experience of designing algorithms, he brings to the table the potential risks of being blind to the ramifications of algorithmic decision-making.

Influence and control

By design, an algorithm is a step-by-step method of solving a problem: it acts on available datasets to draw up recommendations on our behalf. As users interact with these suggestions, the next generation of data is generated for the algorithm to work on, and so on. In the process, biases creep into algorithmic systems, narrowing down the list of available choices and producing the unintended consequence of digital echo chambers that can influence or even control human behaviour. Should these algorithms be limited to serving our desires, or allowed to stake a claim to controlling our behaviour?
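This feedback loop can be sketched in a few lines of Python. The toy recommender below is an illustration of the general mechanism, not an example from the book: each round it recommends the items with the most past clicks, and the resulting click feeds back into the data, so early favourites are amplified and the pool of surfaced items narrows into an echo chamber.

```python
import random
from collections import Counter

def run_feedback_loop(catalogue, rounds=50, k=3, seed=0):
    """Toy recommender: each round, suggest the k items with the most
    past clicks, then record one simulated click among them. Early
    clicks are amplified round after round -- a simple echo chamber."""
    rng = random.Random(seed)
    clicks = Counter({item: 1 for item in catalogue})  # uniform start
    for _ in range(rounds):
        # Recommend the k most-clicked items so far.
        recommended = [item for item, _ in clicks.most_common(k)]
        # The user clicks one recommendation, generating the
        # "next generation of data" the algorithm will act on.
        clicks[rng.choice(recommended)] += 1
    return clicks

catalogue = [f"item{i}" for i in range(10)]
final = run_feedback_loop(catalogue)
# All 50 new clicks land on the initial top-k items; every other
# item keeps its starting count and is never surfaced again.
```

Even though all ten items start with identical counts, the loop never recommends anything outside the first handful it happens to rank highest, which is the narrowing of choices described above.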

A Human Guide to Machine Intelligence weighs the opportunities and challenges posed by modern algorithms to give the reader a nuanced understanding of how far they can go in serving us. In domains like healthcare and entertainment, machine intelligence clearly has a role to play; in behaviour-centric domains like recruitment and therapy, it has the potential to go rogue. The case of Microsoft's chatbot 'Tay' turning sexist and racist on social media is a case in point, as are the fatal crashes involving much-hyped self-driving cars. Despite all this, machine intelligence, with its promises and pitfalls, is here to stay. 'To discard them now would be like Stone Age humans deciding to reject the use of fire because it can be tricky to control.'

As algorithms fast transition from a decision-support role to becoming autonomous decision-makers, the question of humans leaving life entirely in the hands of a computer has refuelled the man-machine debate. Though an anathema to our craving for control, there are many significant instances where we have already let machines control our lives: autopilots have existed for a long time, as have button-controlled elevators. Research has shown that, more than control, it is trust in algorithms that is central to their acceptance. Since algorithms are seen as robotic and emotionless, the challenge before researchers is to develop trust-inducing interfaces that let mistrust, hostility, and fear melt away.

Gaining public trust

Backed by the latest developments on the subject, Hosanagar argues that transparency is the major factor in fostering trust in algorithms. Unless the tangled vines of transparency and trust are untangled, people will continue to judge machines by their limited ability to mimic our patterns of thought. It is for this reason that electronic voting machines have yet to raise public confidence in the sanctity of the ballot. Electronic voting lays bare how harnessing the power of transparency to induce greater trust in algorithms is more difficult than one might assume. The world is still far from a robust, tested protocol for algorithmic transparency, which remains the biggest stumbling block to progress.

In the post-truth era, algorithms face an even greater challenge in winning the trust of their users. The problem, as Hosanagar elaborates, is that most algorithms are created and managed by for-profit companies that protect them as highly valuable intellectual property. Were companies to make their algorithms' source code public, the scope for the systems being manipulated to serve vested interests could be endless. If Google were to publish its source code, internet companies could trick the search engine into ranking their websites higher without any concurrent improvement in their content or services. Resolving this predictability-resilience paradox is next on the agenda for increasing algorithms' social acceptability.

Mindful that algorithms are approaching human-level intelligence in processing data, and that their impact touches billions of people, Hosanagar advocates developing a set of rights, responsibilities, and regulations to negotiate the unintended consequences of algorithms, including their failures and the steps required to correct them. Without a doubt, such an initiative calls for cooperative effort between industry and government watchdogs, because the role of algorithms should be not to accentuate human biases but to curtail them.

It is in this regard that Hosanagar's proposal for an 'Algorithmic Bill of Rights' is timely in defining the boundaries of responsible machine-intelligence behaviour, because unlike in chess, for algorithms the game continues even after checkmate.

Meet the author

Kartik Hosanagar is the John C Hower Professor of Technology and Digital Business and a professor of marketing at the Wharton School of the University of Pennsylvania. His writing has appeared in Wired, Forbes and the Harvard Business Review.

The writer is an independent writer, researcher and academic
