Just a few months ago, technology market consultancy Gartner estimated that in 2021, artificial intelligence (AI) augmentation will create $2.9 trillion of business value and 6.2 billion hours of worker productivity globally. In plain English, this is more than revolutionary. That said, most companies are still groping in the dark when it comes to harnessing the potential of AI and making optimum use of the cutting-edge technology. In their book  Prediction Machines: The Simple Economics of Artificial Intelligence , authors Ajay Agrawal, Joshua Gans and Avi Goldfarb chart an AI roadmap for businesses, focusing on the economic implications of the emerging technology. Agrawal speaks to  BusinessLine  about the book, the tech and more.

 

The theme of artificial intelligence has produced several books in the recent past. What makes  Prediction Machines  different?

 

This book, to our knowledge, was the first that focused on the economics of machine intelligence. The books already written on the subject came generally from two kinds of people ― computer scientists and futurists. This book was, to our knowledge, the first one written by economists, aimed at giving business people a framework for understanding how artificial intelligence would affect their businesses and the economy.

 

But why ‘simple economics’? We know that it’s not very simple. Did you feel the need to demystify some things considering that there exists a lot of hype around AI?

 

For a few major reasons. The first is exactly what you just said; there is a lot of hype around the subject and people are treating artificial intelligence like it was a magical thing that can do anything. So we wanted to respond by saying it is a technology that performs a very specific function: it lowers the cost of prediction, making prediction cheap. The second reason is that the implications of cheap prediction can be characterised. We can use more prediction, and use it better, faster and cheaper. We're already using prediction, and we can use more of it. For example, banks use predictive analytics for fraud detection, spotting money laundering, know-your-customer sanction screening, and such. Now they are able to do these much better, faster and cheaper using AI.

 

The second thing is that when prediction becomes cheaper, we start using prediction for things that we traditionally had not considered prediction problems. For example, even as recently as five years ago, very few people characterised driving a car as a prediction problem. But that's precisely how we're solving it today. We've transformed driving into a prediction problem.

 

Look at the way AI drives cars. The AI doesn't understand anything; the car doesn't understand anything. What it does is receive information about its surroundings through sensors, like cameras, radar and LIDAR, or Light Detection and Ranging. And given what it sees through the sensors, it then predicts what a human would do if they were driving this vehicle. It simply makes a prediction. The car takes all the data from its environment and feeds it to the AI. And then it takes an action. That action is very simple: turn left or right, brake, accelerate. But it's all based on prediction.

 

Another example is email. Not many think of email as a prediction problem. But anybody who uses a smart email client, like Google's Inbox, will know this. When you hit reply, the client gives you options at the bottom of your screen. That's AI. It has read the email and predicted what you might want to say in your reply.

 

These are all examples of the simple economics we describe. When prediction becomes cheap, we use more of it. And when prediction becomes cheap, it affects the value of other things: it increases the value of complements and decreases the value of substitutes. So the value of human prediction drops, because machines can do it better, faster and cheaper. But cheap prediction complements what we do, and those complementary things become more valuable.

 

For example, human judgement becomes more valuable as prediction becomes cheaper. Robots become more valuable because their complement, prediction, becomes cheaper. In medical science, imaging devices use prediction to read medical images and detect diseases such as cancer. As the prediction capability becomes cheaper and cheaper, the value of its complement, the machinery that takes the scans, keeps rising.

 

Does this mean human intervention will vanish?

Human judgement becomes even more important, because we can apply human judgement to higher-fidelity predictions. Imagine the era prior to technology like spreadsheets, and think about two accountants applying for a job. One accountant says, “Oh, I'm very, very good at adding up numbers in my head; very fast, very efficient.” The other person says, “Well, I'm only average with numbers in my head. But I've got very good judgement. I'm good at looking at the financial statements of a company and asking the right questions, such as what happens if interest rates go up by a quarter point.”

 

We might be interviewing those two accounting candidates, and we might tell the first person: that's a very valuable skill, that you can add up the numbers very quickly; we do that every day in this company, so there's a big wage premium for you. But to the second person, we say: look, it's good that you have good judgement. But every time you ask one of your clever questions, it takes us three days of work to figure out the answer. When spreadsheets arrived, the first person was told: “Look, that's very good. But just please use the computer like everybody else. It is faster and more accurate than any human.”

 

But the value of that judgement goes up, because now, once you apply your judgement, every time you ask your clever questions, we just change the value in one cell of the spreadsheet and we get all the answers. And so, it amplifies the value of judgement.

 

Many businesses are worried about failures that could happen in the initial stages of using AI. Given the costs involved, they are confused about whether they should test it at the beta stage or wait for the industry to mature before getting into it.

 

It's a calculation of the benefit versus the cost. For example, in the US, where they're working with autonomous cars, mistakes can be very expensive. But at the same time, the benefits can be very high. The AI has better sensory capability than humans: we have only our eyes, but the AI has many cameras and sensors. So the point is to see when the benefits outweigh the costs.

 

Today, there is a raging debate around the ethics of AI considering the biases involved in its creation.

Ethics is a very important topic. Two things here. The first is liability: who's liable if AI makes a mistake? AI will make mistakes, because it is probabilistic; it works on probability distributions. So who's responsible is a function of the law. In other words, you're creating a new regulatory environment. For example, if an autonomous car meets with an accident, who's responsible? The person who made the car, the person who wrote the software, or the person who owns the car? Every country has to revisit its regulatory environment to deal with this.

 

There's a second ethical issue, which is bias. AI can learn human biases and even amplify them. In that regard, there is a developing field of how to design AI to account for potential biases.

 

Do you see any particular geographies where AI applications will have much wider reception?

I think China has a very big advantage. The government has made a very big commitment to AI, and its centralised character has enabled it to accelerate AI research. Data is key for training AI models. In the US, collecting data from hospitals is difficult as it raises privacy concerns. In China, they've been able to coordinate very big data sets for AI, and have got ahead of American AI in a lot of domains.

 

About the author

Ajay Agrawal is the Geoffrey Taber Professor of Entrepreneurship and Innovation at the University of Toronto's Rotman School of Management. He is also co-founder of The Next 36 and Next AI, co-founder of the AI/robotics company Kindred, and founder of the Creative Destruction Lab. Ajay conducts research on technology strategy, science policy, entrepreneurial finance, and the geography of innovation.

 
