Sounds like you’ve found religion…

Not I, but Ke Jie, the world’s top-ranked player of the Chinese board game Go, or weiqi, who was defeated last week by AlphaGo, an artificial intelligence (AI) program devised by Google’s subsidiary DeepMind Technologies. Until last year, he says, AlphaGo’s game was human-like; this year, it played “like the God of Go.”

That’s high praise.

Yes, and more generally, futurists see AlphaGo’s victory as a milestone in the evolution of AI: the moment it became God-like in the way it harnesses machine learning techniques and mimics human creativity, rendering humans redundant in the incremental learning process.

That sounds hyperbolic.

Not really. Go is arguably one of the most complex board games. On its 19x19 grid, there are about a sexquinquagintillion possible positions: that’s 1 followed by 171 zeroes. (In the European long-scale system, that ginormous number is called an octovigintilliard.) In many ways, Go is more complex than chess, which is played on an 8x8 board; no wonder the supercomputer Deep Blue beat chess wizard Garry Kasparov as far back as 1997. And, of course, computers long ago mastered ‘noughts and crosses’, checkers and even Atari games.
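For a rough sense of that scale, here is a hypothetical back-of-the-envelope calculation in Python (not from the column itself): each of the 361 intersections on a 19x19 board can be empty, black or white, which gives a crude upper bound on the number of board configurations. The true count of legal positions is somewhat smaller, but it sits in the same astronomical range as the figure cited above.

```python
# Back-of-the-envelope upper bound on Go board configurations:
# each of the 361 intersections on a 19x19 grid can be empty,
# hold a black stone, or hold a white stone.
import math

intersections = 19 * 19           # 361 points
upper_bound = 3 ** intersections  # raw configurations, ignoring the rules of the game

print(f"3^{intersections} has {len(str(upper_bound))} digits")
print(f"roughly 10^{math.floor(math.log10(upper_bound))}")

# For comparison, chess is played on an 8x8 board with far fewer
# plausible positions, which is one reason brute-force search
# carried Deep Blue so much further, so much earlier.
```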

But machines could always compute faster than humans.

Yes, but more important, mastery in Go requires intuition and creativity, not just calculating prowess. And AlphaGo’s architecture rests on deep neural networks, which loosely mimic the human brain and nervous system. That means AlphaGo adapts to the game in real time and improves by ‘learning from itself’. And that, to some, feels like a ‘Frankenstein moment’.
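The ‘learning from itself’ idea can be sketched in miniature. The toy below is emphatically not AlphaGo (no deep network, no tree search); it is a hypothetical few lines of Python in which a player gets better at a tiny take-away game purely by playing against itself and updating its value estimates from the outcomes.

```python
# A toy illustration of learning by self-play: an agent improves by
# repeatedly playing against itself and nudging its value estimates
# towards the results. This is the core idea only, not AlphaGo's design.
import random
from collections import defaultdict

PILE, TAKES = 10, (1, 2, 3)    # tiny Nim: last player to take a stone wins
value = defaultdict(float)     # value[pile] ~ chance the player to move wins

def choose(pile, epsilon):
    """Pick a move: mostly the one leaving the opponent the worst pile."""
    moves = [t for t in TAKES if t <= pile]
    if random.random() < epsilon:
        return random.choice(moves)
    return min(moves, key=lambda t: value[pile - t])

def self_play_episode(epsilon=0.2, lr=0.1):
    pile, states = PILE, []
    while pile > 0:
        states.append(pile)
        pile -= choose(pile, epsilon)
    # the player who moved last won; alternate rewards back through the game
    reward = 1.0
    for s in reversed(states):
        value[s] += lr * (reward - value[s])
        reward = 1.0 - reward

for _ in range(20_000):
    self_play_episode()

# After training, piles that are multiples of 4 (the theoretically losing
# positions for the player to move) should show noticeably lower values.
for pile in range(1, PILE + 1):
    print(pile, round(value[pile], 2))
```

No human tells the program which piles are good or bad; the pattern emerges from self-play alone, which is the sense in which such systems leave humans out of the incremental learning loop.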

They must be technophobe Luddites.

Not unless you think of Stephen Hawking, Bill Gates and Elon Musk as Luddites. Hawking says he fears AI “could spell the end of the human race”. Gates says he is “concerned about super intelligence.” And Musk thinks of AI as “our biggest existential threat.”

But surely it’s not all doom and gloom?

Not at all. The machine learning algorithm that underlies AlphaGo will be channelled into real-world problems such as complex disease analysis and climate modelling. And as Ajay Agrawal, Joshua Gans and Avi Goldfarb, at the University of Toronto, point out in an HBR paper, ‘The Simple Economics of Machine Intelligence’, as machine intelligence improves, the value of human prediction skills will decrease, but the value of human judgement skills will increase. In fact, scholars see the evolution of AI rewriting some foundational economic theories as well.

How so?

Well, economic theory has long held that humans make rational decisions from the options they’ve been given. Building on Adam Smith’s theory about the “self-interest” motivations of “individual agents”, economists conceptualised the idea of homo economicus, or the “economic man” who is “perfectly rational”.

But humans aren’t always rational, are they?

As David C Parkes at Harvard and Michael P Wellman at the University of Michigan posit in a 2015 paper in Science on ‘Economic Reasoning and Artificial Intelligence’, homo economicus is a mythical construct. But AI research is drawn to the same rationality concepts, and strives to construct machina economicus, the perfectly rational machine.

Is that dream realisable?

The authors conclude that “absolutely perfect rationality” may be unachievable with finite computational resources. But that final frontier may yet be breached some day — and, perhaps, faster than we foresee today.

A weekly column that helps you ask the right questions
