Haven’t they already got here?

They’re here, and some of the smartest earthlings around seem to believe they will soon be taking over. According to futurists, we might already be on the march towards the ‘singularity’, the point at which the machines we’ve created become smarter than us.

Who are these smart earthlings?

Stephen Hawking, for one, who lent his name this week to an open letter being circulated among scientists, entrepreneurs and investors calling for closer monitoring of developments in artificial intelligence to ensure that it doesn’t harm human interests.

So we need to be wary of AI?

Oh, that’s another thing. If you want to be exact, you can’t call it artificial intelligence any more.

Why not?

Because ‘artificial intelligence’ suggests computers are capable of human-like thought. They aren’t, yet. So what they do is called ‘machine learning’: teaching them to sift through masses of data until the machine itself begins to spot patterns and arrive at decisions without being specifically programmed to do so.
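To make that concrete, here is a minimal sketch in Python using the scikit-learn library. The scenario, the ‘screen time’ numbers and the labels are all invented for illustration; the point is only that the rule is never written down, merely learnt from examples:

```python
# A toy illustration of machine learning: the program is shown labelled
# examples and works the pattern out itself; no rule is ever hand-coded.
# (Invented data; requires scikit-learn.)
from sklearn.tree import DecisionTreeClassifier

# Each example: [hours of daily screen time, emails sent per day]
examples = [[1, 5], [2, 8], [9, 40], [8, 35], [1, 3], [10, 50]]
labels = ["casual", "casual", "heavy", "heavy", "casual", "heavy"]

model = DecisionTreeClassifier()
model.fit(examples, labels)      # the machine spots the pattern itself

# A new, unseen user: the model decides without ever being told the rule.
print(model.predict([[7, 30]]))  # -> ['heavy']
```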

Aha! So they’re still a long way off from aping us!

Not quite. A growing field of specialisation within machine learning is ‘deep learning’, which does, in fact, attempt to replicate, and even outdo, human thinking, using layered neural networks loosely modelled on the brain.
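By way of illustration, here is a minimal sketch, again in Python with scikit-learn, of the neural-network building block: a tiny network with one hidden layer learning XOR, a pattern no single straight-line rule can capture. The configuration is illustrative only, nothing like a real deep-learning system:

```python
# A tiny neural network (the building block of deep learning) learning
# XOR, a pattern that cannot be captured by any single linear rule.
# Illustrative only; real deep learning uses far larger networks.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: output is 1 when exactly one input is 1

net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    random_state=0, max_iter=2000)
net.fit(X, y)
print(net.predict(X))  # a successfully trained net recovers [0 1 1 0]
```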

How bad can it get?

Hawking has often said that AI capable of redesigning itself could prove more dangerous than nuclear weaponry and could even end the human race, since there is no way we can compete, limited as we are by “slow biological evolution”.

But AI can also make life easy, can’t it? Just ask Siri.

Of course it can. But avoiding the pitfalls matters just as much. The question seems to be one of control, as Hawking has pointed out: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

What sort of control is this?

Take the subject of machine ethics. How does a self-driving car choose when faced with a trade-off between a small chance of minor harm to a human passenger and near-certain, expensive material damage to an inanimate object?
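To see why this is hard, here is a deliberately crude sketch in Python of the kind of expected-cost comparison a car’s planner might run. Every number here, and the very idea of pricing injury against property damage, is a hypothetical assumption; that pricing is precisely what machine ethics argues about:

```python
# A deliberately crude sketch of an expected-cost trade-off. All costs
# and probabilities are hypothetical; real planners are far richer.

def choose_action(p_injury, injury_cost, p_damage, damage_cost):
    """Return whichever action has the lower expected cost."""
    swerve = p_injury * injury_cost  # swerving risks minor harm to the passenger
    stay = p_damage * damage_cost    # staying course near-certainly hits the object
    return "swerve" if swerve < stay else "stay"

# A 5% chance of minor injury vs. a 95% chance of costly material damage.
# The answer flips entirely on how injury is priced against damage.
print(choose_action(p_injury=0.05, injury_cost=100_000,
                    p_damage=0.95, damage_cost=20_000))  # -> 'swerve'
```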

Why this sudden focus on machine learning?

The single biggest reason seems to be the advent of Big Data: data sets so large that analysing them calls for complex, automated methods. Several start-ups now focus on finding patterns in Big Data, a task machine learning can make easier. Here, the machine is taught how to make sense of the data without being programmed to look only for certain patterns, the way old-fashioned, hand-coded algorithms do. As a result, AI itself has been attracting a lot of venture capital money over the last few years.
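Here is a minimal sketch of that difference, in Python with scikit-learn: k-means clustering finding groups in data without being told what the groups are. The ‘customer’ numbers are invented; an old-fashioned program would need the segments spelled out in advance:

```python
# Unsupervised pattern discovery: k-means finds groupings in the data
# without being told what to look for. (Invented data; needs scikit-learn.)
import numpy as np
from sklearn.cluster import KMeans

# Imagine each row is a customer: [purchases per month, average spend]
data = np.array([[2, 20], [3, 25], [2, 22],
                 [20, 300], [22, 310], [19, 290]])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(groups)  # e.g. [0 0 0 1 1 1]: two customer segments, never pre-specified
```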

So once the machine learns, we won’t need to command it any longer.

Which seems to be the reason for the widespread fear. Entrepreneurs are developing AI systems that can replace white-collar workers in human-intelligence jobs such as financial analysis or writing a news report. A start-up called Vicarious has shown it can get machines to solve Captchas, the visual puzzles websites use to tell human beings from bots. As machines learn to do everything we can, the real fear may not be of getting nuked; it might be unemployment.

A weekly column that helps you ask the right questions
