In the 1968 science fiction film 2001: A Space Odyssey, a supercomputer called HAL goes rogue and starts taking control of the mission. A computer run amok because it can think beyond what it has been programmed to do! This is what came to mind when I read about the engineer at Google who claimed that his company’s LaMDA system had become sentient. Sentient is a term that you normally associate with Buddhist literature, not computer engineering. But this engineer was trying to tell us that the system was an intelligent being.

That is worrisome: a computer can now pretend to be a human being. The company’s reaction was even odder; they suspended him. Was it because he was talking out of turn, or were they questioning his judgment and his ability to continue working on the project?

Be that as it may, robots and artificial intelligence (AI) have been making progress by leaps and bounds. I am clubbing the two together since their impact on jobs and our societies is similar. And the question is: Should we adjust to robots and AI, or should they adjust to us?

Different paths

Answering this question can take us down different paths. Research shows that in families that use so-called digital assistants like Alexa or Siri, children develop the commanding tone these machines require rather than more polite, interactive styles of speech. I suspect that children’s early perception of their parents as all-knowing beings takes a hit, with consequences for family norms and subsequent behaviour.

Technologists are thrilled with the growth of AI and of the robots that make use of it. Productivity goes up, and output is not affected by irascible human behaviour. With unemployment at a low 3.6 per cent in the US, and wages rising, many companies are investing in robots to meet their production goals. And we are already seeing the effects: orders for robots are said to be 40 per cent higher this year than last.

The use of robots was traditionally confined to areas of production that were extremely repetitive or considered unsafe, such as welding and painting on auto assembly lines. Nowadays, robots are being used in other industries, such as food processing and pharmaceuticals, and for more complex tasks. The machines have helped speed up processes and allow for quick schedule changes, thereby cutting time to market. Reports show that in some operations where a three-person crew once did a task, only one person is now needed, along with a robot.

Problems in the long run

That is a good deal at the micro level. But how should society handle the problem over the long run? When machines begin to take over more and more human jobs, is society prepared with minimum basic income plans to provide for its people? Labour shortages come and go, people live longer, and excessive reliance on automation will leave a lot of people surplus. And surplus people, even when their minimum needs are taken care of, don’t just sit at home; they create other problems in society.

And the people who will become surplus are not just the labourers in risky, back-breaking jobs whose work (thankfully) has been taken over by machines. They are also the programmers and mid-level managers whose jobs are being taken over by AI programs. Rio Tinto, the mining company, faced with difficulty in finding machine operators and labourers, moved into an environment of driverless vehicles and robots, all managed from remote centres. And now it finds itself competing with other industries to attract data scientists and systems engineers, and will look for AI-based ways to replace them too.

Meanwhile, the dockworkers’ union in the US has taken a stand against continuing automation. Having given in to management demands in the past, the union found jobs shrinking as mobile cranes and self-driving carriers picked up and moved containers, reducing the need for manually operated vehicles and loaders. But this is a greater issue than a labour-management dispute. Should governments take a stand and, through fiscal and other policies, direct robots and AI into areas that do the least long-term harm to society?

The writer is an emeritus professor at Suffolk University, Boston