The distant future is creeping up on us faster than we expected. Virtual reality, self-driving cars and private space missions have all made the leap from science fiction to everyday life. In an era where real wars are fought using video game controllers and a headset can create a very realistic simulation of a war in your living room, it is wise not to bet against what a well-funded R&D team can achieve.

South Korean company DoDAAM has developed a machine gun turret called the Super aEgis II, which is capable of finding, tracking and ‘neutralising’ potential threats, after delivering a warning. The warning was an afterthought, added at the request of concerned clients.

Unlike the current version of the turret, which requires a human to authorise lethal force, the original was capable of eliminating targets without any such intervention. All the help it required was to be pointed in the right direction. The Super aEgis II is intended for deployment along the Korean demilitarised zone, where it will join Samsung Techwin’s machine gun-wielding SGR-A1 which has been in use since 2010.

Ethical questions

Weapons such as these are increasingly easy to build thanks to the rapid progress in Artificial Intelligence. However, we are no closer to solving the ethical questions that spew forth when you let a machine kill at will.

Last month, an open letter presented at a conference on Artificial Intelligence delivered a stark warning about the potentially disastrous consequences of the use of robots in warfare. Describing autonomous weapon systems as “the third revolution in warfare, after gunpowder and nuclear arms,” it called for a complete ban. The first signatures on the letter belonged to three instantly recognisable men — Stephen Hawking, Elon Musk and Steve Wozniak.

The letter, which stresses the need to “maximise the societal benefit of AI” while “avoiding potential pitfalls”, was also signed by thousands of other researchers, academics, scientists, entrepreneurs, philosophers and engineers working on Artificial Intelligence, in addition to a number of concerned citizens.

But how close are we really to a world where robots patrol our streets and stand at our gates? The answer to that question requires a basic grounding in Artificial Intelligence.

The term robot instantly conjures up images of a humanoid machine capable of human-level intelligence, also known as Artificial General Intelligence (AGI). We are still at least a few decades away from AGI — the kind that can clean your house and play chess and solve differential equations and, ultimately, perform almost any other task a human can.

Beyond AGI lies the murky territory of Artificial Superintelligence (ASI) — machines that are smarter than humans. The idea is that once AGI is achieved, a computer can then be entrusted with the task of recursively improving itself.

And the ruthlessly efficient, eternally untiring silicon mind of a computer would kick the pace of AI research into overdrive, which means the transition from playing chess to playing God could occur — in relative terms — almost overnight.

Far, near

The prospect of ASI is ominous, but God-like computers are at least half a century away, according to a survey of leading AI researchers conducted by philosophers Nick Bostrom and Vincent Müller. Autonomous weapon systems, though, are a reality today. We have already perfected Weak AI, or Artificial Narrow Intelligence (ANI), which specialises in performing a specific task. The Roomba is a robot that can clean your house, Deep Blue can beat Garry Kasparov at chess and the Super aEgis II can defend a border from uninvited guests.

Human Rights Watch (HRW) issued a report in 2012 titled ‘Losing Humanity’ in which it warned that “robots with complete autonomy would be incapable of meeting international humanitarian law standards.” The report explains how unmanned weapon systems would result in a situation where the “burden of war would shift from combatants to civilians caught in the crossfire.”

In 2013, the issue was raised at the United Nations Human Rights Council after Christof Heyns, a UN Special Rapporteur, warned that “tireless war machines, ready for deployment at the push of a button, pose the danger of permanent armed conflict.”

Over the last couple of years, countries have debated the issue through the UN Convention on Certain Conventional Weapons, which has in the past restricted the use of napalm and blinding lasers. But progress has been slow, with delegates still trying to agree on a shared definition of autonomous weapons.

The US deploys robotic systems in theatres around the world in various capacities from surveillance and mine sweeping to interception and access denial. It has in recent years “spent approximately $6 billion a year on the research and development, procurement, operations, and maintenance of unmanned systems for war.”

And robotic warfare expert Peter Singer suggests the Predators and Reapers in use in the AfPak region are merely the first generation of such weapon systems — “equivalent to Ford’s Model T or the Wright Brothers’ Flyer”. The US military establishment is facing long-term budget cuts for the first time in decades, and going by the Department of Defense’s latest Unmanned Systems Roadmap, this only seems to have sharpened the desire for automation, given the efficiency gains and cost savings it promises.

A directive from the US Department of Defense, issued in the wake of the HRW report, mandated that “autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgement over the use of force.”

However, such directives are rather hard to swallow given the thousands of civilians killed by drones that were originally intended for surveillance purposes only. As Singer notes, “The human is certainly part of the decision making but mainly in the initial programming of the robot.”

Check bad research

At a UN meeting called to discuss Lethal Autonomous Weapons Systems in April this year, Michael W Meier articulated the US position as “neither encouraging nor prohibiting the development of such systems”.

Which is exactly the kind of diplomatic pillow talk you deploy when you’re trying to gain a massive tactical advantage on the battlefield. The fact that most of the other major powers, including Russia, China and the EU, offered similarly non-committal responses only serves to strengthen the American position.

A large slice of US foreign policy since World War II has been dedicated to stuffing the nuclear genie back into the bottle. If unmanned weapons research continues as is, worldwide proliferation is all but guaranteed, especially since the technology to reproduce these systems is trivial compared to getting a nuclear fission reaction going.

A ban on autonomous weapons, as proposed by the open letter, the HRW report and numerous other concerned parties, would take the edge off research into such weapons and prevent the creation of massive arsenals that could very easily fall into the wrong hands. After all, while the global effort to snuff out chemical and biological weapons may not have met with total success, no one can argue that the world is not a better place because of it.

Unfortunately, we are a significant terrorist attack away from taking that conversation seriously. Until then, we face the very real prospect of robots becoming evil long before they become sentient.
