Lethal autonomous weapons systems demand careful consideration, but nightmare scenarios of the future won’t become reality anytime soon, says a UNSW Canberra military ethicist.
The term ‘killer robots’ conjures up images of sci-fi scenarios where wars are fought by Terminator-like soldiers, but according to UNSW Canberra military ethicist Deane-Peter Baker, it’s not quite that scary or cinematic.
In fact, killer robots, or lethal autonomous weapons systems (LAWS), may actually save lives on the battlefield.
Associate Professor Baker’s latest book, Should We Ban Killer Robots?, draws from his experience on the International Panel on the Regulation of Autonomous Weapons (IPRAW) from 2017 to 2019.
IPRAW is an international network of researchers tasked with providing non-partisan guidance to the national delegations engaged in the UN debate over whether to ban or regulate LAWS.
“This book is my attempt to pull together my views on this topic, which have emerged from my time as an IPRAW panellist and subsequent policy-focused work,” A/Prof. Baker said.
He explained that there are two main arguments for banning LAWS. One focuses on the potential consequences of allowing LAWS to be used in war.
“For example, opponents are concerned that LAWS won’t be capable of operating within the boundaries of the law of armed conflict,” A/Prof. Baker said.
“The worry here is that they will use force in an indiscriminate or disproportionate manner.
“The other main type of argument is that, consequences aside, it’s simply fundamentally wrong to allow a machine to make the choice to kill a human being.”
According to A/Prof. Baker, less developed states tend to be in favour of a ban, while powerful and technologically advanced states are not particularly supportive.
“Proponents of LAWS argue that these systems can save lives in a number of ways,” he said.
“For example, there is the claim that robots can be sent to do ‘dull, dangerous and dirty’ jobs without having to risk a human soldier, sailor or aviator – far better for a machine to get destroyed than for a member of the armed forces to be killed or maimed.
“They also argue that LAWS will be less prone to using indiscriminate force, because they don’t get scared, angry or confused in the way that human combatants can in the midst of combat.”
A/Prof. Baker said there is also the argument that an international ban would not prevent malign actors from developing and using these systems. By adopting a ban and denying ourselves their use, the argument goes, we would simply hand those actors a significant operational advantage.
So, will we find ourselves in that Terminator situation any time soon?
“We’re a long way from that happening, if indeed it ever does!” A/Prof. Baker said.
“I think there’s no doubt that we will start to see more and more lethal autonomous weapons participating in wars – the UN believes we have already seen the first humans to be killed by autonomous weapons, in the Yemen conflict. But it’s my view that they will be unlikely to play much more than a supplementary role for some time to come.”
In the medium term, he said, highly sophisticated systems will be very expensive and therefore rare, while simple autonomous systems will be constrained by limited capability.
“Over the longer term we will start to see more sophisticated systems becoming more affordable and therefore more prolific, and the simpler systems will themselves become more capable,” A/Prof. Baker said.
He hopes that readers of the book will come away with a clearer understanding of the arguments that have been raised in favour of a ban on killer robots.
“Even if they don’t agree with my conclusion, hopefully their thinking will have been challenged and their views sharpened in the process.”