Killer Robot Drones are Like Drugs: Regulate, but Resist the Urge to Ban Them (Op-Ed)
This article was originally published at The Conversation. The publication contributed the article to Live Science's Expert Voices: Op-Ed & Insights.
BAE Systems has revealed that it has successfully test-flown Taranis, its prototype unmanned aerial vehicle.
The test has some people understandably hot under the collar. But while there is much to debate in the detail, the answer to the biggest question of all, whether or not we should ban such drones, is unequivocal. We should not. As with effective but dangerous drugs, the answer is not to ban them but to subject their development to rigorous testing and regulation.
BAE’s video footage shows a sleek boomerang-shaped blade cruising sedately over the Australian outback. Taranis is a stealth aircraft, designed to evade radar. It is pilotless, meaning it can manoeuvre in ways that would cause a human to black out if they were on board. And crucially, it’s a step on the way to drones that can make autonomous targeting decisions. More bluntly, it’s a step towards killer robots taking to the sky.
It’s not difficult to see why the idea of killer robots causes alarm. Some worry that these machines won’t be able to distinguish reliably between soldiers and civilians and will end up killing innocents. Others imagine Terminator-style wars between robots and people.
Philosophers get in on the act too, arguing that enabling machines to decide whom to kill is a fundamental breach of the conditions of just war, because it is unclear who should be held responsible when things go wrong and a drone kills the wrong targets. It can't be the dumb robot. Nor can it be the soldier who sends it into battle, who decides only whether to use it, not what it will do. And it can't be the designers, because the whole point is that they have created a system able to make autonomous choices about what to target.
This last worry, though, is smoke and mirrors. The anti-killer-robot campaigners are right when they say that now is the time to debate whether this technology is forbidden fruit, better for all if left untouched. They are also right to worry about whether killer robots will observe the laws of war. There is no question that killer robots should not be deployed unless they observe those laws with at least the same (sadly inconsistent) reliability as human soldiers. But there is no mystery about how we will achieve that reliability, and with it resolve how to ascribe moral responsibility.
There is an analogy here with medicines. Their effects are generally predictable, but a risk of unpleasant side-effects remains. So we cautiously test new drugs during development and only then license them for prescription. When a drug is prescribed in accordance with the guidelines, we don't hold doctors, drug companies, or the drug itself to account for any bad side-effects that occur. Rather, the body that approves the medicine is responsible for ensuring overall beneficial outcomes.
So too with killer robots. What we need is a thorough regulatory process. This will test their capabilities and allow them to be deployed only when they reliably observe the laws of war.
Tom Simpson does not work for, consult for, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.
The views expressed are those of the author and do not necessarily reflect the views of the publisher.