Absent the kind of action campaigners are calling for, it is only a matter of time before fully autonomous armed robots are a lethal fixture in modern war zones and beyond.
“Giving machines the power to decide who lives and dies on the battlefield would take technology too far,” says Steve Goose, Arms Division director at Human Rights Watch, which is coordinating the campaign and released a 50-page report, “Losing Humanity: The Case Against Killer Robots,” last November. “Human control of robotic warfare is essential to minimizing civilian deaths and injuries.”
Imagine if, rather than electing human representatives to Congress, we simply installed several hundred robots in their seats. Through a series of algorithms these HouseBots would simply translate opinion polls into votes on legislation. For those who are convinced that their elected officials become “tainted” as soon as they enter the Beltway, this might be a tempting prospect. We have replaced many factory workers with robots and waiters with food delivery apps. Why not outsource politics – and killing – to a machine?
But our intuition tells us there is something wrong with this idea. We want our leaders to be thinking subjects, people with moral fiber who challenge us rather than transmitting our worst conglomerated prejudices into legislation. Lawmaking draws on the nuances of tradition and precedent and requires politicians to weigh complex ethical dilemmas. It depends on people reasoning with each other, finding compromise and consensus.
Once laws are passed, their enforcement and adjudication depend equally on human beings. We are uncomfortable with the idea of automating justice. I would not trust an app to issue Supreme Court opinions. Habeas corpus requires a corpus, and a trial by one’s peers requires human peers.
Our most important deliberations as a human community cannot be made on autopilot, and the decision to kill another human being cannot be left to a machine. The norms, wisdom and custom that underlie the laws of war do not translate neatly into binary code; they require human moral reasoning and judgment.
A few military and political voices have reacted to the prospect of a killer robot future with passive, shoulder-shrugging fatalism. “It’s inevitable,” they say. But this misunderstands the social dimensions of technology. Societies have often decided against using particular technologies, and we have developed strong international norms against chemical, biological and nuclear weapons, blinding lasers, landmines and cluster munitions.
Late last year, the Department of Defense recognized the need for caution in deploying killer robots in Directive 3000.09. But the directive, the first policy issued by any government in the world on this subject, is full of loopholes and allows the development and use of fully autonomous “nonlethal” armed robots.
“This policy shows that the United States shares our concern that fully autonomous weapons could endanger civilians in many ways,” said Goose in a recent statement on the DoD policy, “but it clearly leaves the door open to future acquisition and use of lethal fully autonomous weapons.”
No matter how “responsibly” the US uses them, once autonomous armed robots are developed it will be easy for others to copy and deploy them. We already see the proliferation of drones among pariah states and extremist armed groups.
U.S. lawmakers must act quickly to protect us from the threat of killer robot proliferation by urging the president to transform a strengthened and loophole-free version of the DoD policy into national law. The president should use the policy to seek a comprehensive international ban on these weapons before it is too late.
Bolton is assistant professor of Political Science at Pace University in New York City. He is a member of the International Committee for Robot Arms Control and has worked in over a dozen countries, including in Iraq, Afghanistan, Bosnia, Sudan and South Sudan, as an aid worker, journalist and researcher.