
Lethal autonomous weapons systems that can select and engage targets do not yet exist, but they are being developed. Are the ethical and legal problems that such "killer robots" pose so fraught that their development must be banned?

Several international organizations have launched the Campaign to Stop Killer Robots to push for such a global ban, and a multilateral meeting under the Convention on Conventional Weapons was held in Geneva, Switzerland, last year to debate the technical, ethical, and legal implications of autonomous weapons. The group is scheduled to meet again in April 2015. In its 2012 report, Losing Humanity: The Case Against Killer Robots, the activist group Human Rights Watch demanded that the nations of the world "prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument." Similarly, the robotics and ethics specialists who founded the International Committee on Robot Arms Control want "a legally binding treaty to prohibit the development, testing, production and use of autonomous weapon systems in all circumstances."

At first blush, it might seem only sensible to ban remorseless automated killing machines. Who wants to encounter the Terminator on the battlefield? Proponents of a ban offer four big arguments. The first is that it is just morally wrong to delegate life-and-death decisions to machines. The second is that it will simply be impossible to instill fundamental legal and ethical principles into machines in such a way as to comply adequately with the laws of war. The third is that autonomous weapons cannot be held morally accountable for their actions. And the fourth is that, since deploying killer robots removes human soldiers from risk and reduces harm to civilians, they make war more likely.

To these objections, law professors Kenneth Anderson of American University and Matthew Waxman of Columbia respond that an outright ban "trades whatever risks autonomous weapon systems might pose in war for the real, if less visible, risk of failing to develop forms of automation that might make the use of force more precise and less harmful for civilians caught near it."

Choosing whether to kill a human being is the archetype of a moral decision. When deciding whether to pull the trigger, a soldier consults his conscience and moral precepts; a robot has no conscience or moral instincts. But does that really matter? "Moral" decision-making by machines will also occur in non-lethal contexts. Self-driving cars will have to choose what courses of action to take when a collision is imminent: for example, to protect their occupants or to minimize all casualties. But deploying autonomous vehicles could reduce the carnage of traffic accidents by as much as 90 percent. That seems like a significant moral and practical benefit. "What matters morally is the ability consistently to behave in a certain way and to a specified level of performance," argue Anderson and Waxman. War robots would be no more moral agents than self-driving cars, yet they may well offer significant benefits, such as better protecting civilians stuck in and around battle zones.

But can killer robots be expected to obey fundamental legal and ethical principles as well as human soldiers do? The Georgia Tech roboticist Ronald Arkin turns this issue on its head, arguing that lethal autonomous weapon systems "will potentially be capable of performing more ethically on the battlefield than are human soldiers." While human soldiers are moral agents possessed of consciences, they are also flawed people engaged in the most intense and unforgiving forms of aggression. Under the pressure of battle, fear, panic, rage, and vengeance can overwhelm the moral sensibilities of soldiers; the result, all too often, is an atrocity.

Not burdened with emotions, autonomous weapons would avoid the moral snares of anger and frustration. Since self-preservation would not be their foremost drive, they would refrain from firing in uncertain situations. They could objectively weigh information and avoid confirmation bias when making targeting and firing decisions. They could also evaluate information much faster and from more sources than human soldiers before responding with lethal force. And battlefield robots could impartially monitor and report the ethical behavior of all parties on the battlefield.
