Can You Teach A Robot Right From Wrong?

by Jonah Shepp

The Office of Naval Research is spending $7.5 million to find out:

“Even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we’ve seen before,” Paul Bello, director of the cognitive science program at the Office of Naval Research, told Defense One. “For example, Google’s self-driving cars are legal and in-use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake.”

The United States military prohibits lethal fully autonomous robots. And semi-autonomous robots can’t “select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator,” even in the event that contact with the operator is cut off, according to a 2012 Department of Defense policy directive. “Even if such systems aren’t armed, they may still be forced to make moral decisions,” Bello said.

Since the robotic future of warfare has, to some extent, already arrived, and the danger of getting it wrong is so great, this seems worth the money to me. But Peter Suderman doesn’t see how an ethical military robot is possible:

Obviously Asimov’s Three Laws wouldn’t work on a machine designed to kill. Would any moral or ethical system? It seems plausible that you could build in rules that work basically like the safety functions of many machines today, in which specific conditions result in safety behaviors or shutdown orders. But it’s hard to imagine, say, an attack drone with an ethical system that allows it to make decisions about right and wrong in a battlefield context.

What would that even look like? Programming problems aside, the moral calculus involved in [waging] war is too murky and too widely disputed to install in a machine. You can’t even get people to come to any sort of agreement on the morality of using drones for targeted killing today, when they are almost entirely human controlled. An artificial intelligence designed to do the same thing would just muddy the moral waters even further. Indeed, it’s hard to imagine even a non-lethal military robot with a meaningful moral mental system, especially if we’re pushing into the realm of artificial intelligence.

Meghan Neal entertains the argument that killer robots might actually be more ethical than human soldiers:

For one, killer bots won’t be hindered by trying not to die, and will have all kinds of superhero-esque capabilities we can program into machines. But the more salient point is that lethal robots could actually be more “humane” than humans in combat because of the distinctly human quality the mechanical warfighters lack: emotions.

Without judgment clouded by fear, rage, revenge, and the horrors of war that toy with the human psyche, an intelligent machine could avoid emotion-driven error, and limit the atrocities humans have committed in wartime over and over through history, [roboethicist Ronald] Arkin argues. “I believe that simply being human is the weakest point in the kill chain, i.e., our biology works against us,” Arkin wrote in a paper titled “Lethal Autonomous Systems and the Plight of the Non-combatant.”

But, of course, as Zack Beauchamp points out, that same lack of emotion prevents a robot from disobeying orders to commit an atrocity:

Charli Carpenter, a political scientist at the University of Massachusetts-Amherst, makes a compelling argument that robots could commit war crimes — because war crimes, contrary to what we might prefer to believe, are often not committed by rogue soldiers as crimes of passion but as deliberate tools of terror engineered by top commanders. In the Bosnian War, for example, Bosnian Serb soldiers were ordered by their commanders to use rape as a tool of terror, and soldiers who refused were threatened with castration.

Robots, unlike people, always do what they’re told. Carpenter’s point is that human-rights-abusing governments could program robot warriors to do whatever they’d want, and they’d do it, without compunction or thought. If the reality of wartime atrocities is that they tend to be intentional, not crimes of passion, then that’s a huge count in favor of banning military robots today.

Filip Spagnoli engages both sides of the moral dilemma:

It’s true that robots can be programmed to kill indiscriminately or to kill all brown people. But history is full of human commanders giving exactly the same kind of orders. If robots are programmed in immoral ways, then that’s an easier problem to solve than the prejudices or emotional failures of scores of individual soldiers and commanders. Of course we’ll have to monitor the people who will program the robots. But is this more difficult than monitoring the immoral orders of human leaders? Obviously it’s not. It’s true that monitoring will be easier in democracies, but if dictators want killer robots there’s not a lot we can do to stop them or to convince them to use robots in an ethical manner.