Killing Machines

Nov 27 2012 @ 8:44pm

Last week, Human Rights Watch warned about "fully autonomous weapons, which would inherently lack human qualities that provide legal and non-legal checks on the killing of civilians." Spencer Ackerman sizes up the military's current technological limitations:

It’s reasonable to worry that advancements in robot autonomy are going to slowly push flesh-and-blood troops out of the role of deciding who to kill. To be sure, military autonomous systems aren’t nearly there yet. No Predator, for instance, can fire its Hellfire missile without a human directing it. But the military is dipping its toe into murkier ethical and operational waters: The Navy’s experimental X-47B prototype will soon be able to land on an aircraft carrier with the barest of human directions. That’s still a long way from deciding on its own to release its weapons. But this is how a very deadly slope can slip.

Gary Marcus contemplates machine morality more generally:

An all-powerful computer that was programmed to maximize human pleasure, for example, might consign us all to an intravenous dopamine drip; an automated car that aimed to minimize harm would never leave the driveway. Almost any easy solution that one might imagine leads to some variation or another on the Sorcerer’s Apprentice, a genie that’s given us what we’ve asked for, rather than what we truly desire. A tiny cadre of brave-hearted souls at Oxford, Yale, and the Berkeley, California, Singularity Institute are working on these problems, but the annual amount of money being spent on developing machine morality is tiny.