Colin Allen thinks [NYT] we need to consider it:
Isn't ethical governance for machines just problem-solving within constraints? If there’s fuzziness about the nature of those constraints, isn’t that a philosophical problem, not an engineering one? Besides, why look to human ethics to provide a gold standard for machines? My response is that if engineers leave it to philosophers to come up with theories that they can implement, they will have a long wait, but if philosophers leave it to engineers to implement something workable they will likely be disappointed by the outcome.
The challenge is to reconcile these two rather different ways of approaching the world, to yield better understanding of how interactions among people and contexts enable us, sometimes, to steer a reasonable course through the competing demands of our moral niche. The different kinds of rigor provided by philosophers and engineers are both needed to inform the construction of machines that, when embedded in well-designed systems of human-machine interaction, produce morally reasonable decisions even in situations where Asimov’s laws would produce deadlock.
Torie Bosch critiques Allen's example of criminals using autonomous cars as getaway vehicles:
Must any human interested in driving reveal not only the destination, but the plans at that destination, to her autonomous car, in case the driverless car might be unwittingly involved with a criminal scheme? The time between the release of a new technology and its adoption for malfeasance is historically short. Allen makes a strong case for building moral decision-making systems into AI and robotics. But with too many checks, such systems could potentially hobble new technologies as well.
Over the summer, Adam Keiper and Ari Schulman pondered other conundrums for friendly and "moral" machines:
Suppose one person holds a gun to the head of another, and his finger is squeezing the trigger. An armed robot is observing and has only a split second to act, with no technical solution available other than shooting the gunman. Either action or inaction will violate [theorist Eliezer] Yudkowsky’s principle of friendliness. One can easily imagine how the problem fundamentally shifts as one learns more about the situation: Suppose the gunman is a police officer; suppose the gunman claims that the intended victim is an imminent threat to others; suppose the intended victim is a scientist known to be a genius, who claims to have found the cure for cancer but has not yet shared the solution and has clearly gone mad; and so forth ad infinitum…. At the heart of the quest to create perfected moral beings is this blindness to the fact that dilemmas and hard choices are inherent to the lives of moral beings.