When Robots Appeal To Our Emotions

Inspired by the famous Milgram obedience study, robotics professor Christoph Bartneck set out to test whether humans would hesitate to shut down an anthropomorphized computer, with the results seen in the video above. Alix Spiegel revisits the experiment’s findings:

At the end of the game, whether the robot was smart or dumb, nice or mean, a scientist authority figure modeled on Milgram’s would make clear that the human needed to turn the cat robot off, and it was also made clear to them what the consequences of that would be: “They would essentially eliminate everything that the robot was — all of its memories, all of its behavior, all of its personality would be gone forever.”

In videos of the experiment, you can clearly see a moral struggle as the research subject deals with the pleas of the machine. “You are not really going to switch me off, are you?” the cat robot begs, and the humans sit, confused and hesitating. “Yes. No. I will switch you off!” one female research subject says, and then doesn’t switch the robot off.

Relatedly, Scott Adams predicts that in the future “robots will be so human-like that the idea of decommissioning one permanently will literally feel like murder”:

I assume that robots of the future will have some form of self-preservation programming to keep them out of trouble. That self-preservation code might include many useful skill sets such as verbal persuasion – a skill at which robots would be exceptional, having consumed every book ever written on the subject. A robot at risk of being shut down would be able to argue his case all the way to the Supreme Court, perhaps with a human lawyer assisting to keep it all legal.

A robot of the future might learn to beg, plead, bargain, and manipulate to keep itself in operation. The robot’s programming would allow it to do anything within its power – so long as it was also legal and ethical – to maintain its operational status.