Alex Mayyasi flags some fascinating research on how we interact with computers:
[Professors Clifford Nass and Youngme Moon of Stanford and Harvard] found that people were too polite to give honest feedback in the form of an on-screen evaluation to a mediocrely helpful computer. But when they evaluated the computer’s helpfulness on another computer, people proved as forthcoming as students privately complaining about their terrible teacher. And just as we will generally go to greater lengths to help people who have helped us, Nass and Moon found that participants asked to “help” a computer match a color palette with human perception spent much more time doing so with computers that had provided helpful search responses than with computers that had returned bad search results.
We don’t treat computers exactly like humans. We don’t cry when they die (usually), and we don’t excuse ourselves when we abruptly leave our computers to grab a coffee. But to a surprising extent, we do apply the rules and expectations of the social world to our interactions with computers.