Jesse Singal thinks naiveté is partly to blame for all the anger over Facebook’s emotion experiments:
On the one hand, it’s understandable to have a visceral reaction to secretly being the subject of a psychological study — yeah, there’s something creepy about this. But on the other, when you actually look at how Facebook’s news feed works, the anger is a bit of a strange response, to be honest. Facebook is always manipulating you — every time you log in. Your news feed is not some objective record of what your friends are posting that gives all of them equal “air time”; rather, it is shaped by Facebook’s algorithm in very specific ways to get you to click more so Facebook can make more money (for instance, you’ll probably see more posts from friends with whom you’ve interacted a fair amount on Facebook than someone you met once at a party and haven’t spoken with since).
So the folks who are outraged about Facebook’s complicity in this experiment seem to basically be arguing that it’s okay when Facebook manipulates their emotions to get them to click on stuff more, or for the sake of in-house experiments about how to make content “more engaging” (that is, to find out how to get them to click on stuff more), but not when that manipulation is done in service of a psychological experiment. And it’s not like Facebook was serving up users horribly graphic content in an attempt to drive them to the brink of insanity — it just tweaked which of their friends’ content (that is, people they had chosen to follow) was shown.
Michelle N. Meyer extensively explores the motivations of the researchers and comes away sympathetic to their work. One reason? Manipulation is part of life:
Even if you don’t buy that Facebook regularly manipulates users’ emotions (and recall, again, that it’s not clear that the experiment in fact did alter users’ emotions), other actors intentionally manipulate our emotions every day. Consider “fear appeals”—ads and other messages intended to shape the recipient’s behavior by making her feel a negative emotion (usually fear, but also sadness or distress). Examples include “scared straight” programs for youth warning of the dangers of alcohol, smoking, and drugs, and singer-songwriter Sarah McLachlan’s ASPCA animal cruelty donation appeal (which I cannot watch without becoming upset—YMMV—and there’s no way on earth I’m being dragged to the “emotional manipulation” that is, according to one critic, The Fault in Our Stars).
She does concede, however, that the study’s subjects should have been told they were involved rather than left to learn about it in blog posts like this one. Brian Keegan agrees with Meyer that this kind of ethical gray area is everywhere:
Somewhere deep in the fine print of every loyalty card’s terms of service or online account’s privacy policy is some language in which you consent to having this data used for “troubleshooting, data analysis, testing, research,” which is to say, you and your data can be subject to scientific observation and experimentation. Whether this consent is “informed” by the participant having a conscious understanding of implications and consequences is a very different question that I suspect few companies are prepared to defend. But why does a framing of “scientific research” seem so much more problematic than contributing to “user experience”? How is publishing the results of one A/B test worse than knowing nothing of the thousands of invisible tests? They reflect the same substantive ways of knowing “what works” through the same well-worn scientific methods.
He goes on to argue that Facebook and social scientists should be collaborating more, not less. Along those lines, Tal Yarkoni wonders if users realize how much they benefit from such research:
[B]y definition, every single change Facebook makes to the site alters the user experience, since there simply isn’t any experience to be had on Facebook that isn’t entirely constructed by Facebook. When you log onto Facebook, you’re not seeing a comprehensive list of everything your friends are doing, nor are you seeing a completely random subset of events. In the former case, you would be overwhelmed with information, and in the latter case, you’d get bored of Facebook very quickly. Instead, what you’re presented with is a carefully curated experience that is, from the outset, crafted in such a way as to create a more engaging experience (read: keeps you spending more time on the site, and coming back more often). …
[W]ithout controlled experimentation, the user experience on Facebook, Google, Twitter, etc. would probably be very, very different–and most likely less pleasant.
Alan Jacobs fumes over that post:
Yarkoni completely forgets that Facebook merely provides a platform — a valuable platform, or else it wouldn’t be so widely used — for content that is provided wholly by its users.
Of course “every single change Facebook makes to the site alters the user experience” — but all changes are not ethically or substantively the same. Some manipulations are more extensive than others; changes in user experience can be made for many different reasons, some of which are better than others. That people accept without question some changes while vigorously protesting others isn’t a sign of inconsistency, it’s a sign that they’re thinking, something that Yarkoni clearly does not want them to do.
Most people who use Facebook understand that they’ve made a deal in which they get a platform to share their lives with people they care about, while Facebook gets to monetize that information in certain restricted ways. They have every right to get upset when they feel that Facebook has unilaterally changed the deal, just as they would if they took their car to the body shop and got it back painted a different color. And in that latter case they would justifiably be upset even if the body shop pointed out that there was small print in the estimate form they signed permitting the shop to change the color of their car.
Elsewhere, law professor James Grimmelmann hopes the outrage will spark a wave of reform for how companies practice “invisible personalization”. To that end, James Poniewozik insists that it’s time for Facebook to grow up:
Suppose The New York Times, or ABC News–or TIME magazine–had tweaked the content it displayed to hundreds of thousands of users to see if certain types of posts put readers in a certain frame of mind. The outcry would be swift and furious–brainwashing! mind control! this is how the biased media learns to manipulate us! It would be decried as not just creepy but professionally unethical. And it’s hard to imagine that the publication’s leadership could survive without promising it would never happen again.
He concludes that “as one of the biggest filters through which people now receive news[,] Facebook has as much ethical obligation to deliver that experience without hidden manipulation as does a newspaper.” This chart we posted recently drives that point home.