6/30/2014

Facebook deliberately made people sad. This ought to be the final straw




Manipulating users' emotions is not the act of a socially responsible company, and we should not tolerate it.

For one week in January 2012, Facebook deliberately made about 155,000 people sad, just to see if it could.


Stated that bluntly, it's not hard to see why the company's study, which was published in the prestigious PNAS journal on 17 June, has elicited such a strong negative reaction.


To be fair to the firm, it also made a similar number of people happy. The study involved examining Facebook profiles for evidence of "emotional contagion", trying to find out whether being exposed to specific emotions evokes that mood in readers. For some, that meant as many as 90% of all "positive" posts were removed from their news feed for a week, rendering the social network a pit of despair. Even more so than normal.


Adam Kramer, the Facebook data scientist who led the research, explained the company's motivations in a subsequent post on his Facebook account. "We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out," he wrote. "At the same time, we were concerned that exposure to friends' negativity might lead people to avoid visiting Facebook."


Facebook's official response to the anger was odd, and strangely tangential to the matter at hand. "This research was conducted for a single week in 2012 and none of the data used was associated with a specific person's Facebook account," it said in a statement to Forbes' Kashmir Hill. "There is no unnecessary collection of people's data in connection with these research initiatives and all data is stored securely." But, for once, the discomfort with what Facebook is doing isn't about its ongoing war with Google over who can know the most about our lives.


Instead, it's about … well, what? It's weirdly hard to put a finger on what, specifically, is upsetting about the study. For some, particularly fellow researchers, the issue is the failure to garner informed consent from the nearly 700,000 subjects.


But the issue of consent also doesn't quite explain why we're comfortable with some kinds of research being carried out on us without our knowledge, but not others. Like almost every major tech firm, Facebook practises A/B testing, a design technique that involves changing the site for some proportion of its visitors in order to gauge their responses. Google famously A/B tests nearly every aspect of its product, right down to which shade of blue works best for adverts. There's no consent given in these cases, yet criticism is rarely voiced. So what's the material difference here?


Part of it may lie in the company's motivations. At least when a multinational company, one which knows everything about us and controls the very means of communication with our loved ones, acts to maximise its profit, it's predictable. There's something altogether unsettling about the possibility that Facebook experiments on its users out of little more than curiosity.


The issue also lies in what was being manipulated. In most A/B tests, the dependent variable (the outcome the study measures, and hopes to affect) is something like click rates, or time on page. In this case, it was the emotions of the users.


That's already creepy, but given the size of the study, it's near-certain that some of the people involved suffered from, say, depression. Deliberately attempting to worsen the emotional state of people with mental illness is not typically considered an OK thing to do. And when you look to the future, it gets even scarier.


The Facebook news feed is already parcelled up to advertisers in terrifyingly small demographic portions. Want to sell only to gay women between the ages of 45 and 49 who have liked the Oxford Conservative Association page? You can. Facebook's research raises the spectre of a company paying to advertise only to people already in a bad mood, or even paying Facebook to lift users' moods before its adverts appear.


And yet, even manipulating emotions gets a pass in other situations. From TV news to political speeches and, of course, advertisements, organisations have been trying to do it for years, and largely succeeding.


I think what we're feeling, as we contemplate being rats in Facebook's lab, is the deep unease that what we think of as little more than a communications medium is actually something far more powerful. Facebook has so far presented an image of neutrality, of having no agenda or viewpoint, so successfully that we think of it as something similar to Royal Mail or BT. Even before the study was published, that edifice was crumbling: your mail doesn't get binned if the postman thinks it's boring, but posts vanish from your news feed whenever Facebook's algorithm decides they should.


But with this study, Facebook has inadvertently highlighted, to a mostly uncaring community, just how actively it interferes. Up until now, it has mostly shown off the times when it has done so with beneficent aims: promoting organ donation, or encouraging people to vote. Now, that mask has dropped.


For me, enough was enough even before this weekend. I quit the social network at the end of February, tired of a service that seemed increasingly oriented away from making it easy to talk to friends and family, towards maximising eyeballs on adverts. My hope is that now, others will be prompted to join me. But each time more evidence surfaces of the network's fundamental amorality, and each time outrage flares up only to be forgotten a week later, that hope sinks a little further.
