The ethics of a study in which Facebook manipulated users’ emotions through targeted filtering of their News Feeds are being called into question.
For a single week in January 2012, Facebook manipulated its News Feed algorithm for roughly 700,000 users, adjusting the amount of positive or negative language appearing in their feeds. The company’s data team then examined more than 3 million of those users’ subsequent posts to see whether they tended positive or negative.
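The paper does not publish Facebook’s filtering code, but the general mechanism it describes, scoring posts for emotional language and probabilistically omitting posts of one tone from a user’s feed, can be illustrated with a minimal Python sketch. Everything here (the word lists, function names, and omission probability) is a hypothetical stand-in; the actual study used the LIWC2007 word-count software to classify posts.

```python
import random

# Tiny stand-in word lists. The actual study used the LIWC2007
# dictionaries, which contain hundreds of emotion words each.
POSITIVE_WORDS = {"happy", "love", "great", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "hate", "awful", "terrible", "hurt"}

def classify_post(text: str) -> str:
    """Label a post positive, negative, or neutral by simple word matching."""
    words = set(text.lower().split())
    if words & POSITIVE_WORDS:
        return "positive"
    if words & NEGATIVE_WORDS:
        return "negative"
    return "neutral"

def filter_feed(posts: list[str], suppress: str, omit_prob: float = 0.5) -> list[str]:
    """Build a feed in which posts of the targeted emotional tone are
    probabilistically omitted. (The study reportedly gave each emotional
    post a 10%-90% chance of omission, varied per user.)"""
    return [
        post for post in posts
        if classify_post(post) != suppress or random.random() >= omit_prob
    ]

feed = [
    "I love this wonderful day!",
    "Feeling sad and hurt today.",
    "Meeting moved to 3pm.",
]
print(filter_feed(feed, suppress="negative"))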
The study set out to ascertain the impact of textual cues on emotions. The resulting paper concluded, “These results suggest that the emotions expressed by friends, via online social networks, influence our own moods.”
The researchers, Adam Kramer of Facebook; Jamie Guillory of the University of California, San Francisco; and Jeffrey Hancock of Cornell University, published their results this month in the Proceedings of the National Academy of Sciences.
Criticism of the study is based on the fact that the users were unaware they were being studied and had not given their consent. Studies conducted by universities and other institutions that receive federal funding must answer to institutional review boards (IRBs), which in turn derive their judgments from ethical standards such as the Common Rule. One of the pillars of the Common Rule is informed consent: subjects must agree to be included in an experiment.
Susan Fiske, the Princeton University psychology professor who edited the study for publication, acknowledged to The Atlantic, “People are supposed to be, under most circumstances, told that they’re going to be participants in research and then agree to it and have the option not to agree to it without penalty.” She admitted to some worries that the study may have crossed a line, saying, “I was concerned, until I queried the authors and they said their local institutional review board had approved it – and apparently on the grounds that Facebook apparently manipulates people’s News Feeds all the time… I understand why people have concerns. I think their beef is with Facebook, really, not the research.”
Facebook’s privacy policy, which every user agrees to upon signing up, allows the company to conduct such studies.
Jamie Guillory declined to elaborate to The Atlantic, saying that Facebook preferred to handle questions about the study. A Facebook spokesman did respond to The Atlantic’s inquiries, stating, “We carefully consider what research we do and have a strong internal review process. There is no unnecessary collection of people’s data in connection with these research initiatives, and all data is stored securely.”
Fiske echoed that defense, noting that unlike universities and federal agencies, Facebook is not financially supported by the federal government. She said:
A lot of the regulation of research ethics hinges on government supported research, and of course Facebook’s research is not government supported, so they’re not obligated by any laws or regulations to abide by the standards. But I have to say that many universities and research institutions and even for-profit companies use the Common Rule as a guideline anyway. It’s voluntary. You could imagine if you were a drug company, you’d want to be able to say you’d done the research ethically because the backlash would be just huge otherwise.
Criticism of Facebook has been predicated not only on the manipulation itself but also on the sense that the company’s attitude toward the work is disturbingly callous.
Fiske admitted:
I think part of what’s disturbing for some people about this particular research is you think of your News Feed as something personal. I had not seen before, personally, something in which the researchers had the cooperation of Facebook to manipulate people… Who knows what other research they’re doing.
Yet she has mixed feelings about the study itself, calling the research “inventive and useful.” She concluded:
I don’t think the originality of the research should be lost. So, I think it’s an open ethical question. It’s ethically okay from the regulations perspective, but ethics are kind of social decisions. There’s not an absolute answer. And so the level of outrage that appears to be happening suggests that maybe it shouldn’t have been done… I’m still thinking about it and I’m a little creeped out, too.