We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.
In short, one member of Facebook’s own Core Data Science team and two university researchers from Cornell and UCSF got together to analyze and manipulate people’s Facebook news feed. They used software to count whether news feed items contained mainly emotionally positive or negative words. For a week, they tweaked the news feed algorithm to show fewer of these emotionally charged posts: one study group saw fewer positive items, the other fewer negative ones. People who saw fewer posts of one emotion subsequently posted fewer posts with that emotion themselves. This, the authors of the study argue, demonstrates that emotions can be transferred between people online. Others counter that the effect is so small as to be an effectively irrelevant statistical blip.
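(For concreteness: the positive/negative call was reportedly made with the LIWC word-counting software. The snippet below is a minimal sketch of how such dictionary-based counting works in principle – the word lists and labels are made up for illustration and are not the actual LIWC dictionaries.)

```python
# Minimal sketch of dictionary-based sentiment labeling, in the spirit of
# LIWC-style word counting. The word lists are illustrative stand-ins only.
import re

POSITIVE_WORDS = {"happy", "love", "great", "nice", "good"}
NEGATIVE_WORDS = {"sad", "angry", "hate", "awful", "bad"}

def classify_post(text: str) -> str:
    """Label a post by the emotion words it contains."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    has_pos = bool(words & POSITIVE_WORDS)
    has_neg = bool(words & NEGATIVE_WORDS)
    if has_pos and has_neg:
        return "mixed"
    if has_pos:
        return "positive"
    if has_neg:
        return "negative"
    return "neutral"

print(classify_post("What a great day with friends!"))  # -> positive
```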
At least in my online social circles, it’s hard to keep up with the debate, playing out (with no little irony) chiefly on Facebook discussion threads. Standard reactions are split: among scholars I observe (a) disturbance and disgust, (b) concerns over communication power, or (c) complaints that the “sexy” large Facebook data set (690,000!) led the authors to oversell a non-result, and the journal to publish a study with methodological issues. Reactions in non-scholarly circles usually veer between “mmmmm-creepy” and “what’s the fuss?”, expressed e.g. by venture capitalist Marc Andreessen:
This split reaction, I think, shows a clash of different ways the story/study is framed, which points to the larger issue of how we should frame, evaluate, and regulate private entities engaging in scientific research – and even more fundamentally, how to frame and regulate digital entrants to existing social fields. But before we get to that, for the non-academics, let’s quickly review the facts: what exactly makes academics irate about this study?
The study intentionally tried to manipulate people’s emotional states, which for academic researchers constitutes “human subject research”: researchers manipulating human participants to see how they respond. Incidents like the infamous Milgram obedience experiments of the early 1960s led the research community to regulate such experiments with specific codes of ethics and procedures, most notably the US Federal Policy for the Protection of Human Subjects (aka the “Common Rule”), which requires that all US federally funded research be reviewed in advance by a so-called Institutional Review Board (IRB) that ensures that risks to participating subjects are minimized and reasonable in relation to anticipated benefits, and that subjects have given appropriate informed consent.
From Cornell University’s own press story, we know that the Facebook study was at least partially funded by a US governmental body, the Army Research Office, and thus beholden to the Common Rule. This immediately raises three questions:
1. Has the study been reviewed and pre-approved by an IRB?
2. Have subjects given informed consent?
3. Were risks to subjects minimized and reasonable in relation to anticipated benefits?
The PNAS article on its own is not very clear on these matters. Consequently, a lot of online heat concerned these questions. So let’s review them in turn.
Update (June 30): The Cornell press story added a correction on June 29, stating:
An earlier version of this story reported that the study was funded in part by the James S. McDonnell Foundation and the Army Research Office. In fact, the study received no external funding.
That Cornell’s press office feels the need to publish this correction is interesting in its own right, but it doesn’t change the fact that the published study is still subject to Common Rule-style review: according to Cornell’s own policies, “all research activities that involve the collection of information through intervention, interaction with, or observation of individuals, or the collection or use of private information about individuals, must be evaluated to determine whether they constitute human participant research, and the type of review required before the research activities can begin”. And as David Gorski notes, the journal guidelines of the PNAS itself impose demands similar to those of the Common Rule. (BTW, the authors failed to specify in the article’s method section which institutional committee approved the experiment.) And anyhow, IRB review and informed consent are considered the normal and ethical thing to do in academic circles, no matter what your funding source or explicit policy.
1. Has the study been reviewed and pre-approved by an IRB?
According to an interview with the PNAS editor of the article, Susan Fiske, yes.
Update (June 30): According to a Forbes source, the data collection itself was only reviewed internally by Facebook. According to a later e-mail by Susan Fiske, the analysis of the data set was reviewed and pre-approved by a university IRB (see below).
2. Have subjects given informed consent?
The article itself argues that agreeing to Facebook’s use policy equals informed consent:
“Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.”
When users first sign up, they consent to Facebook’s “Data Use Policy”, which at one point in its 9,405 words states that Facebook may use people’s information “for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”
Both the editor of PNAS and the involved IRB apparently accepted this logic, based on the rationale that Facebook is already exposing its users to tweaked News Feed algorithms. Says Fiske:
“I was concerned […] until I queried the authors and they said their local institutional review board had approved it — and apparently on the grounds that Facebook apparently manipulates people’s News Feeds all the time.”
Informed consent as set out by the Common Rule involves “legally effective informed consent of the subject […] under circumstances that provide […] sufficient opportunity to consider whether or not to participate […] in language understandable to the subject”, and at least:
“(1) A statement that the study involves research, an explanation of the purposes of the research and the expected duration of the subject’s participation, a description of the procedures to be followed, and identification of any procedures which are experimental;
(2) A description of any reasonably foreseeable risks or discomforts to the subject;
(3) A description of any benefits to the subject or to others which may reasonably be expected from the research;
(4) A disclosure of appropriate alternative procedures or courses of treatment, if any, that might be advantageous to the subject;
(5) A statement describing the extent, if any, to which confidentiality of records identifying the subject will be maintained;
(6) For research involving more than minimal risk, an explanation as to whether any compensation and an explanation as to whether any medical treatments are available if injury occurs and, if so, what they consist of, or where further information may be obtained;
(7) An explanation of whom to contact for answers to pertinent questions about the research and research subjects’ rights, and whom to contact in the event of a research-related injury to the subject; and
(8) A statement that participation is voluntary, refusal to participate will involve no penalty or loss of benefits to which the subject is otherwise entitled, and the subject may discontinue participation at any time without penalty or loss of benefits to which the subject is otherwise entitled.”
Academics like James Grimmelmann think the study doesn’t meet these criteria, and I agree (and also recommend reading his full post): saying “yes” to a number of obnoxiously long and likely unread Terms of Service and Data Use Policy documents that allow any and all potential future research studies arguably does not provide understandable language and sufficient opportunity to ponder whether to participate in this specific study. Participants did not learn about the existence and purpose of the study (1), foreseeable risks (2), or benefits (3), let alone have any opportunity to refuse or discontinue participation, or be made aware of that opportunity (8). (If you say “just don’t use Facebook”, that’s arguably a significant “loss of benefits”, given the status of Facebook as a quasi-public sphere these days – people couldn’t cease participating in this experiment without also ceasing to use Facebook as a whole.)
Could the study be exempt from requiring informed consent? Remember that the paper itself claims it did acquire informed consent, so the question is moot. Had it not done so, the typical criteria for exemption are that
“(1) The research involves no more than minimal risk to the subjects;
(2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;
(3) The research could not practicably be carried out without the waiver or alteration; and
(4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.”
Even if one holds that altering the overall emotional tone of displayed news feed items constitutes only “minimal risk” (see below), that waiving consent wouldn’t adversely affect the subjects, and that the study couldn’t have been conducted any other way, this would still arguably require that participants learn after the fact that they participated in a study (4), which as far as we know did not happen.
The only other possibility for exemption I see would be that the study involved the retrospective analysis of an already-existing data set (along the lines of Fiske’s “Facebook apparently manipulates people’s News Feeds all the time”). The paper itself suggests this by stating:
“Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging. One such test is reported in this study: A test of whether posts with emotional content are more engaging.”
Thus, the study could have gotten around the informed-consent requirement if Facebook had already done the study solely on its own as part of its constant service tinkering, and the researchers from Cornell and UCSF had only come along after the fact and said: “Gee, we could use the data you generated for a nice journal paper.” But the “Author Contribution” section of the paper indicates that the university researchers were involved in designing the research before the study was performed.
Update (June 30): According to the mentioned email by Susan Fiske, this analysis of a pre-existing data set is exactly what happened: ”Their [the authors’] revision letter said their had Cornell IRB approval as a ‘pre-existing data set’ presumably from Facebook”, she writes.
This only reinforces the question of how the data set could already have existed if the very design of the study (according to the article’s author contribution section) was done by the university researchers. One can give the authors the benefit of the doubt that they designed the analysis of a data set, not how to generate it, but this needs clarification. And be that as it may, they damned themselves by expressly claiming that they were not exempt from informed consent, but obtained it via users’ agreement to Facebook’s Data Use Policy.
3. Were risks to subjects minimized and reasonable in relation to anticipated benefits?
This is a matter of debate. Given how small the measured effect was, does exposing users to a News Feed with a more positive or more negative overall sentiment present more than “minimal risk” to their well-being? The main point I have seen raised is the following: mood disorders like depression are widespread – according to the NIMH, they currently affect 9.5 percent of the adult US population. With Facebook’s vast user base, the experiment likely touched a sizeable number of people suffering from a mood disorder – constituting what the Common Rule calls “vulnerable populations”, which require extra care and safeguards. Unwittingly exposing depressive people to content with a more negative mood, with no chance to opt out, arguably might present more than minimal risk to a vulnerable population.
Update (June 30): Adam Kramer, one of the authors, responded to public reactions:
"And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it — the result was that people produced an average of one fewer emotional word, per thousand words, over the following week."
This implies they deliberately kept the intervention very small. And as Tal Yarkoni has pointed out, posting different words and actually feeling differently are two different things. Tweets like “I wonder if Facebook KILLED anyone with their emotion manipulation stunt. At their scale and with depressed people out there, it’s possible.” are certainly overwrought.
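To get a feel for how small “one fewer emotional word per thousand words” is, here is a back-of-the-envelope sketch; only the rate difference comes from Kramer’s statement, while the post length and posting frequency are assumptions made up for the illustration, not figures from the paper:

```python
# Back-of-the-envelope illustration of the effect size Kramer quotes.
# Only rate_difference is from his statement; the other inputs are assumed.
rate_difference = 1 / 1000   # one fewer emotional word per 1,000 words written
words_per_post = 100         # assumed average length of a status update
posts_per_week = 10          # assumed number of posts during the experiment week

words_written = words_per_post * posts_per_week          # 1,000 words in total
fewer_emotional_words = rate_difference * words_written  # = 1.0
print(f"~{fewer_emotional_words:.0f} fewer emotional word(s) over the whole week")
```

For a user with that (hypothetical) posting volume, the entire detected difference amounts to roughly one word over the week.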
However, whether the actual effect turned out to be negligible is not as relevant as whether the authors and IRB could reliably predict that it would be negligible. IRBs typically operate on a “better safe than sorry” precautionary principle.
In sum, the authors of the study and the Cornell IRB thought the study appropriately minimized risks and involved informed consent, and the journal editor thought it prudent to “not second-guess the relevant IRB”. The academics who are getting irate over the study disagree (and for what it’s worth, I find myself in the latter camp, mainly because the authors claim they got informed consent through people’s agreement to the Facebook Data Use Policy – though see Michelle N. Meyer’s useful analysis, which comes down in favor of the authors).
As noted, the interesting thing for me is how differently people evaluate the study. The public comments on MetaFilter are quite exemplary. On the one side, the academic camp:
“I don’t remember volunteering to participate in this. If I had tried this in graduate school, the institutional review board would have crapped bricks.” (mecran01)
On the other side, the “online business as usual” camp, arguing (like Fiske) that this is what Facebook et al. are doing all the time with/for targeted advertising anyhow:
“I am shocked. SHOCKED.
Does anyone actually pretend Facebook does anything for altruistic purposes?
Standard reminder: if something is free to you, you are not the customer. You are the product.” (dry white toast)
“The advertising algorithm is basically doing the same thing - trying to alter our mental state with regards to certain products. Now they’ve shown (probably obvious in retrospect ) that they can do it by manipulating what we see, without injecting ads into our stream. So what happens if the Republicans (or Democrats, pick your poison) throw silly money at FB in an exclusive contract to push us left or right in our voting tendencies?” (COD)
The following tweet by Chris Dixon maybe expresses this clash best:
I have participated in many a discussion with scholars half-jokingly complaining that they have to go through lengthy IRB reviews for things every private individual or company does every day without even thinking about it. As a recovering communication researcher and budding sociologist, I read this as a clashing of frames – different ways of understanding and evaluating events by ascribing them to different types of situations. Let me explain.
Academic researchers are brought up in an academic culture with certain practices and values. Early on, they learn about the ugliness of unchecked human experimentation. They are socialized into caring deeply for the well-being of their research participants. They learn that a “scientific experiment” must involve an IRB review and informed consent. So when the Facebook study was published by academic researchers in an academic journal (the PNAS) and named an “experiment”, for academics it falls into the “scientific experiment” bucket, and is therefore to be evaluated by the ethical standards they learned in academia.
Not so for everyday Internet users and Internet company employees without an academic research background. To them, the bucket of situations the Facebook study falls into is “online social networks”, specifically “targeted advertising” and/or “interface A/B testing”. These practices come with their own expectations and norms in their respective communities of practice and the public at large, which are different from those of the “scientific experiment” frame in academic communities. Presumably, because they are so young, they also come with much less clearly defined and institutionalized norms. Tweaking the algorithm of what your news feed shows is an accepted standard operating procedure in targeted advertising and A/B testing.
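For readers outside these communities of practice, here is a minimal sketch of what routine A/B bucketing looks like; it is a generic illustration with made-up names and weights, not Facebook’s actual assignment mechanism or ranking code:

```python
# Generic sketch of deterministic A/B assignment: hash a stable user ID into a
# bucket, then vary some aspect of the experience (here: made-up ranking weights).
import hashlib

def assign_bucket(user_id: str, experiment: str, n_buckets: int = 2) -> int:
    """Deterministically map a user to one of n_buckets for a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

if assign_bucket("12345", "feed_ranking_variant") == 0:
    ranking_weights = {"recency": 1.0, "affinity": 1.0}   # control experience
else:
    ranking_weights = {"recency": 1.2, "affinity": 0.9}   # variant experience
print(ranking_weights)
```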
If anything, people who frame the Facebook study as “A/B testing as usual” might be disturbed by the fact that the algorithm was tweaked to directly affect emotions, which is (still) unexpected in this frame. Hence the assumption in responses on MetaFilter and elsewhere that Facebook does so in order to create more effective ads in the end, which is an expected, usual purpose of A/B testing (insert creepy conspiracy theory about retail therapy and loneliness loops here).
Along these lines, Marc Andreessen framed “online social networks” as “communication in general”, which always has an emotion-affecting component:
Helpful hint: Whenever you watch TV, read a book, open a newspaper, or talk to another person, someone’s manipulating your emotions!
— Marc Andreessen (@pmarca) June 28, 2014
To which Martin Bryant responds on The Next Web:
Andreessen argues that TV shows, books, newspapers and indeed other people deliberately affect our emotions.
The difference, though, is that we willingly enter into these experiences knowing that we’ll be frightened by a horror movie, uplifted by a feel-good TV series, upset by a moving novel, angered by a political commentator or persuaded by a stranger. These are all situations we’re familiar with – algorithms are newer territory, and in many cases we may not know we’re even subjected to them.
We are used to emotional appeals in “face to face communication” or “advertising” frames. We are used to (and willingly expose ourselves to) emotional effects in “fictional media consumption” frames. But that “online social networks” intentionally affect emotions through algorithms is new, unusual to this frame and therefore feels “manipulative”, “creepy” to some.
These differences in framing not only show in whether (or how much) people find the study problematic, but also in what they take issue with. In online social networks, the standard “problem” is privacy and data protection: simply put, who is exposed to your personal information. In scientific experiments, just as important as privacy is what information you are exposed to (as an “experimental stimulus”) with what potential harmful effects, and whether you consented to that.
The authors of the study themselves seem to appeal to the “online social network” frame when they state that the study was ethically unproblematic because the individual news feed items were all handled by software “such that no text was seen by the researchers.” The ethical issue they address is privacy, not harmful effects. Same when a Facebook spokesperson defended the study stating “none of the data used was associated with a specific person’s Facebook account”: privacy, not harmful effects.
Similarly, most news stories and blog entries chiefly highlight the standard online social network issues of (a) communication power and (b) filter bubbles: Manipulations like these show how much power online companies like Facebook have over us, and filtering information by sentiment could keep us in a Huxleyan SNAFU bubble.
In sum, this study stirs such split reactions, heated emotions, and cognitive dissonance (how can the same A/B test be A-okay in business and bad in science?) because it presents something that mixes and breaks the frame expectations of different communities: Facebook itself, the study authors, and people like Andreessen frame it as the next iteration of A/B testing and ad targeting – “no biggie”. Others find it breaks their “online social network” frame expectations and feel “creeped out”. Academic researchers frame it as a “scientific experiment” and cry “unethical”. It’s interesting that online news media like The Atlantic and Slate picked up the academic “scientific experiment” frame to give words to the feeling that online emotion manipulation is manipulative in an uncanny way (a framing first established by Animal New York and A.V. Club). Before these two online media gave the story the “creepy” spin, most reporting was chiefly of the factual, newsy “isn’t this interesting” variety, focusing on the findings: “News feed: ‘Emotional contagion’ sweeps Facebook”, headlined the Cornell Chronicle on June 10; “Even online, emotions can be contagious”, we read in the New Scientist on June 26.
This clashing, overlapping, and breaking of frames demonstrates once more how digital networked media break down existing social barriers. For a long time, experiments manipulating people’s psychological states were only done in research institutions by academic or industry researchers socialized in academic research norms and practices. With the pervasive shift of social interaction onto digital, networked platforms, and the rise of easy A/B testing of all elements of these platforms, more and more individuals with no socialization or training in experimental practices and norms get to engage in massive de facto human subject experiments, employed by organizations (like Facebook) that do not fall under the purview of existing laws and regulations for human subject research.
In principle, this was already an issue when businesses started doing market research, or when, more recently, software companies started hiring usability engineers and user researchers. But market and user researchers were typically recruited from academic research backgrounds (e.g. in sociology, psychology, human-computer interaction), and so they brought their norms and practices with them: you’d be hard-pressed to find a “traditional” market research or usability agency that doesn’t gather some form of informed consent as part of its surveys or interviews.
With tools like Google Website Optimizer or Facebook ad campaigns, the capacity to run de facto experiments is mass-democratized to social media editors, product managers, software developers, and basically everyone who runs a website. This is new. It doesn’t match our socially shared, institutionalized frames and connected norms. And so we freak and fight.
More generally, digital networked technologies allow new entities to perform actions and assume social functions that were previously limited to a pre-defined set of existing entities: online companies become de facto research institutions, not beholden (or so they claim) to the norms, laws, and regulations of academic research. Facebook and Google become de facto news publishers, not beholden (or so they claim) to the norms, laws, and regulations of the news industry. Uber and Lyft become de facto taxi providers, not beholden (or so they claim) to the norms, laws, and regulations of the taxi industry. YouTube et al. become de facto television channels, not beholden (or so they claim) to the norms, laws, and regulations of broadcasting. Etc. etc.
The manifest risk in all these instances is that these new, digital networked entrants undermine and circumnavigate hard-won public accords enshrined in laws, regulations, and norms of communities of practice, under the ruse that “new technology” somehow means that “old rules don’t apply”.
The Facebook study is especially interesting and complicated because it was conducted by both such new entrants (Facebook) and existing actors (two university researchers). But if we take the latter two away, the more important question shines through: Do we, as a public, want companies like Facebook to be able to do large scale human subject research outside the regulatory and normative framework that academia has developed? What kind of norms and regulations do we want for new practices like A/B testing and the power it entails? How can we safeguard that large-scale, fine-grained human subject research – both by corporate entities and individuals – does not harm the individual and public good? What old rules still apply, new technology or not?
And how to effectively enforce such norms? Michelle N. Meyer thinks we should, on the contrary, loosen academic research control to enable academia to become a critical watchdog: “so long as we allow private entities freely to engage in these practices, we ought not unduly restrain academics trying to determine their effects.” Brian Keegan suggests researchers should work more, not less, with companies like Facebook if we want those companies to be “deeply embedded within and responsible to the broader research community”. But are academics really able to do so on their own, given the power asymmetries between them and Internet companies? What ethical accord we as a public want, and how to enforce it – this is the discussion we need to have.
Update (June 29)
Tal Yarkoni and Brian Keegan take a sober, de-escalating stance, observing that the reported effect is so minuscule that not only is the news reporting about “emotion manipulation” overwrought: the main contention should be that the authors are overselling a non-effect.
Second, they speak to framing as well: Tal Yarkoni does so implicitly when he contextualizes research and emotion manipulation as everyday reality:
"The reality is that Facebook–and virtually every other large company with a major web presence–is constantly conducting large controlled experiments on user behavior. […] you should probably also stop using Google, YouTube, Yahoo, Twitter, Amazon, and pretty much every other major website–because I can assure you that, in every single case, there are people out there who get paid a good salary to… yes, manipulate your emotions and behavior! […] it’s worth keeping in mind that there’s nothing intrinsically evil about the idea that large corporations might be trying to manipulate your experience and behavior. Everybody you interact with–including every one of your friends, family, and colleagues–is constantly trying to manipulate your behavior in various ways.”
I perfectly agree – and make the obvious sociological move of asking: if that is the case, why do people take offence in this case, but not in these other, day-to-day cases? I argue it is because different framings activate different frame-specific norms and values.
Brian Keegan addresses framing explicitly. Every online service these days, he states, employs A/B testing:
"Creating experiences that are “pleasing”, “intuitive”, “exciting”, “overwhelming”, or “surprising” reflects the fundamentally psychological nature of this work: every A/B test is a psych experiment.
Indeed: “why does a framing of ‘scientific research’ seem so much more problematic than contributing to ‘user experience’?” Because these are different frames contextually actualizing different norms and values.
Updates (June 30)
Wow. Lots of things can happen in a day. We’ve learned the data collection wasn’t IRB-approved, but the data analysis was as a study of “pre-existing data”. One of the authors of the study responded. Cornell corrected their news story, stating the study was not, as claimed, externally funded. Michelle N. Meyer and Zeynep Tufekci have weighed in with interesting opinions – and I’ve tried to incorporate all those in the post above.