Facebook emotion manipulation study: an explanation

facebook.com

72 points by mrmaddog 11 years ago · 101 comments

yazinsai 11 years ago

This comment on the piece, by Sean Tucker, aptly sums up my response:

Adam, I don't know you -- I came here from the Buzzfeed article criticizing the ethics of the study that linked to this post, but it appears we do have a friend in common.

I just have to ask - you honestly had a hypothesis that amounted to 'perhaps we can make people more depressed,' and decided to test it on a group that hadn't consented to the experiment, with no way to track its impact on their actual lives, only on the language they used in their Facebook posts? And you ask us to trust that this passed an internal review so it's ethical?

Please take a moment to step back and consider that. That appears to have been the train of thought that led to this.

That's appalling. Completely appalling. The Atlantic piece is right -- there's absolutely no way this passes APA deceptive research standards.

Beyond that, you'll never know what impact this actually had on depressed people. You can only measure what they posted to Facebook, which isn't a particularly meaningful or realistic indicator of their emotional state.

If this passed an internal review board, that's only proof that Facebook's internal review standards aren't what they need to be.

You're in a position of extraordinary power, with access to more subscribers than any other field study in history, a larger population than most nations, and subject only to how you review yourselves. You could deceive yourself into believing you have informed consent because everyone clicked 'accept' on the Terms of Service years ago, but there's no way even you think that's a meaningful standard.

I trust you're a reasonable person who doesn't set out to cross ethical boundaries. But on this one, I think Facebook needs to admit it did and make some changes. This study was unethical by any reasonable standard. There's nothing wrong with admitting that and figuring out a way to do better.

There's a lot wrong with going ahead with anything like this, ever again.

  • zaroth 11 years ago

    This is Facebook. It has a tremendous effect on people's lives. And that's precisely why it's worth $100B. This is nothing more than a tweak in their ranking algorithm -- much larger and even targeted/dynamic modifications occur all the time. How can a reasonable person be upset by this?

    If the study is unethical by any reasonable standard, does this amount to condemning the whole company, even the whole industry? Because if this is unethical, you are calling out an extremely widespread practice...

    • zAy0LfpBZLC8mAC 11 years ago

      How can a reasonable person not be upset about someone intentionally making people feel bad without their consent?

      $foodcompany recently mixed some small amounts of a known poison into their product to see how their customers would react. This is nothing more than a tweak in their recipe - much larger modifications occur all the time. How can a reasonable person be upset by this?

    • nl 11 years ago

      I'd normally be with you in defending Facebook here, but in this case the ethics really are questionable.

      They designed an experiment where there was a serious hypothesis that it could lead to depression in the people subject to it. That has the potential to be actually harmful.

      I still defend Facebook's right to do research, but they need to take more care to avoid harm.

      The industry as a whole doesn't perform experiments designed to depress people. There could be other unethical experiments too, but they need to be judged on a case by case basis.

    • 3rd3 11 years ago

      There is no difference between tinkering with an algorithm with the intention to negatively influence people's emotions and tinkering with an algorithm to make it work better?

      • zaroth 11 years ago

        Who is to say those two statements are at odds with one another?

        Also, almost every A/B test which results in a measurable impact on user response will have done so by positively vs. negatively impacting the users' emotions on the A/B legs of the study.

        But more to the point, this is not even a fair characterization of what Facebook did. They made a tweak to their ranking algorithm which they deployed to 1/2500 of their users. Much later they discovered there was an extremely small but measurable impact on those users' engagement, and on the types of words they included in their subsequent posts. They actually didn't know at all ahead of time what the effect would be, and in fact the effect they observed was the opposite of the widespread expectation.

r0h1n 11 years ago

Wow, what an amazingly tone-deaf post. It dismisses the Facebook users who were offended at the company's actions as simpletons who did not really understand what the study was really about. Sample this:

> Nobody's posts were "hidden," they just didn't show up on some loads of Feed.

How is hiding any different from not showing up?

> And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it.

Not what your own study claimed.

> I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused.

We are not sorry about the research, but only for "the way the paper described it".

> In hindsight, the research benefits of the paper may not have justified all of this anxiety.

In hindsight, our users are hyperventilating frogs. They should learn how to relax in the nice warm(ing) Facebook waters.

  • zaroth 11 years ago

    I am really trying to understand both sides of this. Focusing on the actual research, what about it do you think requires informed consent?

    Do you understand how widespread this kind of research is? Literally everyone does this.

    The act of publishing can't be the ethical breach -- just focus on the research, what do you think they did wrong there?

    • naterator 11 years ago

      > Do you understand how widespread this kind of research is? Literally everyone does this.

      One of the main objections I'm seeing from people (in my bubble) isn't that Facebook did this, but that Cornell, UCSF, and PNAS participated in this. Facebook can do this, and while it's unethical it's not illegal. Same goes for manipulative people in your everyday life (let me not tell you about the horrific human being of a girlfriend I once had). The point is that science and the people who purport to carry it out should be held to higher and more rigorous ethical standards. If those standards are not met, those people should be excluded from science and their findings ignored. They should not be awarded serious consideration in a journal such as PNAS. That is what is happening here as far as I can see, and while it's playing out in somewhat dramatic fashion, I think it is correct.

      Also, if I may toss my personal interpretation of the research into this... ethics aside, the study is extremely weak, and I honestly don't see how it could be published in such a "good" journal. The effect size was < 0.0001. They hand-wavingly try to explain that this is still significant given the sample size. I'm personally not convinced, at all. Sounds like they needed a positive conclusion out of the study and so they came up with a reason for one. If this landed on my desk for review I would have rejected it on that alone.
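      To see how a tiny effect can still come out "significant" at this sample size, here is a minimal sketch of a two-sample z-test with made-up numbers (not the paper's data; the arm sizes are only assumed to roughly match the study's scale):

          # Hypothetical illustration: with ~690,000 subjects, a negligible
          # difference in group means still clears the significance bar.
          import math

          n_a, n_b = 344_000, 345_000     # assumed arm sizes (made up)
          mean_a, mean_b = 5.28, 5.23     # % positive words per post (made up)
          sd = 6.0                        # pooled standard deviation (made up)

          se = sd * math.sqrt(1 / n_a + 1 / n_b)   # standard error of the difference
          z = (mean_a - mean_b) / se               # z statistic
          p = math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value
          d = (mean_a - mean_b) / sd               # Cohen's d (effect size)

          print(f"z = {z:.2f}, p = {p:.4f}, d = {d:.4f}")
          # -> z = 3.46, p = 0.0005, d = 0.0083: statistically significant, practically negligible

      That is essentially the objection: at this N, significance says almost nothing about whether the effect matters.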

      • zaroth 11 years ago

        OK, this is interesting. I don't think this is people turning their nose up at it just because [insert endowing authority here] is somehow seen to have endorsed it. Apparently someone actually did something wrong here...

        If it's the case that they should be held to a higher standard simply because it was academic research, it seems like a terribly inconsistent position to take. But if true, then FB walked right into it, and all we can do is shrug.

      • vacri 11 years ago

        Ethics committees can and do give the all-clear to experiments that have a negative impact on people, as long as the experimental procedure is generally tight, anonymous, information is well-controlled with little scope for leakage or abuse, and with a potential for a result that is solid and informative enough to be worth the inconvenience or other negative impact.

    • jamesbrownuhh 11 years ago

      Perhaps it could be summed up as "Attempting to negatively influence a visitor's mental state without their knowledge or consent IS NOT COOL."

      • polarix 11 years ago

        The assertion that this study refuted was that too much positive bias in a filtered source of information causes negative sentiment. Reducing said positive bias had heretofore unknown effects. The attempt was to learn, not to negatively influence anyone's mental state.

        • chris_wot 11 years ago

          Just like the Milgram Experiment was designed to see whether a small number of people could be coerced into believing they are torturing and killing people when so ordered? It wasn't designed to cause mental distress, but to see what happened.

          You see, that's why there are ethics committees at Universities.

      • onewaystreet 11 years ago

        So much advertising could be said to negatively influence a person's mental state. These are not simple questions with simple answers.

        • jamesbrownuhh 11 years ago

          Wholly different thing.

          Let's draw an analogy here. Let's say that Google decided to conduct an experiment on its Glass users, and that for a period of a week, Google decided to see if it could alter the mental state of its customers by using Glass to delete or diminish positive social interactions. Let's assume this research was conducted without any kind of consent or knowledge of the customers being experimented on. What's your immediate reaction to that? Still cool? Still harmless and just like advertising and A/B testing? Or, creepy and dangerous?

          Like it or not, Facebook does have a special responsibility here. It is, quite literally, the lens through which people see their world.

          • zaroth 11 years ago

            Google Glass absolutely will have to carefully rank the content that is displayed on its interface. The algorithms for such ranking are surely ripe for R&D and competitive advantage, and as such, will be constantly evolving and being tested. If this Facebook test creeps you out, there is literally nothing about Google Glass which should NOT have you running for the exits.

          • vacri 11 years ago

            onewaystreet did not say advertising was harmless; rather, the opposite. It's interesting that you think advertising is harmless, not creepy, and not dangerous.

      • garethadams 11 years ago

        Has someone told Fox News?

      • zaroth 11 years ago

        Would it be OK to just try to positively influence a visitor's mental state? What about negatively influencing your opinion about something? What about making you feel scared a particular event might happen to you? What about making you feel like you need a pick-me-up?

        ...What about enticing the maximum number of people to click on your buttons for the maximum amount of time, with no purpose but capturing their eyeballs?

        • jamesbrownuhh 11 years ago

          Most of those are quite different things, and obviously so if you think about them.

          "Do customers prefer the green button or the blue button" is harmless. Psychological testing is NOT harmless and NOT something that anyone with a website can or should do. That is medical research - and as such is strictly controlled precisely because of the real risks and genuine human cost that it can have if conducted inexpertly.

          • akerl_ 11 years ago

            > "Do customers prefer the green button or the blue button" is harmless.

            That is a psychological test.

            • jamesbrownuhh 11 years ago

              Not one which is specifically designed to alter someone's emotional mental state. The difference is quite obvious, surely.

              • zaroth 11 years ago

                Actually, no. You are putting the cart before the horse. What actually happened is that they tweaked the ranking algorithm, and then measured a minuscule effect in a particular scoring algorithm (in this case counting certain types of words used in future posts).

                So the nature of the scoring algorithm (counting emotional words) used to measure the impact of a change makes deploying the A/B test suddenly unethical?
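
                For concreteness, here is a minimal sketch of the kind of word-count scoring described above, with hypothetical word lists (the published study reportedly used the LIWC dictionaries, not these):

                    # Score a post by the fraction of "positive" and "negative" words it contains.
                    POSITIVE = {"happy", "great", "love", "awesome"}
                    NEGATIVE = {"sad", "awful", "hate", "terrible"}

                    def emotion_score(post):
                        words = [w.strip(".,!?").lower() for w in post.split()]
                        if not words:
                            return 0.0, 0.0
                        pos = sum(w in POSITIVE for w in words)
                        neg = sum(w in NEGATIVE for w in words)
                        return pos / len(words), neg / len(words)

                    print(emotion_score("Had an awesome day, love this weather!"))
                    # -> (0.2857..., 0.0): 2 of 7 words count as "positive"

                The ranking tweak changes what the feed shows; this kind of scoring only measures word frequencies in the subsequent posts, which is the "minuscule effect" being argued about.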

              • vacri 11 years ago

                Most advertising is specifically designed to alter someone's emotional mental state, and plenty of that is in a negative direction. Would you also outlaw advertising? Why should advertising get a free pass and not well-controlled psych testing? What about signs warning you not to infringe on [random local law] under threat of penalty? They create a sense of oppression. Should they be forbidden?

  • onewaystreet 11 years ago

    It's easy to say that Facebook should apologize and implement an opt-in for research, but the problem is, what's the difference between a research experiment and an A/B test?

    • zAy0LfpBZLC8mAC 11 years ago

      There isn't. But there is a difference between intentional deception and testing technical functionality in the interest of the user.

  • Istof 11 years ago

    I think the experiment is still going on as it is not possible to show all posts in chronological order at all times

gemma 11 years ago

> I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.

Ignoring the classic "I'm sorry you were freaked out" non-apology here, this response completely misses the point, and tries to re-frame the concept of informed consent into an inconsequential piece of red tape that anxious people worry about.

People were made subjects of a scientific experiment without their knowledge or consent. It doesn't matter that Facebook took steps to make the changes as small as possible; it doesn't matter that the effect on the individual subjects may have been minor; it doesn't matter that the results were interesting. It was an ethical breach, and this tone-deaf response is fairly unsettling.

  • elpool2 11 years ago

    Is it an ethical breach if I don't get informed consent from users when I do A/B testing on my website? Or is it only unethical if I publish the results afterwards? Or is it only if I'm actually trying to affect the user's mood specifically?

    • gemma 11 years ago

      If your A/B testing is intentionally inducing a negative mental state in your users without letting them know up front, I'd say that's probably an ethical breach.

      In general, A/B testing is studying the website. You're not testing us, you're testing the best ways to convince us to do stuff. On top of that, when users visit your website (which is presumably pitching something to them), they know they're being pitched. They know the website is going to try to convince them to do stuff--click over here, watch this video, sign up for foogadgetmcawesome. Same drill with advertising. Yeah, this commercial with the freaking Budweiser puppy made me cr--er, chop onions--in its attempt to get me to buy beer, but I know they're trying to sell me beer, and I knew as soon as the puppy hit the screen that I'd probably start sniffling.

      • elpool2 11 years ago

        That's a good point about the distinction between studying the website and studying people, but I'm not sure that's really the line that was crossed. If the study had asked "Does this newsfeed algorithm make people sad?" instead of "Do positive posts from friends make people sad?", would that then be ethical? It's still manipulating people the same way, and I think people would still find it creepy. What if it was the exact same algorithm, but instead they asked "Does this make people click more ads?", is it now an ethical A/B test just because the intent is different?

      • zaroth 11 years ago

        A/B tests look for user response to stimulus or change. This study certainly did nothing more. The ethics of a deliberate action can certainly turn on its purpose, but I strongly doubt this is such a case.

      • benastan 11 years ago

        I don't see the distinction. It's fine to market your product, but there is such a thing as distortion, and such a thing as a scam. Providing a fair representation of your product factors into ethics, as does Facebook fairly presenting the feed of your friends' news, with a reasonable attempt at NOT distorting it.

        • gemma 11 years ago

          I agree! I'd put unethical behavior like distortion and lying in a different category than Facebook's, but I agree. I was trying to kill the parallel between Facebook intentionally manipulating user emotions for science, and Budweiser intentionally manipulating viewer emotions for beer sales. Video games work here too: Amnesia scares the unmentionables out of me, but I'm ok with that, because I know that's the whole point.

          Another missing component here is user control. I can turn off the commercial and stop playing the game. I'm currently working (in a minor capacity) on the National Children's Study, and our participants can walk away at any time. When the Facebook subjects on the negative side started feeling slightly more sad, they had no idea why, and no clue how to stop it.

    • cududa 11 years ago

      This is where the real discussion needs to focus. These are the hard questions we need to consider

    • gammarator 11 years ago

      A/B tests are different, though: in aggregate, they should make your website better for me, the user. Here the user is just fodder for an academic's fairly trivial journal article (my friends' feelings affect my feelings!).

      Who decides what experiments social scientists get to run on 1/7th of the world's population?

      • akerl_ 11 years ago

        They only make the site better for you if you're on the "winning" side. If A/B tests made the site better for everyone being tested, they'd not be very useful. Unless you mean "they should make your website better for me [after the test is done]", in which case they have the same goal as Facebook's study.

        • gammarator 11 years ago

          That's what I said: A/B tests improve the site "in aggregate."

          The goal of this study was not to improve the site, though: it was to test a hypothesis about social psychology.

          • yincrash 11 years ago

            The goal of the study _is_ to improve the site. Facebook wants the users to return to the site, and users who become happier when using the site do that. Facebook isn't paying its data scientists to perform studies that have no meaningful use.

          • akerl_ 11 years ago

            I sincerely hope you're not making the case that testing a hypothesis for a scientific paper is not a valid reason to perform such an experiment, but improving a website is.

    • jacquesm 11 years ago

      That depends on the goal of your A/B test. I can see a ton of possibilities where A/B testing could be used for un-ethical purposes.

OrwellianChild 11 years ago

Am I alone in being completely unsurprised by, and accepting of, Facebook doing research in this way on its users? The News Feed as an interface for using FB is and always has been an evolving product under FB's control. They've been constantly messing with it, A/B testing different combinations of user/advertiser content and post/comment structure. They do this to test outcomes, and achieve goals that most likely include (and are certainly not limited to):

  - Optimizing ad targeting
  - Maximizing click-thru
  - Maximizing engagement
  - Minimizing abandonment/bounce rates

The News Feed already doesn't show you all your friends' posts and hasn't for quite some time. How they choose to "curate" what they do show is going to be dictated by their incentives/needs.

Getting outraged about any of this seems akin to getting pissed that the new season of your favorite TV show sucked...

Edited for formatting...

  • kevingadd 11 years ago

    You're not alone, but I think you do yourself a disservice by underestimating the potential negative impact of an experiment designed for the purpose of manipulating a customer's emotions. There's no defensible business objective here, and furthermore, any business objective that relies on this kind of emotional manipulation is suspect.

    Intentionally making a customer depressed is not something you fuck around with. It's incredibly dangerous, and doing it to a huge subset of your audience with no mechanisms to ensure you don't inflict real harm is utterly reckless and irresponsible.

    The individual mechanisms and approach used here for the experiment are not, in and of themselves, objectionable. Many A/B tests are wholly defensible. The end goal and process are the problem here.

    • amirmc 11 years ago

      > "... any business objective that relies on this kind of emotional manipulation is suspect."

      So all modern day advertising then, that attempts to create a connection with the viewer/reader (as opposed to presenting bare facts).

      I agree that there was a process problem from the point of view of a research study but had this just been an A/B test for engagement levels, we might never have known what Facebook did.

gammarator 11 years ago

When addressing criticism, it's not great to start "Ok so." (Or "Calm Down. Breathe. We Hear You" [1], to dig farther back in time.)

[1] http://www.facebook.com/notes/facebook/calm-down-breathe-we-...

  • baddox 11 years ago

    Do you have any thoughts about the explanation itself? I made it past the first two words, and thought the explanation was fairly reasonable and satisfying.

    • gammarator 11 years ago

      The last paragraph does not give me warm feelings about Facebook's internal controls on this kind of research:

      "While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then."

potatolicious 11 years ago

I'm going to try and avoid the minefield that is this research-vs-ethics debate.

One thought I've had is that the blowback against this incident is less about the research itself and how ethical it is, and more about perception of Facebook in general. My suspicion is a lot of the opposition at this point comes from long-simmering distrust of Facebook and the increasingly negative perception of its brand - this incident is merely the straw that broke the camel's back, for some.

And if the popular response to this revelation reflects people's general views on Facebook, it's not good for the company.

  • polarix 11 years ago

    It's also, unfortunately, deeply harmful to humanity.

    http://www.talyarkoni.org/blog/2014/06/28/in-defense-of-face...

    > by far the most likely outcome of the backlash Facebook is currently experiencing is that, in future, its leadership will be less likely to allow its data scientists to publish their findings in the scientific literature

  • mercer 11 years ago

    I'm still surprised to discover how many of my friends and acquaintances have a negative view of facebook. Many of them are not very critical of things (and care little about their privacy, for example).

    The sentiment is generally something like 'I use facebook because it is too inconvenient not to, but I don't like it', which is a far cry from the initial 'facebook is this cool new thing that I wish more of my friends would use instead of <insert usually shitty local social network>'.

    In that light it makes sense for facebook to acquire up-and-coming businesses that compete with them, directly or indirectly, and I imagine there are quite a few people at the company who worry about this situation.

    And in that light it is especially strange for facebook to release a study like this. What did they think would happen?

    There have been quite a few instances over the past months (or years) that really made me wonder whether facebook's biggest problem, as a company, is that they're stuck in a bubble. A newsfeed that seems to be made for specific types of users, privacy kerfuffles, apps that don't seem to take off, and so on.

    'Dogfooding' is generally a smart approach, but it doesn't seem like the optimal approach when your product relies on the whole world for its success...

nness 11 years ago

Not a single point in that response directly addresses why there was no informed consent in the study. There are reasons why research goes under ethical review before it is conducted, and this is certainly going to be a tough lesson for Facebook.

  • thisrod 11 years ago

    Experimenters have to tell fibs in order to discover the truth about your behavior: the only way to prevent you from thinking of the word "elephant" is to say the experiment is about cats. This is generally considered OK, provided that you won't be upset when you find out what was really going on.

    And, without the benefit of hindsight, it is hard to see why people would get upset about this experiment. I reckon Facebook users are more upset by the truth, that their "personal" feelings are so influenced by trivial aspects of their environment, than by the way Facebook demonstrated it. The truth sometimes hurts, but science isn't to blame.

    • nness 11 years ago

      That's not quite right; even in circumstances where participants are misled for other purposes, they are still informed that they are taking part in an experiment.

      The issue here is that Facebook conducted behavioural experiments on participants who were not informed that they were part of a study. It is unethical. Whilst the outcomes are tame for those involved, the sheer number of people involved and Facebook's influence and presence in everyday life make it all the more alarming that they attempted it in the first place.

    • amirmc 11 years ago

      Those people generally get to decide for themselves if they even want to be part of the experiment. They also have the right to opt-out at any given time during the experiment. Neither of those happened here.

  • onewaystreet 11 years ago

    The study was approved by an ethical review board.

    • why-el 11 years ago

      The parent is talking about subject consent, not board approval. These are two vastly different things.

      • onewaystreet 11 years ago

        >There are reasons why research goes under ethical review before it is conducted

        Did you miss that line?

        • jacquesm 11 years ago

          > Did you miss that line?

          Do you understand the word 'consent'?

          • onewaystreet 11 years ago

            The line I quoted implies that the OP thinks that the research did not go under ethical review because if it had they would have required informed consent. But it did and they did not.

            • jacquesm 11 years ago

              Ethical review by facebook is not ethical review.

              That's like asking a serial killer about his opinion of murder.

              • onewaystreet 11 years ago

                It was also reviewed by a Cornell IRB.

                • jacquesm 11 years ago

                  I'm fairly sure they will have some discussion about this there.

                  Look, mistakes were made. These could have been addressed with a suitable apology. Instead you get this bs and hiding behind 'but it was reviewed' when there is clearly a disconnect between the way this study is perceived by the subjects and the way it is perceived by the people that conducted it and those that reviewed it.

                  Maybe it is that facebook is not exactly associated with ethical conduct, maybe it is that the review boards were asleep at the switch, maybe it is that in a corporate context ethics is so far removed from the stated goals that such discussions are meaningless.

                  But for all the review that took place and all the points in time at which this study could have been halted the fact that nobody steps up and says: "we were wrong to do this" tells you more than enough about the state of ethics at facebook.

                  And I'm not at all surprised; there is a good reason I don't have an account there. But I'm still surprised at how badly they are handling the fallout from this. They're essentially making it worse, not better.

                  Do you work for facebook?

                  • onewaystreet 11 years ago

                    All I was doing was correcting the OP. You can't ask why it was approved if you think it wasn't. The reason why Facebook isn't apologizing is because they do this kind of research all the time in the form of A/B tests. It opens a can of worms for them to suggest they did something wrong.

    • nness 11 years ago

      It appears I did make a mistake: the university did conduct an ethics review.

      That said, I find it all the more concerning, since you would hope that the university in question would have more ethical clout than a corporation.

  • cududa 11 years ago

    I believe the study was highly unethical. However, in many studies participants have no idea what's being studied about them. The participant consent angle seems less ominous than the rest of it, and might be detracting from discussing the more serious ethical implications that come from optimizing user engagement and its derivative forms.

brainsareneat 11 years ago

I wonder where the line is between A/B testing and 'psychological experimentation' and when it's been crossed. Was it crossed just because it was published in PNAS? The outraged don't seem to think so.

What if I'm Amazon or Yelp, and I want to choose review snippets? Is looking for emotionally charged ones and testing to see how that impacts users wrong?

What if it's more direct psychological manipulation? What if I run a productivity app, and I want to see how giving people encouraging tips, like 'Try starting the day by doing one thing really well.' impacts their item completion rate. I'm doing psychological experimentation. I'm not getting my users' permission. But I am helping them. And it's a valid question - maybe these helpful tips actually end up hurting users. I should test this behavior, not just implement it wholesale.

It seems like Facebook had a valid question, and they didn't know what the answer was. Did they go wrong when they published it in PNAS? Or was it wrong to implement the algorithm in the first place? I don't think it was.

  • kevingadd 11 years ago

    The main indications that a line was crossed are the combination of a blatantly questionable hypothesis ('we can make people depressed or unhappy') with an intentional lack of informed consent & a huge set of test subjects.

    If you're doing medical testing in order to roll something out as a pharmaceutical for prescription use or over-the-counter sales, the tests are rolled out in stages, with initial testing being done on a very small number of patients under extreme scrutiny, and even that is only done after the medication has been vetted carefully using animal models. It's extremely important to avoid harming your test subjects.

    In comparison, they basically went full-steam on this experiment on hundreds of thousands of people despite the fact that emotional manipulation of this sort is EXTREMELY DANGEROUS. When I say extremely dangerous I mean potentially life-threatening.

natural219 11 years ago

For posterity's sake, I want to clarify what is going on here.

What's been happening, over the last five years, is that American society has become more trigger-happy in deducing "accurate" moral conclusions from following online media outlets.

I outlined some of this in cjohnson.io/2014/context, although I didn't appreciate the full power of this conclusion at the time, and so the essay mostly falls short of explaining the entirety of what's happening currently.

In a nutshell: The Web has broken down barriers between contexts that used to live in harmony, ignorant of each other. Now, as the incompatibility of these contexts come into full focus, society has no choice but to accept the fluidity of context in the information age, or tear itself apart at the seams.

All that was needed to precipitate the decline of Facebook (oh yes, Facebook is going down, short now while supplies last) was some combination of words and contexts that fully elucidate the power of online advertising / data aggregation to have real impact upon people's lives. Put in terms that the "average person" can understand, the impact of this story will be devastating. I feel so bad for the Facebook PR team -- they're simply out of their league here.

The reason this scandal will be the one we read about in the history books is because it provides the chain link between two separate, but very powerful contexts: 1, the context of Nazi-esque social experimentation, and 2, the run-of-the-mill SaaS-style marketing that has come to characterize, well, pretty much every large startup in the valley.

We've reached a point where nobody knows what is going to happen next. Best of luck, people.

smokeyj 11 years ago

> The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product.

That.. or getting users to click more ads.

Maybe ad inventory can be optimized for people in a certain emotional state, and the user's wall can be used to induce the most profitable state of mind for fb's available ad inventory. That would be awesome in an evil villain kind of way.

  • baddox 11 years ago

    > That.. or getting users to click more ads.

    Or both, with no contradictions at all.

zaroth 11 years ago

It seems to me that Facebook did something like an A/B test of their ranking algorithm, and then did a cohort study to see if there was any longer-term impact.

If what they did requires informed consent, what about when the end goal is maximizing click through rate, e.g. by inciting an emotional response?

Let's say FB finds that certain types of posts in the feed cause a nearby ad to be clicked on more. They determine this through pervasive testing of everything that you see and do on their site. They could then adjust their algorithm to account for this behavior to increase clicks/profit.

I think the actions FB takes to monetize the user base are not only more intrusive by far, they are actively searching for and exploiting these effects for profit. If informed consent for TFA is not ridiculous, then I think we have much bigger problems on our hands? What am I missing about the informed consent issue?

  • kevingadd 11 years ago

    You're not missing anything. Informed consent is important for many things currently classified as A/B tests and it isn't currently acquired by anyone. There are many existing products that rely on A/B tests to identify the best way to psychologically manipulate (if not addict) their customers (F2P games, online gambling, etc.)

gouggoug 11 years ago

For those like me who lived under a rock, here's the study: http://www.pnas.org/content/early/2014/05/29/1320040111

  • toufka 11 years ago

    In their summary there's a pretty massive gap -- nothing about their failure to obtain informed consent, or about how dangerous that might be:

    "Significance:

    We show, via a massive (N=689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues."

xenadu02 11 years ago

Simple fix:

"We're conducting a scientific study on mood for the next three months. Would you mind participating? The study won't require you to do anything special or take any further action. Y/N?"

There we go kids, easy as pie; it's called consent to participate! Pretty easy to do without spilling the beans on the study.

  • chris_wot 11 years ago

    This is a wider issue, but how do they prevent knowledge of the experiment adversely affecting the results?

akerl_ 11 years ago

Based on the other comments, it appears that the difference between kosher A/B testing and unethical experimentation with dire mental health consequences is either "Did you publish a paper when you were done" or "Are you a big, noteworthy company".

  • elpool2 11 years ago

    I'd say that (based on comments here) the real difference is intent. People seem to be suggesting that the exact same experiment, but measuring ad-clicks instead of emotional state, would be a perfectly fine A/B test. It doesn't matter that the experiment manipulated people's emotions, it matters that Facebook wanted to manipulate people's emotions.

cwal37 11 years ago

Universities and think tanks have institutional review boards for a reason. The fact that Facebook can make use of its users' data in other (business-related) avenues does not excuse them from the most basic of research ethics.

It did pass muster with Cornell's review board, but only after the data was actually collected, which amounts to ex post facto approval.

datakid 11 years ago

There's not really any excuse you can give at all, FB. I'm astounded that you didn't apologise without reservation and implement an end use agreement change that specified you wouldn't do it again.

Messing with people's mental health is outrageous.

onewaystreet 11 years ago

People calling this unethical have to explain why it is but A/B testing is not.

  • kevingadd 11 years ago

    You seem to assume that A/B testing is implicitly ethical and this is unethical. That's not the case. The reason this experiment is considered unethical is due to the specific details of the experiment and other common A/B tests could easily be considered unethical if examined using the same criteria.

    This particular experiment is getting scrutiny because they published the results in a journal and it is a rather egregious example (the hypothesis, and test methodology, both demonstrate a particular brand of recklessness you don't tend to see in website A/B testing.)

    EDIT: For reference, I know a lot about this subject because I worked extensively at IMVU, one of the first consumer-facing websites to aggressively A/B test almost everything. I don't have a luddite perspective on this.

  • baddox 11 years ago

    Not even just A/B testing. Any form of interaction which is intended to alter the mood or beliefs of any person, including visual design, advertising, marketing, etc.

    • dublinben 11 years ago

      >advertising, marketing, etc.

      I have no problem claiming that this is usually unethical.

      • baddox 11 years ago

        I strongly disagree, but fair enough, if you're consistent. I'm curious, what is your ethical reasoning behind that? Do you oppose markets, or just communication from producers to consumers in markets?

        • sooheon 11 years ago

          The active pursuit of more effective means of psychological manipulation seems to be the reason most people have a beef with marketing. Marketing is never about fully honest, fact-based "communication from producers to consumers." Distortion, concealment, and need-creation are more marketing's forte. Marketers don't say: "this is our product, buy it if you need it." They say: "here is why you want to buy our product."

  • pdkl95 11 years ago

    Asking someone's opinion of something is generally fine. They can choose not to give it to you, and they have that opportunity in advance. Your typical "focus group", for example, is obviously voluntary, because participation is usually opt-in.

    Changing a product you provide to a customer is often fine, though there could be contract or consumer-protection law or other legalities to consider. This may be true even when your product is free ("see a lawyer"). So changing your website and trying it on a handful of your audience first is also fine. Assuming no legal issues, there is a general understanding that fixes and improvements happen, and that the client can choose not to participate at any time, so that situation is probably fine as well.

    Also intent counts for a lot - "trying to find a better search tool" or other technical features are clear in what they are intending to accomplish: bugfixes and/or new features.

    The problem starts when you are trying to experiment on people directly, where the entire goal of the project is to poke at people and see how they react. After seeing where that kind of activity ended up during WW2, we decided it was a far better idea to put some precautions in place. It's annoying (and makes medical testing MUCH more expensive), but this is one of those situations where it's better to be overly cautious.

    What I don't understand about FB is that - compared to experiments with SERIOUS risk such as drug testing - the experiment was rather benign. It shouldn't have been very difficult to get a proper IRB stamp of approval. The informed consent part could (I suspect) have been handled with some web page with an overview of the experiment and an opt-in button.

    (and no, opt-in wouldn't have affected the test if you do your statistics correctly and are careful in your language on the opt-in page)

    Failure to take these relatively easy steps is unprofessional at best, and highly suspicious at worst. Acting like human experimentation is not even worthy of such protections makes me wonder what kind of person the experimenter is (stupid? or just badly narcissistic?).

    Actually doing such an experiment without the subjects' consent supplies the answer: the experimenter is both stupid and dangerously narcissistic.

    The dividing line is largely: are you trying to do things to people behind their back? Or are you including them in the decision to participate (or not participate)?

    For more specific details, see your local ethics committee.

  • napsterbr 11 years ago

    A/B testing is for commercial purposes; it's wildly different when you are testing the user's psychological state.

    • onewaystreet 11 years ago

      But a lot of A/B testing does test a user's psychological state. One could argue that doing it for commercial purposes makes it worse.

      • kevingadd 11 years ago

        That's absolutely true. The main difference here is that the experiment's negative impact was the direct aim rather than an abstract side effect, yet they still went ahead with it. The hypothesis was more or less 'we can make people depressed', without the slightest hint of defensible motives or objectives ('we want to make our ads more effective', 'we want to find the best way to convince customers that they want our product').

    • yazinsai 11 years ago

      +1. It's all about the end goal.

  • fred_durst 11 years ago

    A/B testing is more about the action of the user, as opposed to purely the emotional state of the user. Maybe I'm just not aware that it's happening, but I can't remember sitting in a meeting where someone said, "let's see which landing page can make someone depressed."

alx 11 years ago

Does someone remember the name of the NSA program concerning psychological manipulation on social networks?
