
New data-driven journalism ventures such as FiveThirtyEight have faced criticism from many quarters, particularly around the naïveté of assuming credentialed experts can be bowled over by quantitative analysis as easily as the terrifyingly innumerate pundits who infest our political media [1,2,3,4]. While I find these critiques persuasive, I depart from them here to argue instead that I have found this "new" brand of data journalism disappointing foremost because it wants to perform science without abiding by scientific norms.

The questions of demarcating what is or is not science are fraught, so let's instead label my gripe a "failure to be open." By openness, I don't mean users commenting on articles or publishing whistleblowers' documents. I mean "openness" more in the sense of "open source software," where the code is made freely available for everyone to inspect, copy, modify, and redistribute. But the principles of open-source software trace their roots more directly back to norms in the scientific community that Robert Merton identified, which came to be known as the "CUDOS" norms. It's worth reviewing two of these norms, because Punk-ass Data Journalism is very much on the lawn of Old Man Science and therein lie possibilities for exciting adventures.

The first and last elements of Merton's "CUDOS" norms merit special attention for our discussion of openness. Communalism is the norm that scientific results are shared and become part of a commons that others can build upon --- this is the bit about "standing upon the shoulders of giants." Skepticism is the norm that claims must be subject to organized scrutiny by the community --- which typically manifests as peer review. Both of these norms strongly motivated the philosophies of the open source movement, and while they are practiced imperfectly in my experience within the social and information sciences (see my colleagues' recent work on the "Parable of Google Flu"), I nevertheless think data journalists should strive to make them part of their own practice as well.

  1. Data journalists should be open in making their data and analysis available to all comers. This flies in the face of traditions and professional anxieties surrounding autonomy, scooping, and the protection of sources. But how can claims be evaluated as true unless they can be inspected? If I ask a data journalist for her data or code, is she bound by the same norms as a scientist to share it? Where and how should journalists share and document this code and data?

  2. Data journalists should be open in soliciting and publishing feedback. Sure, journalists are used to clearing their story with an editor, but have they solicited an expert's evaluation of their claims? How willing are they to publish critiques of, commentary on, or revisions to their findings? If not, what are the venues for these discussions? How should a reporter or editor manage such a system?

The Guardian's DataBlog and ProPublica have each been doing exemplary work in posting their datasets, code, and other tools for several years. Other organizations like the Sunlight Foundation develop outstanding tools to aid reporters and activists, the Knight Foundation has been funding exciting projects around journalism innovation for years, and the Data Journalism Handbook reviews other excellent cases as well. My former colleague, Professor Richard Gordon at Medill, reminded me that ideas around "computer-assisted reporting" have been in circulation in the outer orbits of journalism for decades. For example, Philip Meyer has been (what we would now call) evangelizing since the 1970s for "precision journalism," in which journalists adopt the tools and methods of the social and behavioral sciences as well as their norms of sharing data and replicating research. Actually, if you stop reading now and promise to read his 2011 Hedy Lamarr Lecture, I won't even be mad.

The remainder of this post is an attempt to demonstrate some ideas of what an "open collaboration" model for data journalism might look like. To that end, this article tries to do many things for many audiences, which admittedly makes it hard for any single person to read. Let me try to sketch some of these out now and send you off on the right path.

  • First, I use an article Walt Hickey of FiveThirtyEight published on the relationship between the financial performance of films and the extent to which they grant their female characters substantive roles as a case to illustrate some pitfalls in both the practice and interpretation of statistical data. This is a story about having good questions, ambiguous models, wrong inferences, and exciting opportunities for investigation going forward. If you don't care for code or statistics, you can start reading at "The Hook" and stop after "The Clip" below.
  • Second, for those readers who are willing to pay what one might call the "Iron Price of Data Journalism", I go "soup to nuts" and attempt to replicate Hickey's findings. I document all the steps I took to crawl and analyze this data to illustrate the need for better documentation of analyses and methods. This level of documentation may be excessive or it may yet be insufficient for others to replicate my own findings. But providing this code and data may expose flaws in my technical style (almost certainly), shortcomings in my interpretations (likely), and errors in my data and modeling (hopefully not). I actively invite this feedback via email, tweets, comments, or pull requests and hope to learn from it. I wish new data journalism enterprises adopted the same openness and tentativeness in their empirical claims. You should start reading at "Start Your Kernels..."
  • Third, I want to experiment with styles for analyzing and narrating findings that make both available in the same document. The hope is that motivated users can find the detail and skimmers can learn something new or relevant while being confident they can come back and dive in deeper if they wish. Does it make sense to have the story up front and the analysis "below the fold" or to mix narrative with analysis? How much background should I presume or provide about different analytical techniques? How much time do I need to spend on tweaking a visualization? Are there better libraries or platforms for serving the needs of mixed audiences? This is a meta point as we're in it now, but it'll crop up in the conclusion.
  • Fourth, I want to experiment with technologies for supporting collaboration in data journalism by adopting best practices from open collaborations in free software, Wikipedia, and elsewhere. For example, this blog post is not written in a traditional content-management system like WordPress, but as an interactive "notebook" that you can download and execute to verify that the code works. Furthermore, I'm also hosting the data on GitHub so that others can easily access the writeup, code, and data, see how they've changed over time (and have they ever...), and suggest changes that I should incorporate. These can be frustrating tools with demoralizing learning curves, but they are incredibly powerful once mastered. Moreover, there are amazing resources and communities that exist to support newcomers, and new tools are being released to flatten these learning curves. If data journalists joined data scientists and data analysts in sharing their work, it would contribute to an incredible knowledge commons of examples and cases that lowers the bar for others who want to learn. This is also a meta point since it exists outside of this story, but I'll come back to it in the conclusion.

In this outro to a very unusual introduction, I want to thank Professor Gordon from above, Professor Deen Freelon, Nathan Matias, and Alex Leavitt for their invaluable feedback on earlier drafts of this... post? article? piece? notebook?

Walt Hickey published an article on April 1 on FiveThirtyEight, titled The Dollar-And-Cents Case Against Hollywood’s Exclusion of Women. The article examines the relationship between movies' finances and their portrayals of women using a well-known heuristic called the Bechdel test. The test has three simple requirements: a movie passes the Bechdel test if there are (1) two women in it, (2) who talk to each other, (3) about something besides a man.

Let me say at the outset, I like this article: it identifies a troubling problem, asks important questions, identifies appropriate data, and brings in relevant voices to speak to these issues. I should also include the disclaimer that I am not an expert in the area of empirical film studies like Dean Keith Simonton or Nick Redfern. I've invested a good amount of time in criticizing the methods and findings of this article, but to Hickey's credit, I haven't come across any scholarship that has attempted to quantify this relationship before: this is new knowledge about the world. Crucially, it speaks to empirical scholarship showing that films with award-winning female roles are significantly less likely to win awards themselves [5], that older women are less likely to win awards [6], that actresses' earnings peak 17 years earlier than actors' earnings [7], and that male and female critics rate films differently [8]. I have qualms about the methods, and others may be justified in complaining that it overlooks related scholarship like the work cited above, but this article is in the best traditions of journalism that focuses our attention on problems we should address as a society.

Hickey's article makes two central claims:

  1. We found that the median budget of movies that passed the test...was substantially lower than the median budget of all films in the sample.
  2. We found evidence that films that feature meaningful interactions between women may in fact have a better return on investment, overall, than films that don’t.

I call Claim 1 the "Budgets Differ" finding and Claim 2 the "Earnings Differ" finding. The claims, as they're summarized here, are relatively straightforward to test: is there an effect of Bechdel scores on earnings and budgets after controlling for other explanatory variables?

But before I even get to running the numbers, I want to examine the claims Hickey made in the article. His interpretation of the return-on-investment results is a particularly problematic reading of basic statistics. Hickey reports the following findings from his models (emphasis added).

We did a statistical analysis of films to test two claims: first, that films that pass the Bechdel test — featuring women in stronger roles — see a lower return on investment, and second, that they see lower gross profits. We found no evidence to support either claim.

On the first test, we ran a regression to find out if passing the Bechdel test corresponded to lower return on investment. Controlling for the movie’s budget, which has a negative and significant relationship to a film’s return on investment, passing the Bechdel test had no effect on the film’s return on investment. In other words, adding women to a film’s cast didn’t hurt its investors’ returns, contrary to what Hollywood investors seem to believe.

The total median gross return on investment for a film that passed the Bechdel test was $2.68 for each dollar spent. The total median gross return on investment for films that failed was only $2.45 for each dollar spent.

...On the second test, we ran a regression to find out if passing the Bechdel test corresponded to having lower gross profits — domestic and international. Also controlling for the movie’s budget, which has a positive and significant relationship to a film’s gross profits, once again passing the Bechdel test did not have any effect on a film’s gross profits.

Both models (whatever their faults, and there are some, as we will explore in the next section) apparently produce an estimate that the Bechdel test has no effect on a film's financial performance. That is to say, the statistical test could not determine with greater than 95% confidence that the correlation between these two variables was greater than or less than 0. Because we cannot confidently rule out the possibility of there being zero effect, we cannot make any claims about its direction.
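To make that "no effect" logic concrete, here is a minimal sketch using statsmodels on purely synthetic data (my own illustration, not Hickey's model or data): when the true effect is zero, the estimated coefficient's 95% confidence interval straddles zero, and that is all a "no significant effect" result tells us.

    import numpy as np
    import statsmodels.api as sm

    # Purely synthetic illustration: an outcome with NO true relationship to the
    # predictor, to show what a "no effect" result looks like in practice.
    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = 2.0 + rng.normal(size=500)   # y does not depend on x at all

    result = sm.OLS(y, sm.add_constant(x)).fit()
    coef = result.params[1]
    low, high = result.conf_int(alpha=0.05)[1]

    # The 95% confidence interval straddles zero, so we cannot claim the effect is
    # positive OR negative -- only that we failed to detect one.
    print(f"coef={coef:.3f}, 95% CI=({low:.3f}, {high:.3f}), p={result.pvalues[1]:.3f}")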

Hickey argues that passing the test "didn't hurt its investors' returns," which is to say there was no significant negative relationship, but neither was there a significant positive relationship: the model provides no evidence of a positive correlation between Bechdel scores and financial performance. However, Hickey switches gears and, in the conclusion, writes:

...our data demonstrates that films containing meaningful interactions between women do better at the box office than movies that don’t...

I don't know what analysis supports this interpretation. The analysis Hickey just performed, again taking the findings at face value, concluded that "passing the Bechdel test did not have any effect on a film’s gross profits," not that "passing the Bechdel test increased the film's profits." While Bayesians will cavil about frequentist assumptions --- as they are wont to do --- and the absence of evidence is not evidence of absence, the "Earnings Differ" finding is not empirically supported under any appropriate interpretation of the analysis. The appropriate conclusion from Hickey's analysis is "there is no relationship between the Bechdel test and financial performance," which he makes... then ignores.

What to make of this analysis? In the next section, I summarize the findings of my own analysis of the same data. In the subsequent sections, I attempt to replicate the findings of this article, and in so doing, highlight the perils of reporting statistical findings without adhering to scientific norms.

I tried to retrieve and re-analyze the data that Hickey described in his article, but came to some conclusions that were the same, others that were very different, and still others that I hope are new.

Without knowing the precise methods used, but making reasonable assumptions about what was done, I was able to replicate some of his findings but not others, because specific decisions had to be made about the data or modeling that dramatically change the results of the statistical models. The article provides no specifics, however, so we're left to wonder when and where these findings hold, which points to the need for openness in sharing data and code. Specifically, while Hickey found that women's representation in movies had no significant relationship with revenue, I found a positive and significant relationship.

But the questions and hypotheses Hickey posed about systematic biases in Hollywood were also the right ones. With a reanalysis using different methods as well as adding in new data, I found statistically significant differences in popular ratings also exist. These differences persist after controlling for each other and in the face of other potential explanations about differences arising because of genres, MPAA ratings, time, and other effects.

In the image below, we see that movies that have non-trivial women's roles get 24% lower budgets, make 55% more revenue, get better reviews from critics, and face harsher criticism from IMDB users. Bars that are faded out mean my models are less confident about these findings being non-random (higher p-values) while bars that are darker mean my models are more confident that this is a significant finding (lower p-values).

Movies passing the Bechdel test (the red bars):

  • ...receive budgets that are 24% smaller

  • ...make 55% more revenue

  • ...are awarded 1.8 more Metacritic points by professional reviewers

  • ...are awarded 0.12 fewer stars by IMDB's amateur reviewers

Now that we have data in hand about movies, their financial performance, and their performance on the Bechdel test from the data sources that Hickey described in the article, we can attempt to replicate the specific variables used. Because the article does not make clear which data or definitions were used for constructing these variables, we're left to infer what exactly Hickey did.

The article claims to use two specific outcomes: return on investment and gross profits, which are different ways of accounting for the relationship between income and costs. I don't claim to be an accounting or business expert, but "gross profit" is traditionally defined as income minus costs ($P = I - C$) while "return on investment" (RoI) is traditionally defined as profit divided by assets ($ROI = P/A$). Thus we need at least three variables: income, costs, and assets.

However, only two of these are available in the movie financial data we obtained from The-Numbers.com: income and costs. We can calculate profit easily, but it's unclear what a movie's assets are here unless we use costs again. Hickey may have done something different to construct this variable, but we don't know, so we can't replicate it.
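Under those constraints, here is how I fall back on constructing the two outcomes: profit as revenue minus budget, and ROI as profit over budget, since budget is the only cost-like or asset-like figure available. A minimal sketch, assuming the scraped data has been merged into a pandas DataFrame (the file and column names are my stand-ins, not anything specified in Hickey's article):

    import pandas as pd

    # Hypothetical merged dataset: one row per film with inflation-adjusted Budget
    # and domestic Revenue from The-Numbers.com plus a 0-3 Bechdel "rating" score.
    movies = pd.read_csv("bechdel_movies.csv")

    # Gross profit is income minus costs; with no separate asset figure available,
    # return on investment falls back to profit divided by budget.
    movies["Profit"] = movies["Revenue"] - movies["Budget"]
    movies["ROI"] = movies["Profit"] / movies["Budget"]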

In order to get a consistent look at budget data, we’re going to focus on films released from 1990 to 2013, since the data has significantly more depth since then... We rely on median, as opposed to average, budgets to minimize the effect of outliers, or in this case, huge blockbusters whose budgets are orders of magnitude larger than the typical movie’s budget. We ran a statistical test analyzing the inflation-adjusted median budgets of films, and found that films passing the Bechdel test had a median budget that was 16 percent lower than the median budget of all films in the set.

With skewed data, choosing medians doesn't always help, so I'll log-transform the data instead to enforce more normal distributions. We can quantitatively test this hypothesis by regressing budget against these categories. Indeed, the -0.2504 estimate for C(rating)[T.3.0] confirms the finding that movies passing all 3 Bechdel dimensions have significantly lower budgets than movies that pass none of the Bechdel dimensions. The model predicts that movies that fail every Bechdel dimension have budgets of $13.6 million on average while movies that pass every Bechdel dimension have budgets of $10.6 million on average --- a $3.0 million, or 22%, difference.
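A sketch of that model with statsmodels' formula interface, reusing the hypothetical movies DataFrame from above (the C(rating)[T.3.0] label assumes the rating column is stored as floats, as in the estimate quoted here):

    import numpy as np
    import statsmodels.formula.api as smf

    # Regress (natural-)log budgets on the Bechdel rating treated as a categorical
    # variable, so rating == 0 (fails every dimension) is the baseline category.
    budget_model = smf.ols("np.log(Budget) ~ C(rating)", data=movies).fit()
    print(budget_model.summary())

    # The C(rating)[T.3.0] coefficient is the log-budget gap between films passing
    # all three Bechdel dimensions and films passing none; exponentiating turns it
    # into a multiplicative difference (exp(-0.2504) is roughly a 22% smaller budget).
    coef = budget_model.params["C(rating)[T.3.0]"]
    print(f"Films passing all three criteria: budgets about {1 - np.exp(coef):.0%} smaller")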

But this simple model does a poor job explaining the distribution of data -- it explains 2.3% of the variance. Furthermore, we can also chalk this difference up to a number of other explanations as well: the Bechdel test may be standing in for differences in other variables such as genre, year, time of year, and composition of the cast and crew, among many possibilities. We'll explore some of these later.

It looks like movies satisfying 2 or more of the Bechdel criteria provide much better ROI, but we'll need to run the model to see if these differences are significant.
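The comparison I'm describing is just a group-by on the derived ROI column, again on the hypothetical movies frame from above:

    # Median return on investment by Bechdel rating (0 = fails every dimension,
    # 3 = passes all three); medians blunt the influence of blockbuster outliers.
    print(movies.groupby("rating")["ROI"].median())
    print(movies.groupby("rating")["ROI"].describe())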

...we ran a regression to find out if passing the Bechdel test corresponded to lower return on investment. Controlling for the movie’s budget, which has a negative and significant relationship to a film’s return on investment, passing the Bechdel test had no effect on the film’s return on investment.

Our model replicates the same findings described in the article: negative and significant relationship with budget and no significant relationship with ratings. Thus, what appears to be higher ROIs actually isn't a significant difference.

However, it's important to note that we could have estimated this model differently by coding the rating as a continuous variable or not logging the budget. The article isn't clear whether it does this, and while estimating such a "bad" model (ROI ~ rating + Budget) doesn't change the significance or direction of the findings, it does produce a much poorer model (explaining only 2% of the variance, versus the 12% below).
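Concretely, the two specifications discussed above look something like this (my own reconstruction; the article does not spell out its exact model):

    import numpy as np
    import statsmodels.formula.api as smf

    # Preferred specification: categorical Bechdel rating, log-transformed budget.
    roi_model = smf.ols("ROI ~ C(rating) + np.log(Budget)", data=movies).fit()

    # Cruder alternative: continuous rating, unlogged budget. The direction and
    # significance of the estimates match, but the fit is much worse.
    roi_model_crude = smf.ols("ROI ~ rating + Budget", data=movies).fit()

    print(f"R-squared, categorical/logged model:  {roi_model.rsquared:.3f}")
    print(f"R-squared, continuous/unlogged model: {roi_model_crude.rsquared:.3f}")
    print(roi_model.summary())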

The article ran a second regression model using gross profits as an outcome.

we ran a regression to find out if passing the Bechdel test corresponded to having lower gross profits — domestic and international. Also controlling for the movie’s budget, which has a positive and significant relationship to a film’s gross profits, once again passing the Bechdel test did not have any effect on a film’s gross profits.

It's unclear why Hickey expected this outcome to differ given that it's another relationship between the original Budget and Revenue data. The article also mentions international receipts when the data available from The-Numbers.com only reports domestic receipts, so we can't replicate this finding.

We find, unsurprisingly, that the same results as above hold: Bechdel scores don't significantly influence Profit, and Budget has a negative relationship with Profit.

The models above inexplicably use "Budget" on both sides of the equation, which is a big no-no. Remember, we constructed ROI as $(Revenue - Budget)/Budget$ and Profit as $Revenue - Budget$, so in these models the outcome is partly a function of one of its own predictors.

What happens if we just leave Budget on the right side of the equation and simply estimate Revenue as a function of Bechdel rating and controlling for Budget?

We get very different findings. Now there is a significant and positive relationship between Budget and Revenue, as we'd expect. Furthermore, there is also a significant and positive relationship between the Bechdel criteria and Revenue. Better roles for women translate into more revenue, even controlling for the fact that bigger budgets also generate more revenue. This model also explains approximately 24% of the variance, versus 15% in the article's model, suggesting it does a better job of modeling the relationship between the Bechdel test and financial performance.
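The respecified model is a one-line change: revenue on the left, Bechdel rating and budget on the right. A sketch follows; logging revenue as well as budget is my reading, consistent with expressing the finding as a percentage difference:

    import numpy as np
    import statsmodels.formula.api as smf

    # Keep Budget strictly on the right-hand side and model (logged) Revenue
    # directly, so the outcome no longer contains one of its own predictors.
    revenue_model = smf.ols("np.log(Revenue) ~ C(rating) + np.log(Budget)", data=movies).fit()
    print(revenue_model.summary())

    # A positive, significant coefficient on the Bechdel terms is what backs the
    # "more revenue" claim; exponentiate a log-revenue coefficient to read it as a
    # multiplicative difference, holding budget constant.
    coef = revenue_model.params["C(rating)[T.3.0]"]
    print(f"Films passing all three criteria earn roughly {np.exp(coef) - 1:.0%} more revenue")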

At this point I've found some evidence of systematic differences in the budgets, earnings, and reviews of movies that pass or fail the Bechdel test. But I've also included only a handful of relevant variables to explain the relationship when there may be others that explain it instead. You may be thinking to yourself that X, Y, or Z probably explains the patterns we're observing and these models just aren't capturing it. So let's throw everything and the kitchen sink into the model and see what happens.

The significant differences seen above in the IMDB scores seem especially problematic, so let's try to see if adding other variables makes this finding change in direction or significance. In addition to Revenue, Budget, Metascore, and the number of IMDB votes, we add several new control variables that we hadn't included before. The result is a model that has the potential to be overfitted, but we're not interested in the combined model for any predictive purpose --- only in whether the estimates for the Bechdel dimensions change. The controls are listed below, followed by a sketch of the model.

  • MPAA Rating. People dislike G-rated movies that happen to pass the Bechdel test more, perhaps.
  • Runtime. Instead of people hating "feminist" movies, maybe movies passing the Bechdel test are just longer and people don't like 2-hour marathons.
  • Genre. Maybe some genres like romantic comedies or dramas have an easier time passing the Bechdel test.
  • Year. There may be a nostalgia effect of movies in the past that pass the test being rated differently than movies released more recently that pass the test.
  • Week. Summer and holiday blockbusters are different animals than awards vehicles that are released in the fall and winter.
  • English language. "Seriously, who likes strong female leads and subtitles? Get me a Bud Light Lime and let's fire up Michael Bay's magnum opus Transformers!"
  • USA. As bad as it may be here, other countries may have it worse.
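Concretely, the "kitchen sink" specification looks something like the sketch below. The control column names (mpaa_rating, Runtime, Genre, Year, Week, English, USA, imdb_rating, imdb_votes, Metascore) are my own stand-ins for however these fields are labeled in the scraped data:

    import numpy as np
    import statsmodels.formula.api as smf

    # "Kitchen sink" model for IMDB user ratings: the Bechdel terms plus every
    # plausible confounder at hand. We only care whether the Bechdel estimates
    # move, not about prediction, so overfitting is tolerable here.
    kitchen_sink = smf.ols(
        "imdb_rating ~ C(rating) + np.log(Revenue) + np.log(Budget) + Metascore"
        " + np.log(imdb_votes) + C(mpaa_rating) + Runtime + C(Genre) + C(Year)"
        " + C(Week) + English + USA",
        data=movies,
    ).fit()

    # Inspect only the Bechdel coefficients and their p-values.
    bechdel_terms = [t for t in kitchen_sink.params.index if t.startswith("C(rating)")]
    print(kitchen_sink.params[bechdel_terms])
    print(kitchen_sink.pvalues[bechdel_terms])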

There are certainly other variables that may explain the patterns we see, some of which might be relevant, such as the reputation of the cast or crew members, and others that are unlikely to be, such as the fraction of seats Republicans hold in the House. Neither the relevant nor the unlikely variables are present in the data above, but you're welcome to collect them and integrate them into the analysis below --- that's what's so great about making the models, data, and findings open for others!

The four main findings from this analysis of the effects of women's roles in movies are summarized in the chart above. Even after controlling for everything in the "kitchen sink" model, compared to similar movies that pass none of the requirements, movies passing the Bechdel test (the red bars):

  • Receive budgets that are 24% smaller

  • Make 55% more revenue

  • Are awarded 1.8 more Metacritic points by professional reviewers

  • Are awarded 0.12 fewer stars by IMDB's amateur reviewers

These findings point to a paradox in which movies that pass an embarrassingly low bar for female character development make more money and are rated more highly by critics, but have to deal with lower budgets and more critical community responses. Is this definitive evidence of active discrimination in the film industry and culture? No, but it suggests systemic prejudices are contributing to producers irrationally ignoring significant evidence that "feminist" films make them more money and earn higher praise.

The data I used here were scraped from public-facing websites, and those who are more familiar with how these data are generated may have reasons to think they are inaccurate. Similarly, the models I used here are simple Stats 101 ordinary least squares regression models with some minor changes to account for categorical variables and skewed data. There are no Bayesian models, no cross-validation or bootstrapping, and no exotic machine learning methods here. But in making the data available (or at least the process for replicating how I obtained my own data), others are welcome to perform and share the results of such analyses --- and this is ultimately my goal in asking data journalism to adopt the norms of open collaboration. When other people take their methodological hammers or other datasets to the finding and still can't break it, we have greater confidence that the finding is "real".
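As one example of the kind of hammer-swinging I'm inviting, here is a rough bootstrap sketch, my own addition rather than anything in the analysis above, that refits the revenue model on resampled films to see how stable the headline Bechdel coefficient is:

    import numpy as np
    import statsmodels.formula.api as smf

    # Nonparametric bootstrap of the revenue model's Bechdel coefficient: resample
    # films with replacement, refit, and look at the spread of the estimates.
    rng = np.random.default_rng(42)
    estimates = []
    for _ in range(500):
        sample = movies.sample(n=len(movies), replace=True, random_state=rng)
        fit = smf.ols("np.log(Revenue) ~ C(rating) + np.log(Budget)", data=sample).fit()
        estimates.append(fit.params["C(rating)[T.3.0]"])

    low, high = np.percentile(estimates, [2.5, 97.5])
    print(f"Bootstrap 95% interval for the passing-all-three coefficient: ({low:.3f}, {high:.3f})")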

But the length and technical complexity of this post also raise the question of who the audience for this kind of work is. Journalistic norms emphasize quick summaries turned around rapidly, with opaque discussions of methods and analysis and definitive claims. Scientific norms emphasize more deliberative and transparent processes that prize abstruse discussions and tentative claims about their "truth". I am certainly not saying that Hickey should have included the output of regression models in his article --- 99% of people won't care to see that. But in the absence of soliciting peer review of this research, how are we as analysts, scientists, and journalists to evaluate the validity of the claims unless the code and data are made available for others to inspect? Even this is a higher bar than many scientific publications hold their authors to (and I'm certainly guilty of not doing more to make my own code and data available), but it should be the standard, especially for a genre of research like data journalism where the claims reach such large audiences.

However, there are exciting technologies for supporting this kind of open documentation and collaboration. I used an "open notebook" technology called IPython Notebook to write this post in such a way that the text, code, and figures I generated are all stitched together into one file. You're likely reading this post on a website that lets you view any such notebook on the web, where other developers and researchers share code about how to do all manner of data analysis. Unfortunately, it was never intended as a word-processing or blogging tool, so the lack of features such as more dynamic layout options or spell-checking will frustrate many journalists (apologies for the typos!). However, there are tools for customizing the CSS so that it plays well (see here and here). The code and data are hosted on GitHub, which is traditionally used for software collaboration, but its features for letting others discuss problems in my analysis (the issue tracker) or propose changes to my code (pull requests) promote critique, deliberation, and improvement. I have no idea how these will work in the context of a journalistic project, and to be honest, I've never used them before, but I'd love to try and see what breaks.

Realistically, practices only change if there are incentives to do so. Academic scientists aren't awarded tenure on the basis of writing well-trafficked blogs or high-quality Wikipedia articles; they are promoted for publishing rigorous research in competitive, peer-reviewed outlets. Likewise, journalists aren't promoted for providing meticulously documented supplemental material or replicating other analyses instead of contributing to coverage of a major news event. Amidst contemporary anxieties about information overload as well as the weaponization of fear, uncertainty, and doubt tactics, data-driven journalism could serve a crucial role in empirically grounding our discussions of policies, economic trends, and social changes. But unless its new leaders set and enforce standards that emulate the scientific community's norms, data-driven journalism risks falling into traps that can undermine the public's and the scientific community's trust.

This suggests several models going forward:

  • Open data. Data-driven journalists could share their code and data on open source repositories like GitHub for others to inspect, replicate, and extend. But as any data librarian will rush to tell you, there are non-trivial standards for ensuring that data are documented, complete, well formatted, and non-proprietary.
  • Open collaboration. Journalists could collaborate with scientists and analysts to pose questions that they jointly analyze and then write up as articles or features as well as submitting for academic peer review. But peer review takes time and publishing results in advance of this review, even working with credentialed experts, doesn't imply their reliability.
  • Open deliberation. Organizations that practice data-driven journalism (to the extent this is different from other flavors of journalism) should invite and provide venues for empirical critiques of their analyses and findings. Making well-documented data available or finding the right experts to collaborate with is extremely time-intensive, but if you're going to publish original empirical research, you should accept and respond to legitimate critiques.
  • Data ombudsmen. Data-driven news organizations might consider appointing independent advocates to represent public interests and promote scientific norms of communalism, skepticism, and empirical rigor. Such a position would serve as a check against authors making sloppy claims, using improper methods, analyzing proprietary data, or acting for their personal benefit.

I have very much enjoyed thinking through many of these larger issues and confronting the challenges of the critiques I've raised. I look forward to your feedback, and I very much hope this drives conversations about what kind of science data-driven journalism hopes to become.