This paper in Management Science has been cited more than 6,000 times. Wall Street executives, top government officials, and even a former U.S. Vice President have all referenced it. It’s fatally flawed, and the scholarly community refuses to do anything about it.


In a post entitled, “How Institutional Failures Undermine Trust in Science: The Case of a Landmark Study on Sustainability and Stock Returns,” Andy King (my collaborator on the project on scheduled post-publication review) tells a disturbing story of the failure of the scholarly publication process:

For a long time, I [King] resisted the accumulating evidence that our institutions for curating trustworthy science were failing.

I believed our academic gatekeepers–editors, reviewers, and research-integrity officers–were quietly doing their jobs. Overstretched, but nevertheless, curating a trustworthy scientific record and correcting it when problems appeared.

That belief ended when I attempted to replicate an extraordinarily influential article “The Impact of Corporate Sustainability on Organizational Processes and Performance,” by Robert Eccles, Ioannis Ioannou, and George Serafeim. The paper has been cited more than 6,000 times. Wall Street executives, top government officials, and even a former U.S. Vice President have all referenced it.

Uh oh . . . I have a horrible sense that I know what’s coming next:

It contains serious flaws and misrepresentations.

The article appeared in a prestigious journal, Management Science. The authors work at highly reputed institutions. As a result, I thought correcting the record would be straightforward.

I [King] ran into barrier after barrier.

OK, that doesn’t surprise me. I’ve had this sort of experience over and over. As the saying goes, it’s too hard to publish criticisms and obtain data for replication.

King continues:

The authors ignored me, the journal refused to act, and the scholarly community looked the other way. Two universities disregarded evidence of research misconduct–even after the authors admitted publishing a misleading report.

The article remains largely uncorrected–misleading thousands of people each year.

I believe our systems for curating trustworthy science are broken and need reformation.

Yup.

And now for the gory details:

The Authors

On September 11, 2023, I [King] emailed Eccles, Ioannou, and Serafeim to explain that I was attempting to replicate their study and had encountered serious problems:
• The reported method did not work as described.
• A key result seemed to be mislabeled as statistically significant when it was not.
• Some measures defied construction.
• Critical statistical tests appeared to be missing.
• The sample was highly unusual.
I explicitly acknowledged uncertainty and asked for help. Over roughly half a dozen follow-up emails, I shared progress updates and offered to collaborate.

I received no response.

My experience is not unusual. Bloomfield et al. (2018) show that requests from replicators are often ignored, delayed, or deflected. Because published articles frequently omit key details, authors can block replication simply by refusing to engage.

The Community of Scholars

I turned to colleagues and respected scholars for advice. I asked for help encouraging the authors to engage. I emphasized that mistakes happen–my own work is not unblemished–and that correcting errors strengthens, rather than diminishes, scholarly standing. I heard:
• “I can’t do anything–it would cause conflict.”
• “Your email is too long.”
• “I’m underwater for the next month.”
• “I’m too much of a coward.”
The last came from an internationally respected scholar with a chaired position at a top university. [Don’t worry, that wasn’t me — AG] I [King] appreciated the candor. It revealed an uncomfortable truth: much of social science operates on a culture of go-along, get-along.

“Once a paper is published… it is more harmful to one’s career to point out the fraud than to be the one committing it” (a different Bloomfield et al., 2018).

The Journal

Having received no response from the authors, I contacted Management Science. After getting advice, I submitted a comment.

It was rejected.

The reviewers did not address the substance of my comment; they objected to my “tone”.

Ahhhh, the tone police!

King continues:

They told me that published authors should be granted “discretion” in conducting their work and that replicators should tread very lightly. One reviewer was “inclined to turn down any invitation to review a revision” unless it was accompanied by a note from the original authors.

Knowing such a note would never come, I appealed. Rejected. I appealed again. Rejected.

The authors did admit to the editor that they had misreported a key finding–labeling it as statistically significant when it was not. The authors claimed the error was a “typo.” They intended to type “not significant” but omitted the word “not.”

Oh, I hate when that happens! So frustrating how the typos always seem to support the overblown claims being made.

King continues:

They did not address the implications of this “typo”–that it misrepresented the evidence for a central claim of the paper, that corporate sustainability increases stock returns.

I asked the journal to correct the record. Rejected.

My experience is not unusual. As one respondent told Bloomfield et al. (2018): “Replication studies don’t get cited, and journals don’t publish them. Nor do people get promoted for replication studies”.

The good news is that King and I are both too old to worry about getting promoted.

King continues:

Help from Outsiders: LinkedIn and an Upstart Replication Journal

I decided I needed to go outside the standard process and post publicly about the “typo” on LinkedIn.

Days later, I heard that the journal would publish a correction.

I was told the authors had submitted the correction before my post, but it had been misplaced and forgotten.

I believe the journal’s new editor found this news to be as incredible as I did. He quickly published an erratum.

I also submitted my replication to the Journal of Management Scientific Reports (JOMSR). This upstart publication was started in 2022 by a small group of courageous scholars who wanted to provide an outlet for replication studies like mine. I was impressed by their thorough reviews and tough guidance.

In spring 2025, JOMSR published my replication study.

Research Integrity Offices (Part 1)

While revising my replication for publication, I became convinced of a more serious issue: the method reported in Eccles, Ioannou, and Serafeim (2014) was not the method actually used. Worse, the true method could not support their “findings”.

I contacted the authors again. No response.

I decided a research integrity complaint was in order.

In July and August 2025, I submitted complaints to Harvard Business School and London Business School. I alleged that the reported method could not have been conducted as described–and that the results were therefore uninterpretable.

(A technical aside describing the study’s method may be useful here. Feel free to skip.)
• The empirical strategy in Eccles, Ioannou, and Serafeim (2014) rests on a demanding requirement: the “treated” and “control” firms must be so closely matched that which firm is treated is essentially random. The authors appear to recognize this, reporting that they used very strict matching criteria “to ensure that none of the matched pairs is materially different.”
• Despite their strict criteria, they also claim to have achieved remarkable success in finding precise matches, reporting that 98% of their “high sustainability” firms could be matched with a near-twin “low sustainability” firm. Yet when I attempted to replicate the study, I achieved a much lower match rate–fewer than 15%. To better understand the discrepancy, I conducted a probability analysis using a Monte Carlo simulation. I determined that the reported matching success was highly unlikely–many, many, many times less than winning the lottery. [A rough sketch of this sort of simulation appears just after this aside. – AG]
• Either their matching process was precise, in which case they would not have enough pairs to run their analysis, or it was loose, in which case their analysis could not be interpreted.
(End of aside.)
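Just to make King’s probability point concrete, here is a minimal Monte Carlo sketch of the kind of check he describes in the aside above. To be clear, this is my own illustration, not King’s actual analysis or data: the firm counts, the four standardized covariates, and the 0.1-standard-deviation caliper are all made-up assumptions. The sketch simply asks how often each “treated” firm can find a “near-twin” control under strict matching criteria, and how often the full sample reaches a 98% match rate.

```python
# Illustrative Monte Carlo sketch only; not King's analysis or the original study's data.
# Question: if a "near twin" must be within a tight caliper on every covariate,
# how often does a treated firm find a match, and how often does the whole
# sample reach a 98% match rate? All parameters are made-up assumptions.

import numpy as np

rng = np.random.default_rng(0)

N_TREATED = 90      # hypothetical number of "high sustainability" firms
N_CONTROLS = 500    # hypothetical pool of candidate "low sustainability" firms
N_COVARIATES = 4    # e.g., size, ROA, leverage, market-to-book (standardized)
CALIPER = 0.10      # "near twin": within 0.1 standard deviations on every covariate
N_SIMS = 1_000

def simulated_match_rate(rng):
    """Share of treated firms with at least one control inside the caliper
    on all covariates, for one random draw of firm characteristics."""
    treated = rng.standard_normal((N_TREATED, N_COVARIATES))
    controls = rng.standard_normal((N_CONTROLS, N_COVARIATES))
    # Pairwise absolute differences: shape (treated, controls, covariates).
    diffs = np.abs(treated[:, None, :] - controls[None, :, :])
    # A control counts as a near twin only if it is inside the caliper on every covariate.
    is_twin = (diffs <= CALIPER).all(axis=2)
    return is_twin.any(axis=1).mean()

rates = np.array([simulated_match_rate(rng) for _ in range(N_SIMS)])

print(f"mean simulated match rate: {rates.mean():.1%}")
print(f"simulations reaching a 98% match rate: {(rates >= 0.98).mean():.1%}")
```

This sketch even lets the same control firm serve as the twin for several treated firms, which makes matching easier than the one-to-one pairing a matched-sample design requires; with these illustrative numbers, the simulated match rate still lands nowhere near 98%. The specific numbers don’t matter; what matters is the structure of King’s argument: strict calipers and a near-perfect match rate pull in opposite directions.

King continues: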

Shortly after I submitted my complaint, the authors acknowledged they had misreported their method.

But they did not ask Management Science to correct the text of their article.

Research Integrity Offices (Part 2)

Eccles, Ioannou, and Serafeim explained that the misreport was an unfortunate accident. There had been two studies, they said, and the false description belonged to an “exploratory” study that was later removed to satisfy length requirements, except for the sentences describing its matching process, which were inadvertently left behind. As a result, those sentences now appeared to describe the “main” analysis, but that is not what they had intended. It might look like misrepresentation, but it was just an editing error.

They did not explain that this meant all of their results were uninterpretable.

The explanation also conflicts with the record.
• The incorrect claim appears in the earliest available draft of their article–marked “NEW!” on HBS’s site.
• Over several later drafts, the false claim was retained and even edited, rather than removed.
• The “exploratory study” does not appear in any available draft.

In light of these inconsistencies, I submitted a revised complaint to Harvard Business School and London Business School.

Harvard Business School responded: “Whether or how the School does or does not move forward… will not be communicated to you.”

LBS was more open and responded quickly, concluding that the false claim was not an “intentional falsehood”. Why? Because the LBS professor (Ioannou) “did not have access to the raw data and did not conduct the analyses in question.”

That’s technically known as the “Ariely defense.” You’re the author of the paper but you didn’t touch the data, therefore you couldn’t possibly have cheated.

And then we get something we’ve heard many, many times before:

And in any case, the problem was of a “minor nature”, apparently because it pertained to some other study and thus did “not impact the main text, analyses, or findings.”

It’s funny how removing these fraudulent or erroneous analyses never affects the main conclusions of the study. It kind of makes you wonder why they went to the trouble of gathering and analyzing the data at all!

King continues:

Sadly, LBS’s response is empty.
• Data access is immaterial. I did not allege data fabrication.
• The false claim is not minor. It is the difference between a usable and useless study.
• It does not address the central question: Did the exploratory study ever exist? If not, false statements were published twice–first in the article, and then in the offered explanation.

LBS did conclude that the author engaged in “poor practice”, which they planned to address through “education and training or another non-disciplinary approach.”

I suggest LBS begin by explaining an author’s duty to correct errors in published work.

Where This Leaves Us

Eccles, Ioannou, and Serafeim (2014) remains only partly corrected in the pages of Management Science. Diligent readers may discover the erratum correcting the “NOT significant” finding, but they will not learn of the misreported method. Thus, thousands of readers remain misled.

Our institutions for curating trustworthy social science are not working. They must be changed, reformed, and revitalized.

What you can do

1. Stop citing single studies as definitive. They are not. Check if the ones you are reading or citing have been replicated.
2. If you or someone else finds an error in your published work, publish a correction.
3. If one of your colleagues is behaving unprofessionally, tell them to stop.
4. Support replication. Encourage others to do so. Support the Journal of Management Scientific Reports.
5. Find out about the research integrity policies at your institution. If they are weak, strengthen them.
6. If you know Eccles, Ioannou, and Serafeim, ask them to retract their article, or at least publish another correction.

What else needs to change

For years, I studied industry self-regulation. The evidence is clear: it works only when it is transparent, independently monitored, and supported by graduated sanctions. Applying this to the curation of science:

1. Journals should disclose comments, complaints, corrections, and retraction requests. Universities should report research integrity complaints and outcomes.
2. An independent third party should audit the process.
3. Penalties should reflect the severity of the violation, not be all-or-nothing.
4. And to ensure the system works, we need what Andrew Gelman and I call FurtherReview.

Let me just add one more thing.

I don’t know any of the authors of the paper under discussion–indeed, I’d never heard of them, or of their paper, before hearing this story from King–so I’m speaking in general terms:

– Whether or not the authors were lying or intentionally misrepresenting anything at any point, I agree with King that, based on the evidence above, they committed research misconduct.

– This doesn’t mean that the authors of that paper are bad people!

We should distinguish the person from the deed. We all know good people who do bad things; indeed, I’ve received some speeding tickets in my time, and there are lots of good people who’ve done worse than that. I’ve been in the car with some drunk drivers, some dangerous drivers, who could easily have killed people: that’s a bad thing to do, but I wouldn’t say these were bad people. They were just in situations where it was easier to do the bad thing than the good thing.

What Eccles, Ioannou, and Serafeim did is much less bad than my friends driving drunk, but it’s still bad, and the same principle applies. They’re living in a world in which doing the bad thing–covering up error, refusing to admit they don’t have the evidence to back up their conclusions–is easy, whereas doing the good thing is hard.

OK, actually doing the good thing is easy. You just admit your error. I’ve done it myself–it’s super-easy: you just contact the journal and write a short, direct, and honest correction, and they’ll publish it. But to lots of people, it seems hard. As researchers they’ve been trained to never back down, to dodge all criticism. I don’t like what they did, but I imagine that they view their actions as something like how I might view a speeding ticket: yeah, I shouldn’t have done it, but it’s in the past.

From that perspective, the real problem is not the sin but rather the mistaken attitude that, in science and scholarship, what’s past is past. There’s a horrible sort of comfort in thinking that whatever you’ve published is already written and can’t be changed. Sometimes this is viewed as a forward-looking stance, but science that can’t be fixed isn’t past science; it’s dead science. And what bothers me about Eccles, Ioannou, and Serafeim, and all the many error-deniers like them, is that they don’t seem to realize this. It’s this fundamental misunderstanding of the scientific and scholarly endeavor, more than the dishonesty or sloppiness or whatever is the specific unethical behavior, that bothers me.

But, yeah, Andy King has a point that when universities, journals, and other institutions support the bad behavior, that’s not good. That doesn’t help at all. In all seriousness, you gotta feel a little sorry for Harvard Business School: they’ve had so many of these scandals now. It’s not like Duke and MIT business schools, which just had one scandal each–actually it was the same scandal for the two of them.