Science Communication and the Hype Machine


A few weeks ago, I wrote about an overhyped science news story claiming that brain cells on a chip learned to play Doom (spoiler: they didn’t).

I was shocked by the credulity of the reporting. A cursory read of the scientific papers, or asking the researchers for any details, would have seriously deflated the story. And yet the original viral news story contained none of these details, which led some readers to conclude the researchers must have somehow hooked the brain cells up to a camera and controller.

This isn’t what was done. A reinforcement learning algorithm was given direct information about the game state and played Doom by routing its commands through a brain cell network. The brain cells acted a bit like a controller for the reinforcement learning algorithm. If they were helping rather than hindering the algorithm at all, they likely weren’t doing much—brain cells on a chip struggle to learn even simple mappings.
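To make the division of labor concrete, here is a minimal sketch of the setup as described above. All names and the noise model are my own illustrative assumptions, not details from the actual paper: the point is only that the policy sees the game state directly, and the "brain cell" layer merely relays (and sometimes corrupts) a command that was already chosen.

```python
import random

ACTIONS = ["turn_left", "turn_right", "shoot"]

def rl_policy(game_state):
    # Stand-in for the reinforcement learning algorithm: it receives the
    # game state directly and picks an action entirely on its own.
    return "shoot" if game_state["enemy_ahead"] else "turn_left"

def brain_cell_channel(action, noise=0.3, rng=random):
    # Stand-in for the brain cell network: it does no decision-making,
    # it just passes the chosen command along, occasionally garbling it.
    if rng.random() < noise:
        return rng.choice(ACTIONS)
    return action

def play_step(game_state):
    # One step of play: the algorithm decides, the cells merely transmit.
    intended = rl_policy(game_state)
    return brain_cell_channel(intended)
```

Seen this way, the "brain cells playing Doom" framing is backwards: the interesting decision-making sits entirely in `rl_policy`, and the cells occupy the slot a noisy wire would.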

Images like this, along with overhyped stories, flooded the internet.

So what the hell was the science journalist who wrote the article doing? I understand giving the article a click-bait headline and then giving the important details in the article—at least then people who are engaged enough to read the article learn the truth. But what’s the point of a 600-word article that misleads more than it informs? Is it just a paycheck for the author? Don’t they take pride in helping people understand the science, or are they just doing whatever maximizes clicks?

The pattern of overhyped popular science articles can’t be blamed entirely on journalists. University press releases, which generally form the basis of the news stories, frequently exaggerate the research: one study found that about 40% of press releases contain exaggerations. When the press release exaggerates, most news articles based on it do too; when it doesn’t, most don’t.

Press releases aren’t the only issue, either. Often the exaggerations exist in the abstracts of papers as well.

So it isn’t just journalists doing the hyping, but scientists as well.

That’s certainly the case with the Doom project. It was put together by a private company, so obviously they have incentives to hype the project. But the video they released makes no mention of the reinforcement learning algorithm that actually does the heavy lifting. Over on the Doom Neuron community they set up on Twitter/X, they actively suppress any pushback correcting the hype.

If I may indulge my cynicism here for a moment, this all makes sense. Researchers in general and those at a private company in particular have incentives to exaggerate the impact of their work. You’re more likely to get funding and attention if people think your work is important. Similarly, science publications need to draw attention to get readers, so surely they want their writers producing work that gets clicks.

The incentives are bad, but the picture isn’t totally bleak. For one thing, scientists are one of the most trusted groups in society—this is reassuring but also feeds into the hype issue.

I often hear the sentiment that scientists need to change their act because “just look at the state of public trust in science!”

As much as I think there’s lots of room for improvement in how scientists interact with the public and how science gets disseminated to the public, it’s worth taking a realistic look at how things stand. For one thing, it is hard to find another group in the United States that is more trusted by the public—only the military is trusted more to act in the best interests of the public.

For another, to the extent there is distrust in scientists, there are really two major factors: 1) political partisanship, and 2) the COVID-19 pandemic.

This figure from Pew nicely sums it up:

So two somewhat unsurprising things: 1) confidence in scientists is lower among Republicans than among Democrats, and 2) trust has been lower since the COVID-19 pandemic, during which public health scientists had to make a lot of calls under uncertainty.

I could say a lot about this, but for the purposes here, the point is the public generally trusts scientists, and the areas that science communicators need to be careful about are where science intersects with cultural identity and large-scale public issues. This raises important issues for scientists trying to tackle difficult polarized topics like climate change or vaccines, and there’s a lot to be said about that. But my concern here is different: While it’s good that scientists are trusted, it produces a secondary problem when this combines with the above incentive structure to hype things up.

Scientists, university press releases, and articles appearing in popular media often exaggerate findings. There are structural incentives for scientists to overstate the impact of their research, and for journalists to provide sensationalized versions of stories.

The high trust in scientists also makes it easier for various credentialed “science influencers” to exploit that perceived authority to spread sensationalized hype or blatant misinformation online.

To some extent this is self-undermining. Scholars have argued this overhyping erodes public trust, even when the hype isn’t believed. Suspecting a science communicator is exaggerating can lower your trust in science communicators.

If you scroll through social media, you’ll be exposed to a ton of science claims, ranging from click-bait headlines to influencers making unsupported wellness claims. I suspect if you took a random sampling, the bulk of them would be exaggerations or straight-up falsehoods.

However, the problem isn’t just one of individual actors and incentives. There’s a mismatch between the desire for news and the way science works.

Millions of peer-reviewed articles appear annually. Individual studies are rarely important. Every study has flaws, because every experimental method has flaws. Every measurement instrument is a flawed instrument. Every sample is a flawed sample. Every researcher is a flawed researcher. This isn’t to criticize science or researchers, it’s just to point out that, the vast majority of the time, a single study doesn’t prove anything conclusively. Science is a gradual process, with multiple different studies coming from different angles to cover the weaknesses of previous studies and converge on a result.

The unfortunate dynamic is that popular science writing tends to focus on individual studies—it isn’t “news” unless it’s “new”. Stories written about new studies tend to leave out important limitations, likely because those details detract from the story rather than make it more interesting.

In my experience, it’s much more common to run into science stories about single studies rather than about reviews. Systematic reviews attempt to pull together a bunch of different related research on a topic and give a high-level view of what is known and what is yet unknown. This is fertile stuff for science writing, begging to be picked up and turned into a story interesting enough for a curious audience. I don’t see much of that happening, though.

So we have this funny situation: scientists are trusted, science is mass produced, scientists are incentivized to hype their work, and science journalism is incentivized to at least go with the hype. The result is a deluge of science content, much of it of dubious quality.

But there’s good news: debunking works.

Specifically, the approach of stating the misconception, explaining what’s wrong with it, and providing the correct information has been shown to work across a wide variety of topics. Simply put, confronting misunderstandings head on seems to work. This is also true of neuromyths specifically.

But this leaves out what I think is another essential function of debunking: holding the journalists and scientists who overhype accountable.

I think it’s good and healthy for there to be an ecosystem of accountability. When there are so many incentives pushing individuals to exaggerate, one thing that can moderate that impulse is a strong group of peers who judge and call out those claims that go over the line. If a scientist expects to lose respect by publishing some fanciful exaggeration about their research, they’ll think twice about it. Journalists who write misleading popular press articles should also feel some judgment.

I see this as just part of the ecosystem that holds exaggerations in check. It not only keeps engaged members of the public better informed, but helps retain trust in science by acting as a constraint counterbalancing the incentives to hype.

Knowing you might be called out for saying something ridiculous is good. I’m happy to be a small part of the accountability mechanism of the science communication world. I’m also happy to go out of my way to form connections with professors and other experts in the things I write about—knowing I have various professors of neuroscience and philosophy reading my writing makes me a bit more humble about what I say. I’ve benefited when an expert has pushed back a bit to add some nuance to something I’ve said.

That said, not all science writing is, or should be, debunking.

Stefan Van der Stigchel is an attention researcher who decided he wanted to write a book for a popular audience. He believed his scientific knowledge could help people who wanted to improve their concentration, and there was a lot that he knew that the public didn’t. He also saw a lot of self-help books out there that were inaccurate and wanted to help people find more reliable information.

So he wrote a book. In an academic article he wrote about the experience, he talked about the twin difficulties of writing for a general audience. On the one hand, his audience is expecting definitive answers and clarity. On the other hand, he worried about his colleagues’ judgments over leaving out too much detail. In an area where the science is often provisional and it’s unclear how it translates to real life, he had to make decisions on what nuances to leave out and how to make his message clear without losing accuracy.

If you read the reviews of his self-help book, people weren’t happy. It has a low score on Goodreads (3.3/5), and here are some representative quotes from a few reader reviews there:

I was hoping for techniques and information on improving your concentration, instead it begins with a load of studies on memory.

[I]f you were hoping for techniques to build the muscles of focus, you’ll be profoundly disappointed.

[I] was looking for direction/action items on improving my own concentration and this book didn’t really have that.

There’s a real tension here between sticking to the facts and writing something satisfying for a general audience. As Stefan says, he was writing as a response to self-help books he thought weren’t science-based. He was attempting to put a better alternative out there. But by sticking closer to the facts, his book was less attractive than the less factual competitors that were willing to be more definitive. If you just want a book that will tell you the definitive one simple trick to make you concentrate better, you’re not going to want a book any reputable scientist would write.

There’s a real tension: science writers want to stick to the facts, but they are competing with books and articles that have no such constraint.

Scientists who take on the hard job of writing honestly for a general audience deserve more credit than they get. If they don’t try, the space is just ceded to whoever manages to capture the public’s attention, with or without scientific accuracy.

I first got into reading science and philosophy when I was in high school. I would skip class to go to the bookstore, and it was there that I first encountered books like Stephen Law’s Philosophy Gym and Charles Darwin’s On the Origin of Species. These books filled me with wonder. They were idea books, giving me new concepts and perspectives to understand the world. They allowed me to look at things in the world that I took for granted in a new way.

Hyped up science stories that distort people’s understanding of the world undermine this wonder. Telling people “brain cells on a chip can play Doom!” might generate a quick “Oh wow!”, but then you move on. Even if it were true, it doesn’t lead to a deeper understanding. It’s a “neat fact” that doesn’t help someone understand the how or why. It doesn’t extend their island of knowledge.

When I think about who I write for, it’s for that teenager who skipped school to find wonder in the bookstore. The person seeking ideas and new ways of seeing the world and making an honest attempt at finding the wonder of legitimate understanding.

If you enjoyed this, please hit the “Like” ❤️ button, restack, or share this article to help others find it.

If you enjoy Cognitive Wonderland, consider supporting it by becoming a paid subscriber at whatever level feels comfortable for you.

If you’re a Substack writer and have been enjoying Cognitive Wonderland, consider adding it to your recommendations. I really appreciate the support.