Alzheimer’s research constantly makes the news, but sometimes, journalists covering new studies don’t go beyond the press releases, leading to misinterpretation. Others may misreport the facts altogether. Here’s what you need to know.
How does science go from the lab to the newsstand?
Scientists design experiments to test an idea or hypothesis. Some experiments involve lots of test tubes and flasks; others involve recruiting people for various brain scans and blood tests. At the conclusion of these experiments, scientists crunch the numbers and write up their results. Then, the university writes a press release which goes out to science journalists, and soon, you might be reading about their experiment in your news feed, sometimes with thrillingly clickable headlines like “Mysterious chimpanzee behavior could prove they believe in God,” or “A harrowing study of 46,000 women shows hair dyes are heavily associated with cancer.”
But just because they’ve completed a study doesn’t mean the scientists have discovered an unambiguous truth; nor does it mean their findings are “proven.” Most research is incremental and builds off of other work in the field — and it takes many, many experiments and a whole lot of research to consider the outcomes “proof” of anything.
When it comes to articles about brain health, dementia, and Alzheimer’s, the Being Patient editorial team spots overhyped or misrepresented study results all the time. This leads to headlines like “Vitamin D deficiency can lead to dementia” on an article about a study that actually found people with very low levels of the vitamin precursor are at the highest risk for developing dementia. The study didn’t actually follow people over time to see whether people with low levels of this precursor ended up developing dementia. Another headline that made the rounds read “Daily multivitamin may keep your mind three years younger” to describe a study where the results were complicated and unexpected. But that doesn’t make a punchy headline.
According to a 2014 study, 40 percent of press releases provided exaggerated health advice, while one in three reported associations between different variables as causal — meaning, the articles leapt to the conclusion that because two factors may coexist, one causes the other, even though there could be many other explanations for their coexistence. The same study also found that more than one in three press releases exaggerated the results of animal research. (Being Patient always tries to make it clear that while some animal studies yield intriguing results, these results are never proof of anything in humans.)
Unfortunately, this overhyping tends to trickle down into the news articles on some of the most-read websites. That 2014 study also found that more than half the time, news articles that were based on inaccurate press releases reported exaggerated findings and conclusions.
So, what’s the best way to read science news, and catch exaggerated or inaccurate claims? Here are a few steps to follow to help avoid being misled by over-hyped — or even inaccurate — science reporting.
1. Find the source
First, check if the news article is covering a study that has been published in a scientific journal.
News stories usually cover studies after publication in a peer-reviewed journal. This means other scientists in the field have double-checked the veracity of the research. Articles will usually link to the study within the body of the text, or at the bottom of the page.
Many news stories cover clinical trials, which test whether new drugs work. Sometimes they discuss an ongoing experiment to help recruit more volunteers. Other times, companies release some results before the data is published and peer-reviewed. Take these articles with a grain of salt, and look for comments from outside scientists who weren’t involved with the trial for a second opinion.
Many scientists also post their research online as “preprints,” so that others can read, comment on, and view it for free. No matter how exciting these studies may be, the research is not yet peer-reviewed. During the pandemic, a slew of inaccurate preprints helped spread misinformation about vaccines. Take reporting on preprints with a heavy grain of salt.
What’s brewing in the news?
The table below describes three different types of stories that appear in science news.
2. Check for accuracy
News articles don’t always get things wrong on purpose. Sometimes, writers who aren’t subject matter experts in a certain scientific topic take an exaggerated or misleading press release at face value, reporting on the hype, not on the actual scientific results. But by asking a few questions of the news article you’re reading, you can assess whether it is exaggerating the study’s results or just misinterpreting them altogether. Here is your checklist of questions:
A: Was the study conducted in humans, animal models or cells?
Most of the time, the results of animal and cell studies don’t translate to humans. These experiments often test nutrients or drugs at very high, unrealistic concentrations.
B: Does the study or news release make very big claims about a breakthrough in humans?
Some articles or press releases overhype the findings of a study, clinical trial, or effects of a supplement or other intervention — even falsely implying it can cure, reverse, or substantially reduce the severity of Alzheimer’s disease altogether.
Often, this might be an intervention that you haven’t heard of before, or that hasn’t been talked about much in other news articles. Unfortunately, these claims are very unlikely to be true. Here’s why:
Editor’s Note: Currently, there is no drug that can stop Alzheimer’s or dementia, and no way to reverse the degenerative process. There is only one FDA-approved drug that slows disease progression — Leqembi — and it and the other anti-amyloid Alzheimer’s drugs in the pipeline to FDA approval have been decades in the making. Scientists spent years developing antibodies that could hit their target, then these antibodies had to be tested in animal models, and then go through the rigorous process of clinical trials, which takes years. The earlier stages of the clinical trials are also often written up in news articles for years leading up to the news that a new drug or treatment is approved or found to be effective. But all this to say: The process is a slow one, and once a claim is made about an effective treatment or intervention, many other studies will need to independently verify the same findings. Breakthroughs do happen. But for something to be proven, it must be repeated again and again. Science is an act of consensus.
Thus, it is highly unlikely a new treatment will come out of the blue into headlines one day. Even if results are promising early on, bear in mind that 99 percent of Alzheimer’s drugs fail.
C: Does the news article mention factors that increase or decrease the risk of dementia?
To understand which lifestyle factors are a risk for dementia, researchers find patterns in large sets of medical data. For example, a study may find that people who drink coffee are more likely to develop dementia ten years later. This doesn’t mean that the coffee causes dementia, or that you should stop drinking coffee. But this isn’t always made clear in news articles, which may report a causative link — and often sensationalize it.
D: Does the news article or study look at relative or absolute risk?
Let’s go back to the coffee example and look at an imaginary headline:
“People who drink coffee twice a week are 20 percent more likely to develop dementia 10 years later.”
There are two types of risk scientists can present: relative and absolute. Relative risk compares different groups of people. It doesn’t signify the actual risk of getting a disease or condition and doesn’t explain which other variables the researchers have factored into this calculation.
The 20 percent in this case is a relative risk. It means that 20 percent more of the coffee drinkers in the study developed dementia compared to the non-drinkers. It doesn’t tell you your actual chance of developing dementia if you drink coffee.
What about absolute risk? This tells a person how likely they are to develop a specific disease or condition. If there is a 1 percent chance that someone in the study population develops dementia, a 20 percent increase in risk raises this to 1.2 percent. If 1,000 people drink coffee twice a week and 1,000 do not, that means 12 coffee drinkers will develop dementia compared to 10 non-drinkers.
An absolute effect this small might have arisen by chance alone and may not be very meaningful. Don’t stop drinking coffee just because a study in the news associated it with a higher risk of dementia.
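For readers who like to check the math themselves, the arithmetic above can be sketched in a few lines of Python. The numbers are the same illustrative ones from the imaginary coffee headline, not real data:

```python
# Illustrative numbers only, taken from the imaginary coffee headline above.
baseline_risk = 0.01          # 1 percent of non-drinkers develop dementia
relative_increase = 0.20      # "20 percent more likely" from the headline

# A 20 percent *relative* increase on a 1 percent baseline:
exposed_risk = baseline_risk * (1 + relative_increase)   # 0.012, i.e. 1.2 percent

# The *absolute* risk difference is tiny:
absolute_difference = exposed_risk - baseline_risk       # 0.002, i.e. 0.2 points

group_size = 1000
cases_non_drinkers = baseline_risk * group_size          # 10 people
cases_drinkers = exposed_risk * group_size               # 12 people

print(f"Non-drinkers: {cases_non_drinkers:.0f} cases per {group_size}")
print(f"Drinkers:     {cases_drinkers:.0f} cases per {group_size}")
print(f"Absolute risk difference: {absolute_difference:.1%}")
```

The headline’s “20 percent more likely” shrinks to a difference of two people per thousand — which is why checking whether a story reports relative or absolute risk matters.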
E: Does the news article explain whether the results are meaningful?
Imagine a study finds a new Alzheimer’s drug led to significantly higher scores on a cognitive test at 12 weeks compared to the control group. What does this mean? The sentence doesn’t convey what doing better on a cognitive test means to a patient. It also doesn’t tell readers the size of the significant difference, how many people were involved in the study, or whether the experimental group scored higher at baseline.
If patients score 10 points higher at 12 weeks, is this a lot? That depends on the scale. Ten points out of a total 30 points is more meaningful than 10 points out of 100. News articles should have interviews with experts who contextualize the study in a layperson’s terms.
F: If the study is a clinical trial, is it randomized and double-blinded?
There are many types of clinical trials, and while they all have their uses, the results of some of these trials can be misleading. Usually the news article will describe the trial, but if not, the scientific paper written by the study authors will. The most reliable types of trials are randomized and double-blinded.
- Open-label: Patients in an open-label trial know that they are receiving an experimental drug or intervention. These are often used to figure out the long-term effects of approved drugs. When used for more experimental interventions, open-label trials are less reliable: since the patients know they are receiving the real treatment, they may show improvement due to the placebo effect alone.
- Single-blinded: The patients don’t know whether they receive the intervention or a placebo, but the clinicians know who is receiving the treatment, which may sway the outcomes.
- Double-blinded: Neither the patient nor the clinicians know who is receiving which treatment. This eliminates most of the bias.
- Randomized: Participants are assigned to groups by chance, so experimenters have no say over which person ends up in which group. This prevents a researcher from “stacking the deck” — for example, putting all the healthier patients into the treatment group, which would inflate the apparent benefits of a drug.
3. Remember: Science is rarely simple
To paraphrase the villain from The Incredibles, when every result is a breakthrough, nothing is.
When every finding is presented as an unambiguous success, people may start wondering why these “scientists” haven’t released the cure to dementia.
Scientists don’t always discuss the uncertainty in their results, leading to a lot of apparent flip-flopping — one study finds coffee prevents Alzheimer’s while another finds it increases the risk — which erodes trust in science and fuels conspiracy theories. Learning to spot the signs of exaggeration and hype can help people stay better informed about their risks.