ABC News
Finally, a Formula For Decoding Health News

Here are a few headlines from the health pages of major news organizations in the past month:

In general, health headlines are advertising. The goal is to get you to read the article, not necessarily to represent the research accurately. The relationship is roughly that between an advertisement for a Big Mac and the actual burger — while it may taste good, it’s never as juicy and fresh as the one in the ad.

What’s your immediate reaction to these headlines? Do you believe them? Most people have a personal filter for determining which health information to trust and which to ignore. Take me as an example: Based on my gym habits, my gut feeling tells me that working out only a few minutes a week probably won’t get me fit. A “gut feeling” is a good start when evaluating health news, but it can lead you astray. Your gut reaction to a headline may say it’s unbelievable, but if the scientific study is large, well done and conclusive, you may not want to go with your gut.

To keep on top of the right health information for you and your family, you don’t necessarily need to know about medicine. What you do need to know is how to use data to make health decisions. As a statistician, I use a simple computation based on Bayes’ rule to combine my gut feeling about a piece of health news with information about the study it comes from. The result gives me a better idea of how much to believe a given headline.

This is not a definitive way to tell whether a headline is right — you’d have to perform serious scientific inquiry to know for sure — but I find it a pretty useful exercise.

Bayes’ rule boils down to a simple formula:1 I update my belief about the odds of a headline being true based on how strong the evidence for it is in the clinical study. In plain language, the relationship is written:

Final opinion on headline = (initial gut feeling) * (study support for headline)

In the equation, the final opinion about the headline and initial gut feeling are expressed as odds. If you think the odds the study is true based on your gut are 4 to 1, then your initial gut feeling will be 4. If you think the odds are 1 to 10 against the study being true, then your initial gut feeling will be 1/10. Each of the numbers is bigger than zero and a value of one is neutral. Numbers assigned to initial gut feeling between zero and one mean you tend not to believe the headline; the smaller the number, the less you believe it. Numbers bigger than one mean you tend to believe the headline; the bigger the number, the more you believe it.
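The odds scale can feel unfamiliar at first. As a quick sketch (the function names are mine, not the article's), here is how odds in favor translate to and from ordinary probabilities in Python:

```python
def prob_from_odds(odds):
    """Convert odds in favor (e.g. 4 for '4 to 1') to a probability."""
    return odds / (1 + odds)

def odds_from_prob(p):
    """Convert a probability back to odds in favor."""
    return p / (1 - p)

# Odds of 4 to 1 in favor correspond to an 80 percent chance:
print(prob_from_odds(4))       # 0.8
# Odds of 1 to 10 against (gut feeling = 1/10) is roughly a 9 percent chance:
print(prob_from_odds(1 / 10))  # ~0.091
```

So a gut feeling of 1 is the neutral 50-50 point, and the farther the number is from 1 in either direction, the stronger the belief.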

Your initial gut feeling is personal to you — it’s your instinct. To figure out how much the study supports the headline, you need to skip the headline and go to the source. You can read many medical journal articles for free on a journal’s website or on aggregators such as PubMed Central. If you see a health headline that interests you, first scan the news article and find the link to the original scientific article describing the result.2


Once you have the original research article in hand, you’re looking for a few key pieces of information. These are obviously not the only important characteristics of a study, but they cover a lot of ground. They are:

  1. Was the study a clinical study in humans?
  2. Was the outcome of the study something directly related to human health that you care about, such as living longer, having less disease or feeling better?
  3. Was the study a randomized, controlled trial (RCT)?
  4. Was it a large study — at least hundreds of patients?
  5. Did the treatment have a major impact on the outcome?
  6. Did predictions hold up in at least two separate groups of people?

You can usually find these pieces of information in the abstract — the scientific equivalent of the CliffsNotes or tl;dr. If not, you may have to scan the rest of the article to find them. “Yes” answers increase the chance the study is relevant to you. “No” answers don’t mean the research isn’t important! They just mean the study might not be conclusive enough for you to share it with your mom on Facebook.

Now that you’ve decided your gut feeling and looked up the research article, you can build your own statistical filter for analyzing health information in the news.

Think about this headline: “Hospital checklist cut infections, saved lives.” I’m a pretty skeptical person, so I’m a little surprised that a checklist could really save lives. I say the odds of this being true are 1 in 4, so I set initial gut feeling to 1/4. Because this number is less than one, it means that initially I’m less likely to believe the study.

Now we find the research article behind the headline. In this research, intensive care units (ICUs) at Michigan hospitals implemented a new strategy for reducing infections through training, a daily goals sheet and a program to improve the culture of safety in the ICUs. The doctors measured the rate of infection before and after implementing this safety program. Let’s look at this study with our own checklist.

  1. The study was done in humans in ICUs. (Yes)
  2. The outcome was the rate of infections after surgery — according to the article, these infections cost U.S. hospitals up to $2.3 billion annually. (Yes)
  3. The study compared the same hospitals before and after a change in ICU policy. This is an example of a crossover study, which is not as strong as a randomized trial but does account for some of the differences among hospitals because the same ICUs were measured before and after using the checklist. (No)
  4. The study looked at more than 100 ICUs over 1,981 months. In total, it followed patients for 375,757 catheter-days. (A catheter-day means watching one patient for one day while she is on a catheter.) This is a huge number of days to monitor patients for potential infections. (Yes)
  5. The study showed a sustained drop of up to 66 percent in infections. (Yes)
  6. The study looked at 103 hospitals in Michigan. (Yes)

So, a large study showed a major drop in infections — that is directly medically important. For the sake of the exercise, let’s multiply by two every time we see a “yes” answer and by 1/2 every time we see a “no” answer. I would say this study’s result is about 16 times more likely (five out of six “yes” answers — 2*2*2*2*2*(1/2) = 16) if checklists really do reduce infections than if they don’t. I set study support for headline = 16.
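The multiply-by-2-per-"yes", multiply-by-1/2-per-"no" rule of thumb is easy to sketch as a small Python function (the name is mine, not the article's):

```python
def study_support(yes_answers, no_answers):
    """Rule of thumb from the article: each 'yes' on the checklist
    doubles the support for the headline, each 'no' halves it.
    Returns the likelihood ratio."""
    return 2 ** yes_answers * (1 / 2) ** no_answers

# The hospital-checklist study: five "yes" answers, one "no" answer.
print(study_support(5, 1))  # 16.0
```

The choice of 2 and 1/2 is arbitrary but symmetric: a "yes" and a "no" cancel out, leaving a neutral support of 1.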

Then I multiply to get final opinion on headline = 1/4*16 = 4, also expressed as 4/1. I would say that my updated odds are 4 to 1 that the headline is true.  The strength of the study won over my initially skeptical gut feeling.

Let’s try another headline: “How using Facebook could increase your risk of cancer.” Without looking at the study, I’d probably think “no way.” To my mind, the odds that this is right may be something like 1 in 10, so I set initial gut feeling = 1/10.

Now I find the research article behind the headline. In this study, the doctors gave a group of patients who had cancer or benign tumors a survey measuring their social support networks. They also drew blood from the patients and measured the number and percentage of different cells that have positive or negative impacts on people with cancer. Let’s apply the checklist.

  1. The study was done in humans. (Yes)
  2. The outcome(s) are the levels of specific types of cells in the body, which relate only indirectly to medical outcomes in patients. (No)
  3. The study was not a randomized trial. Researchers didn’t randomly assign some patients to use social media and others not to. (No)
  4. The study looked at only 63 patients. (No)
  5. Social support had only a modest effect on the percentage of the beneficial cell type.3 (No)
  6. Researchers looked at only one set of patients. (No)

This study meets only one of our criteria: It was a clinical study in humans. So I think this study doesn’t clearly show that Facebook causes cancer, as the headline proclaims. Multiplying by 2 for every “yes” answer and 1/2 for every “no” answer, I get study support for headline = 2*(1/2)*(1/2)*(1/2)*(1/2)*(1/2) = 1/16 (five out of six “no” answers). Now I multiply the numbers together and get final opinion on headline = (1/10)*(1/16) = 1/160. After looking at the details of the study, my updated odds are 160 to 1 against the headline being true, because the study wasn’t strong enough to overcome my gut feeling.
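Both worked examples follow the same two-step update: pick a gut feeling, count the checklist answers, multiply. A minimal sketch in Python (the function name is mine) reproduces the arithmetic:

```python
def final_opinion(gut_feeling, yes_answers, no_answers):
    """Bayes-rule update using the article's rule of thumb:
    posterior odds = prior odds * 2^yes * (1/2)^no."""
    return gut_feeling * 2 ** yes_answers * (1 / 2) ** no_answers

# Hospital-checklist headline: gut odds 1/4, five "yes" and one "no" answer.
print(final_opinion(1 / 4, 5, 1))   # 4.0 -> odds of 4 to 1 in favor

# Facebook headline: gut odds 1/10, one "yes" and five "no" answers.
print(final_opinion(1 / 10, 1, 5))  # 0.00625 = 1/160 -> 160 to 1 against
```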

At some point, the trickle of data about you, your friends and the world started affecting every component of your life. Almost every decision you make is now based on information you have about the world around you, including health news. Putting numbers to a gut feeling or to the strength of a medical study isn’t always easy, and the Bayes’ rule idea I proposed isn’t the only way to evaluate headlines. But it does tell us that headlines are only half the story.


  1. Bayes’ rule can be written formally in several ways. In this article, we are using the “odds formulation” of Bayes’ rule.

    Posterior odds = (Prior odds)(Likelihood ratio)

    So, I update my belief about the odds of the study being true based on how strong the evidence for the headline is in the clinical study. Mathematically, the relationship is written:

    Final opinion on headline = (initial gut feeling) * (study support for headline)

  2. In general, you should probably ignore any health news article without a direct link to the original research. But if you can’t find a link, another option is to run a search on Google Scholar or Pubmed. You’re looking for research articles published recently and with some of the authors matching names of people quoted in the news report.

  3. From Table 4 in the paper, social support was associated with an increase of 0.073 percent in NKT cells in tumors and no change in the total number of cells.

Jeff Leek is associate professor of biostatistics and oncology at Johns Hopkins Bloomberg School of Public Health and co-director of the Johns Hopkins Specialization in Data Science. He writes for the blog Simply Statistics.