ABC News
What COVID Positivity Rates Can (And Can’t) Tell Us

The COVID-19 pandemic has brought up tons of data questions — what information should we be collecting, what can it tell us (and what does it fail to tell us) and how should the data we use to make decisions be communicated to the public? We’ve started a new series, “COVID Convos,” that brings these questions to the forefront through interviews with the scientists and practitioners who produce and use data on COVID-19.

Each conversation will be a chance for a different scientist to highlight a dataset or data question they think is particularly important and tell us why. This edition features a conversation with Jennifer Nuzzo, an epidemiologist at the Johns Hopkins Coronavirus Resource Center, which was conducted on February 2. Our discussion, which has been condensed and lightly edited, focused on the “test positivity” metric that seems to be everywhere during a COVID surge, and what it can and can’t tell us.

Maggie Koerth: Today, we’re talking about test positivity calculations and how they’re used and misused. I want to start things off by reminding all of our readers — and maybe myself — of what exactly test positivity rates even are. 

Jennifer Nuzzo: When we started tracking test positivity, we did so exclusively to answer the question, “Are we testing enough?” If you remember back to the beginning of the pandemic, there were all these stories about how many thousands of tests we were doing per day. There were all these proposals out there for how many tests the United States needed to be doing. And they literally ranged from the tens of thousands to the tens of millions. And we were like, “Well, which one is it?”

The more my colleagues and I thought about it, we realized that the amount of testing we would have to do would change based on how many infections we think are current. The closest that we could come to getting at that (with the data that were available at the time) was positivity. Initially, the only thing we could do was to look at the number of positive cases versus the number of tests. And we just looked at that ratio. 

It became clear that the positivity rate was better than counting the numbers of tests because there are places like Taiwan where the number of tests they were doing was very low, but they also, for a long period of time, had under 500 total cases. At some point, you run out of people to reasonably test if your prevalence of infection is so low.

Really this was to answer the question, “Are we testing enough?” — not to do a backdoor calculation of prevalence, because you can get a different positivity rate depending on who you test. And at various points in the pandemic, the people who were getting tested were different. They still are. Positivity, unfortunately, became misinterpreted as a proxy for prevalence, but it never was.

Maggie Koerth: What do you mean that the people who were getting tested are changing? 

Jennifer Nuzzo: If we only test the people who are really sick — say, the people who are showing up at hospitals that require admission — we’re going to have a very high positivity because the probability that those people have this virus is very high. We saw that in New York City in the early days when we were looking at their test positivity, where it was close to 50 percent. Now, at that point, it wasn’t that 50 percent of the population was infected, it was just that you had to be really darn sick in order to get a test in New York City.

Maggie Koerth: You said that there are multiple ways to calculate the test positivity rate. Can you talk about that a little bit? 

Jennifer Nuzzo: So the CDC has offered three different ways. And they’re all imperfect. You could do…  

1.) Of the number of PCR tests that you have done on a given day, what percentage is coming back positive? (That’s probably a fairly straightforward calculation.)

2.) The number of people who tested positive in a day compared to the number of tests performed. (Now anyone who’s mathematically minded would say, that seems weird because you have a unit in the numerator that’s different than the unit in the denominator. The reality is that may be the only data that is available.) Or…

3.) Of the people who were tested today, how many of them tested positive. 

Now, at the beginning, we were very interested in that third category because testing was so constrained that hardly anybody was able to get tested. And the people who were getting tested were largely people in the hospital. And those people were getting repeat tested a lot. You really only care about the time when they turned positive; you don’t care about the subsequent positives. And so those repeat tests, from a public health perspective, don’t tell us much. 

Now, though, testing has changed. We do testing for other reasons, including screening tests by employers. So the reality is you want to know all of those metrics and calculations, in part because they each tell you something different. The challenge is that the data states report is literally all over the place. You basically can’t get a single calculation for every single state that is a pure apples-to-apples comparison.
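The three calculations Nuzzo lists can be sketched as simple ratios. This is an illustrative sketch only: the function names and the day’s numbers are invented for the example, not an official CDC implementation.

```python
# Sketch of the three test-positivity calculations described above.
# All field names and sample numbers are hypothetical.

def positivity_by_tests(positive_tests: int, total_tests: int) -> float:
    """1) Share of PCR tests on a given day that came back positive."""
    return positive_tests / total_tests

def positivity_people_over_tests(new_positive_people: int, total_tests: int) -> float:
    """2) People testing positive divided by tests performed.
    Note the mismatched units (people / tests) that Nuzzo flags."""
    return new_positive_people / total_tests

def positivity_by_people(positive_people: int, people_tested: int) -> float:
    """3) Share of unique people tested today who tested positive;
    repeat tests of the same person count only once."""
    return positive_people / people_tested

# Hypothetical day: 10,000 tests on 8,000 unique people,
# 1,200 positive tests from 1,000 newly positive people.
print(positivity_by_tests(1_200, 10_000))          # 0.12
print(positivity_people_over_tests(1_000, 10_000)) # 0.1
print(positivity_by_people(1_000, 8_000))          # 0.125
```

With the same day of data, the three definitions give three different numbers, which is why comparing states that report different calculations is not apples-to-apples.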

Maggie Koerth: Why do we keep making these national and international maps of positivity rates, then?

Jennifer Nuzzo: Well, I think it’s important to get a snapshot of where generally we are, even if it’s imperfect. This is a national crisis. So we need a national map. But you shouldn’t in any way tie high-consequence decisions to any particular metric, particularly if you don’t acknowledge its flaws. And one of the things that we saw was that states like New York were using positivity metrics from different states as the basis for putting states on interstate quarantine lists. If we had standardized data for all 50 states, and if we were consistently testing approximately the same populations, that would be telling us something different, but that’s not what we have right now. 

What we are trying to understand is how wide of a net we are casting to find infections and count them as cases. It helps us interpret our case numbers. It helps us try to understand what we’re likely missing and which communities may not be as well-served as others. And my frustration is that it’s never been used as the operational metric it’s meant to be. If you see high positivity in an area, the first instinct should be to ask how we can go out and blanket the areas that may be in greater need with testing.

For the very few states that track testing with respect to race and ethnicity, we looked at the number of tests done per case in different racial and ethnic groups. And we did this because we know that there are disparities in representation among cases. There are disparities in representation among who gets hospitalized and who dies from this virus. So the question is, “Well, are there disparities and how wide of a net are we casting to find infections in these different racial and ethnic groups?”

And, unsurprisingly, there are. In Latinx populations, we’ve found that fewer tests per case are performed than in other racial and ethnic groups. What that tells us is that we are missing an opportunity to identify infections in these communities, so that people can know their status and try to protect their loved ones. We’re missing opportunities to connect infected people to care that could potentially save their lives. Testing is the start of that whole process of managing COVID.

Maggie Koerth: When we’re talking about higher positivity rates in African-American communities or Latinx communities, do we know that they actually have higher incidence rates? Or does that reflect that we’re just not testing those communities that much?

Jennifer Nuzzo: We’re pretty sure from multiple other pieces of data that there has been a disproportionate impact of the disease within BIPOC communities. But it’s also likely that lower access to testing has contributed to the spread by making it harder to connect people to care.

Maggie Koerth: So it sounds like we have gotten into this place where we see a high positivity rate and the reaction is that we need to institute mitigation controls to reduce the number of people who are getting COVID. But what we should be doing is testing more.

Jennifer Nuzzo: Part of how we mitigate is by casting a wider net to find infections so that we can stop transmission chains. And that’s the piece that’s always been missing. If I saw test positivity in an area go up I would say, “Why is it going up? Is it that the asymptomatic people are not showing up to get tested? Are people increasingly not wanting to get tested? Are they unable to get tested for some reason?”

I would want to go and make sure that community has access to masks so they can protect themselves at home, that they know how to access care if they need it. And, of course, we want to bring the infections down, but it also starts with knowing who’s infected.

Maggie Koerth: Scientists like your team introduced test positivity as a metric for whether we’re testing enough. How did it become the metric that we track transmission with?

Jennifer Nuzzo: Well, it’s used that way in other infections. Take the flu, for instance: We often talk about test positivity, but flu testing isn’t nearly as dynamic as COVID testing has been. We don’t usually test people without symptoms or particular epidemiologic reasons to think they’re infected.

Maggie Koerth: So it’s not necessarily a bad way to use the metric in a broad sense, but it’s not working here.

Jennifer Nuzzo: Yeah. If the positivity goes up, that tells me something’s happening, that makes me worried. I think it’s likely due to increased infections. But I also look at the denominator, right? I’m trying to see what’s contributing to that change. How much is the numerator going up versus the denominator? Epidemiologists rarely look at one metric; we always look at multiple and we try to see if there are signals because our data is always imperfect. And that doesn’t mean it’s not useful. It just means that you have to do more triangulation than people may appreciate.
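The point about watching the numerator and the denominator separately can be illustrated with invented numbers: the same positivity jump can come from more infections or from less testing.

```python
# Illustration (invented numbers): identical positivity increases with
# very different causes, which is why epidemiologists look at both the
# numerator and the denominator rather than the ratio alone.

def positivity(positives: int, tests: int) -> float:
    return positives / tests

baseline = positivity(1_000, 20_000)         # 0.05
more_infections = positivity(2_000, 20_000)  # numerator doubled -> 0.10
less_testing = positivity(1_000, 10_000)     # denominator halved -> 0.10

print(baseline, more_infections, less_testing)  # 0.05 0.1 0.1
```

Both scenarios land on the same 10 percent positivity, but one signals a genuine surge while the other signals a testing shortfall, two situations that call for different responses.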

Maggie Koerth: So is there a metric that we should be using to compare transmission rates? 

Jennifer Nuzzo: I think you have to use multiple things. And I think the fairest comparison is to compare a state to itself or a country to itself over time and to try to understand what is driving those changes.

Now we’re in this problem where our infection prevalence has gone up so incredibly high with omicron and testing has been so incredibly constrained that our very, very imperfect back-of-the-envelope calculation of national test positivity is over 20 percent — really high. That positivity rate reflects all the people who showed up at testing sites somewhere. That’s not all the people who tested themselves at home. And we know that a lot of people are testing themselves at home these days because it’s literally the only option. So now we’re at yet another juncture where we’re saying, “Okay, well, what is our testing data telling us?” It’s different now than it was a year ago. It represents a different population.

We do not have a way to calculate positivity rate exactly the same way for all 50 states using their own data. States report data to the CDC, which allows the agency to show a single positivity calculation, but we’ve found variation between those numbers and what the states show on their own dashboards, probably because they are showing different calculations. At best, it can lead to confusion. And yet we think it is absolutely important in a national crisis that we have a 50-state calculation, because we need to know whether we as a country are doing better or worse. So the alternative is not to just not use data; it’s to use data, but be very mindful of the ways in which it’s flawed and what you can and cannot tell for sure from the data.

Maggie Koerth: Are there any steps that state governments or the federal government could be taking right now, at this point in the pandemic, to make this metric work better?

Jennifer Nuzzo: Yeah, I mean, I still think it’s not too late for data standards, telling states how best to use testing at this point. I don’t think that we’re going to continue this level of testing forever. But there is more strategic testing that we could do to answer questions like, “What is the prevalence of infection?” That’s hard to discern when you just have a passive collection system (what we have), which is where you make tests available and you hope that people avail themselves of it. If you really want to answer that question, you sample strategically to try to get a representative sample. 

I would also like to see states making data more available at the local level, because while the national statistics are important for telling us how well the country is doing, you want hyperlocal numbers to know for sure what the risk around you is. We don’t interact with our entire state, we don’t interact with our entire county, we interact in social networks that are fairly fixed and clustered until we do something like travel across the country for the holidays. So we really need more hyperlocal data to be able to guide our own decision-making about navigating risk. And as we’re pulling away from state-based interventions, which I think is appropriate, we are effectively starting to leave it up to people to decide for themselves how they want to navigate risk. And we’ve not equipped people with the information to know what the risk around them is.

Maggie Koerth: For average people who see test positivity numbers, or even just case numbers for their state — what should they be taking away from that data? How can you use it in the form that it’s in now?

Jennifer Nuzzo: I always encourage people to pay less attention to the number and more attention to the direction of the number. So if the number is going up, that makes me more worried than when it’s going down. Obviously, if a number is insanely high, like 20 percent, that says a whole lot of people are infected and they don’t know it. So I use the data to decide whether I’m gonna wait another month before I take my kids to the aquarium. 

I want to make sure people don’t take away that the data is bad, so it’s useless. I think the alternative — using no data — is not an option. I think it’s using data, but being very humble about what the data is and what its strengths and weaknesses are and then using that to help you triangulate your way to the truth.

Maggie Koerth is a senior science writer for FiveThirtyEight.
