Coronavirus Models Were Always About More Than Flattening The Curve

Hey, remember March? It was a simpler time. Everyone had a loaf of fresh sourdough. “Animal Crossing” was all the rage. And across this great nation, Americans were breathlessly hanging on the latest epidemiological model updates.

But that was then. Now we don’t need complicated math to tell us that 190,000 Americans have died from the novel coronavirus, with no end in sight.

Where once public debates about the reliability of one model versus another could command front-page headlines, today updated projections barely break into the news cycle. The virus goes on, the deaths continue to mount, but somehow public concern about predicting what will happen with the case count and death toll seems to have drifted off, like an aerosol droplet on a summer breeze. But the models are still there, quietly updating. Most of them agree we’ll have more than 200,000 deaths by mid-September.

As the fear and restlessness of the early pandemic give way to the steady ache of constant vigilance — and as we become increasingly resigned to death — is there still value in efforts to predict how many people will succumb by a certain date?

Experts told me “yes.” But they also told me “no.” And “sort of, it depends.” Turns out, it’s not only difficult to make a good model of a pandemic, it’s also pretty hard to use those models effectively. What the models are telling us now — what they’ve always been telling us — is not, necessarily, the job we asked them to do at the beginning of this pandemic.

Models are still factoring into officials’ decision-making, experts told me, especially the models that combine information about how a disease works with demographic and behavioral data. (These are called “mechanistic models” in forecaster lingo.) These models are especially useful for playing out “what-if” scenarios — if we intervene in this way, would the outcome be better or worse than if we intervene in that way? The Imperial College model you heard about a bunch back in the spring is a good example of a mechanistic model.
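To make the “what-if” idea concrete, here is a minimal sketch of a mechanistic model: a bare-bones SIR (susceptible-infected-recovered) simulation. It is not the Imperial College model or any group’s actual code, and the transmission rates, recovery rate and population size below are illustrative assumptions, not real estimates.

```python
# A toy mechanistic ("what-if") model: a discrete-time SIR simulation.
# All parameter values here are illustrative assumptions.

def sir_peak(beta, gamma=0.1, population=1_000_000, infected0=100, days=180):
    """Simulate an SIR epidemic and return the peak number of infections."""
    s, i, r = population - infected0, infected0, 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population  # contacts that transmit
        new_recoveries = gamma * i                  # infections that resolve
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# The "what-if": compare an unmitigated scenario with one where an
# intervention (masks, distancing) cuts the transmission rate in half.
baseline = sir_peak(beta=0.3)
mitigated = sir_peak(beta=0.15)
print(f"Peak infections: {baseline:,.0f} unmitigated vs. {mitigated:,.0f} mitigated")
```

Running the two scenarios side by side is exactly the kind of comparison policymakers ask of mechanistic models: the absolute numbers are only as good as the assumptions, but the gap between scenarios shows what an intervention buys you.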

The other kind of model is a statistical one, which uses data on things like death counts and hospitalizations over time to forecast how many future deaths are likely under current conditions. The Institute for Health Metrics and Evaluation (IHME) model, once the favored model of the Trump Administration, works this way.
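A statistical model, by contrast, extrapolates from the recent trend without modeling the disease itself. Here is a deliberately simple sketch of that idea — a least-squares line fit to weekly death counts, projected a couple of weeks forward. The counts are made-up illustrative numbers, not real surveillance data, and actual models like IHME’s are far more sophisticated.

```python
# A toy statistical forecast: fit a straight line to recent weekly
# counts and extrapolate. The data below are hypothetical.

def linear_forecast(counts, weeks_ahead):
    """Least-squares line through (week, count), extrapolated forward."""
    n = len(counts)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(counts) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, counts)) \
            / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + weeks_ahead)

recent_weekly_deaths = [5200, 5600, 6100, 6500]  # hypothetical counts
projection = linear_forecast(recent_weekly_deaths, weeks_ahead=2)
print(f"Projected weekly deaths in two weeks: {projection:,.0f}")
```

A model like this says nothing about masks or lockdowns; it simply assumes current conditions continue — which is also why, as experts note later in this piece, such projections are most trustworthy only a few weeks out.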

In the spring, attention was largely focused on statistical models and what they could tell us about whether the pandemic would truly be a big deal in our lives. Now, with that question pretty solidly answered (thank you very much), researchers are primarily using mechanistic models to understand the past (like, say, what might have been behind various dips and surges in transmission) and help make decisions about what to do next (for example, what guidelines a university should follow to safely reopen classes).

That there is a difference between the two types of models, and that they do different things, has largely been lost on the general public, said William Hanage, a professor of epidemiology at Harvard. That helps explain why it feels as though models have stopped being important, he said, even though they haven’t. The focus just turned to different goals.

Currently, states are using models to explore potential outcomes of opening certain kinds of businesses, said Carter Price, a senior mathematician at the Rand Corporation. Universities use them to decide who should be getting tested for COVID-19 and how often. Models can also help answer questions like “Who should be at the top of the list for receiving a coronavirus vaccine?” Hanage said. Statistical death count projections are still being done, though experts told me they’re most accurate and useful in the short term — think two to four weeks out.

But the answer to the question of whether models still matter is really yes and no, said John Drake, director of the Center for the Ecology of Infectious Diseases at the University of Georgia and a past FiveThirtyEight contributor. And that’s largely because of political choices.

It’s sort of like the “if a tree falls in the forest” question: If a model tells you something bad is likely to happen and politicians ignore it, does the model really matter?

The way COVID-19 has become politicized — with partisan divides on a broad variety of issues, including mask wearing, stay-at-home orders and whether the pandemic itself is a myth — has meant some governments have made decisions that fly in the face of empiricism rather than being informed by it. Drake’s home state of Georgia, for example, lifted its stay-at-home order at a time when models suggested it should stay closed for another two months.

And then there’s the other, less-obvious problem in the clash between models and politics — data collection. “The data are appalling,” Hanage said. What he means by that is that the United States never established a consistent, sustained, national system to test residents for COVID-19 and trace who was in contact with people who tested positive. Testing rates go up and down. The turnaround for results lags and speeds up and lags again. Both factors are different in different states — even in different parts of the same state — and at different times. And centralized data collection and analysis … just hasn’t happened. As a result, researchers don’t have good, reliable data on how the coronavirus spreads — which is fairly important if you’re trying to put together a model to show you ways to prevent the spread.

Those are political choices. Policy makers could have built a robust testing system. They could have implemented rigorous contact tracing. But they didn’t. And several experts told me that they believed the country’s springtime obsession with statistical modeling was part of why. “The early focus on modeling sucked a lot of air out of things that should have been focused on … less sexy things like data-collection issues,” said Alex Engler, a Rubenstein fellow specializing in governance studies at The Brookings Institution.

Without that data, ironically, the models have suffered. One thing Drake would love to know is how effective lockdowns were. We know they were effective, he told me. You can see that observationally just by looking at what happened with case numbers when lockdowns were implemented and removed. But in the midst of lockdown, a lot of other interventions also came into play — face masks, plexiglass screens at the grocery store, contactless deliveries, and more. How much of the effectiveness of lockdown was thanks to the lockdown and how much was thanks to those other things? “That’s an exquisite question for modeling to answer,” Drake said. “But we can’t do it because of our inconsistent data collection.”

We got hung up on the flashy promise of models, and then we made those models less useful by not collecting the boring stuff they needed to operate effectively. Scientists and politicians began this pandemic with almost no data on this coronavirus. The models were originally based on data from previous outbreaks of its cousin viruses, SARS and MERS, but scientists assumed that politicians would quickly muster the forces of public health to collect detailed, COVID-19-specific data.

Unfortunately, experts said, scientists really weren’t ready for their assumptions about governments — and people — to be dead wrong. And that’s a problem that carries over to the models.

All models are based on assumptions, and some of our assumptions about how people would behave during the COVID-19 pandemic weren’t correct. For example, early on, model designs were often based in part on the assumption that the rise and fall of cases in the U.S. would match the curves we’d seen in Wuhan, China, and the Lombardy region of Italy. Scientists assumed that curve was something intrinsic to the virus. But it wasn’t. Instead, Hanage said, the neat curve turned out to reflect extreme lockdowns and stringent travel restrictions — things the United States was not prepared to choose for itself. And so our curves have looked different, and the models were, at first, wrong.

And those wrong assumptions affect more than this one pandemic. The idea that travel restrictions and population-wide lockdowns don’t work was baked into existing epidemiological models before COVID-19 came along, said Alex Siegenfeld, a Ph.D. student studying modeling and social behavior at MIT. The assumption was that such measures would always be leaky and impossible to maintain — an idea I wrote about myself back in February. But that has proven correct for only some people, some of the time.

None of this is to say that modeling COVID-19 is pointless. It’s just that using a model effectively is sometimes harder than simply doing math. A model can seem relatively easy to put together, so easy that it’s not even much of an intellectual challenge. But people turn out to be really, really complicated. And even seven months later, that’s a lesson we’re still struggling to learn.

Maggie Koerth is a senior science writer for FiveThirtyEight.
