Who Should Recount Elections: People … Or Machines?

Midterms 2018: Why election results are likely to be contested in tight races

You thought Tuesday would be the end of this midterm cycle? Oh, sweet summer child. Hundreds of close election results will likely still be contested after Election Day. Recount season is coming. And, with it, perennial debates about who should be doing the recounting. Is this a job for humans … or machines?

Back in 2016, when Green Party presidential candidate Jill Stein asked for a recount in several key states, it seemed clear that human hands were the preferred instrument for recounting the votes in an election. Stein specifically requested a hand count in her petitions. When the state of Wisconsin refused, she sued, arguing that the computerized optical scanner machines used by most districts in the state to count paper ballots were more prone to making errors in a count. Hillary Clinton’s lawyer agreed. And so did some computer security experts.

Political scientists say the answer is clear. “There are a lot of assertions out there, and I see them constantly being made, that machines are error-prone and humans are perfect in counting,” said Charles Stewart, professor of political science at MIT. “And that doesn’t bear out.”

Instead, as recounts become an increasingly normal part of the election cycle, experts are finding evidence that optical scanners are better at raw counting — but humans are still necessary because recounts are often about interpreting human behavior as much as they are about counting.

Recounts are becoming more common, said Edward Foley, director of the election law program at Ohio State University’s Moritz College of Law. It’s hard to count the exact number, though, because most recounts happen in local, rather than statewide, races, and because there are a lot of things colloquially called “recounts” that aren’t.

That said, there’s evidence that Americans are fighting over election results more than we did in the past. Richard Hasen, a professor of law and political science at the University of California, Irvine, came up with one estimate by searching a database of legal cases for the keyword “election” and variations on the word “challenge,” and then culling the results to remove cases that weren’t relevant. A keyword search like that inevitably misses relevant cases that don’t use those exact words, so his estimate is likely an undercount, but it still shows a significant increase in challenged election results over time: he found at least 337 such cases in 2016, up from 108 in 1996.
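To make that method concrete, here is a minimal sketch of the kind of keyword search Hasen describes, in Python. Everything in it is illustrative: the case records, field names and regular expressions are invented for this example, not drawn from his actual database or criteria.

```python
import re

# Hypothetical case records; a real legal database would be queried
# through its own search interface, not a Python list.
cases = [
    {"id": 1, "summary": "Petitioner challenges the election results in Ward 3."},
    {"id": 2, "summary": "Defendant's election of remedies was challenged on appeal."},
]

# Match "election" plus variations on the word "challenge"
# (challenge, challenges, challenged, challenging).
ELECTION = re.compile(r"\belection\b", re.IGNORECASE)
CHALLENGE = re.compile(r"\bchalleng(?:e[sd]?|ing)\b", re.IGNORECASE)

keyword_hits = [c for c in cases
                if ELECTION.search(c["summary"]) and CHALLENGE.search(c["summary"])]

# Both records match the keywords, but only the first disputes a vote
# count; the second is the kind of false positive that hand-culling removes.
print([c["id"] for c in keyword_hits])  # [1, 2]
```

The false positives are why the culling step exists, and the relevant cases that never use those exact words are why the final tally likely undercounts.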

These numbers represent three main kinds of disputes, Foley told me. First, candidates (and their lawyers) argue over which ballots should be counted and which should be thrown out as ineligible. Then, they argue over which candidate specific ballots should count for. Finally, they argue over whether all the eligible votes were counted correctly: the actual recount. Humans are much better than machines at making decisions in the first two kinds of ambiguous disputes, Stewart said, but evidence suggests that the computers are better at counting. Michael Byrne, a psychology professor at Rice University who studies human-computer interaction, agreed. “That’s kind of what they’re for,” he said.

The research bears that out. In 2004, Stephen Ansolabehere, a political science professor now at Harvard, published a study that looked at error rates over decades’ worth of recounts in the state of New Hampshire. The research grew out of the concern that optical scanning machines had contributed to the madness of the 2000 presidential election.1 But Ansolabehere found that the scanners were actually better counters than people.

In general election recounts between 1946 and 1962, when everything was hand-counted without even the aid of punch card machines, there was an average 0.83 percentage point discrepancy between the first vote counts and the recounts. In the 2002 general election recounts, by which point multiple jurisdictions were using optical scanners, the average hand-count discrepancy was about the same: 0.87 percentage points.2 The average optical scanner discrepancy, in contrast, was 0.56 percentage points.
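For readers keeping track of the units: a discrepancy in percentage points is the gap between the two counts measured against the ballots cast. Here is a minimal sketch of that arithmetic in Python; the town and its numbers are invented to mirror the magnitudes above, not taken from the study, whose exact formula may differ.

```python
def discrepancy_points(first_count: int, recount: int, ballots_cast: int) -> float:
    """Gap between the original count and the recount, as a share of
    all ballots cast, expressed in percentage points."""
    return abs(first_count - recount) / ballots_cast * 100

# A hypothetical 10,000-ballot town where the recount finds 83 more
# votes for a candidate than election night did:
print(discrepancy_points(4_200, 4_283, 10_000))  # 0.83
```

Averaging that figure across many recounted races is what produces numbers like the 0.83 and 0.56 quoted above.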

Earlier this year, Ansolabehere and Stewart did a similar study, looking at general election recounts in Wisconsin in 2011 and 2016. Again, they found that jurisdictions that did their counting by computer had the lowest error rates. Even jurisdictions that used computers for the election night count and humans for the recount had a lower rate of error than jurisdictions that used human counters both times.

That’s not to say that the time has come to fire all humans. All the experts I spoke with emphasized that, while computers count better, humans are still necessary — because we’re better at spotting the kinds of weird errors other humans make when they vote. You can program an optical scanner to “see” both a completely filled-in oval and an oval with an “X” in the middle, Stewart said. But voters are too creative with how they fill in ballot bubbles for every possible exception to be programmed into the system. He mentioned one case in Orange County, California, where a bunch of ballots were filled in using silver pens. “The optical scanners couldn’t pick it up,” he told me. “Somehow, they found out and were able to hand-count the votes.”
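Real scanner firmware is proprietary and far more sophisticated, but a toy threshold model, with made-up cutoffs, shows why an unanticipated ink can slip through: a mark only counts if enough of the bubble’s pixels read as dark.

```python
def bubble_is_marked(pixels, dark_cutoff=0.5, fill_threshold=0.25):
    """Toy mark detector: a bubble counts as filled if at least
    `fill_threshold` of its pixels are darker than `dark_cutoff`.
    `pixels` holds per-pixel darkness values between 0 and 1."""
    dark_fraction = sum(p > dark_cutoff for p in pixels) / len(pixels)
    return dark_fraction >= fill_threshold

# A solid oval and an "X" both clear the threshold...
print(bubble_is_marked([0.9] * 100))              # True: fully shaded oval
print(bubble_is_marked([0.9] * 30 + [0.0] * 70))  # True: an "X" covers ~30%
# ...but reflective silver ink can scan as nearly white and go uncounted.
print(bubble_is_marked([0.15] * 100))             # False: the mark is missed
```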

Likewise, there will always be ballots where the voter’s intention isn’t perfectly clear, either to computers or humans. “Lizard people!” said Ansolabehere, remembering an infamous ballot from Minnesota’s 2008 Senate recount in which a man named Lucas Davenport appeared to cast a vote for both Al Franken and the aforementioned reptilians. A state board eventually decided to throw the ballot out. That’s not a decision computers should be making on their own.

There are also trust issues. “It’s hard to trust computers,” Byrne said. “You can’t observe the process.” That’s especially concerning given the computers’ known vulnerabilities to hacking, but it’s also compounded by the possibility of simple (but consequential) mistakes. This year, for instance, Texas has fielded multiple complaints from people who tried to vote straight-party tickets and found the machines leaving some offices blank or filling in a candidate the voter didn’t want. The state says the problem is user error, while other reports have blamed aging voting machines. The biggest risks probably lie in direct-recording electronic voting machines, which produce no separate paper record and so leave nothing physical to argue over in a recount, Ansolabehere said.

But, surprisingly, the recounters whom voters trust the least are … human. In the 2012 Cooperative Congressional Election Study, more than half of 2,000 respondents said it was easy to rig a vote with hand-counted ballots. Both optical scanners and direct-recording computers were trusted more.

In the end, despite the risks of computer counts, we’re the ones who give ourselves the heebie-jeebies.

Footnotes

  1. The 2000 election problems famously took place in Florida. Ansolabehere chose to study New Hampshire instead for several reasons. The state had a long, uniform reporting history of vote counts and recounts, it had a mixture of precincts using humans and optical scanning technology for recounts, and it had more elections than many other states, giving him a bigger data set to work with.

  2. One town, Bradford, had hand-count discrepancies so high that the authors calculated the average without it. If you add Bradford back in, the average discrepancy for hand counting becomes nearly 2.5 percentage points.
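The arithmetic behind that adjustment is simple, as a quick sketch shows; the per-town figures below are invented to mirror the footnote’s magnitudes, not the study’s actual data.

```python
# Hypothetical hand-count discrepancies, in percentage points:
typical_towns = [0.9, 0.8, 0.9, 0.9]   # towns in line with the sample
with_bradford = typical_towns + [9.0]  # add one Bradford-like outlier

print(sum(typical_towns) / len(typical_towns))  # 0.875, near the 0.87 above
print(sum(with_bradford) / len(with_bradford))  # 2.5: one town drags the mean
```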

Maggie Koerth was a senior reporter for FiveThirtyEight.
