ABC News
Pollster Scorecard: InsiderAdvantage (Be Careful What You Wish For)

UPDATE: InsiderAdvantage’s Matt Towery has apologized to me, both privately and in the comments section at this website, which I sincerely appreciate.

As you might imagine, I’ve had a long couple of days since our new pollster ratings were released. I certainly don’t mind hearing from polling professionals, and some of the criticisms were well considered.

Nevertheless, a disturbing situation has come to my attention.

The polling firm InsiderAdvantage implies on the front page of its website that I consider them to be among the “most accurate of all polling firms” and that I “relied on InsiderAdvantage” during the 2008 campaign. A screen capture of the front page appears below the fold.

In fact, I do not consider InsiderAdvantage to be one of the most accurate polling firms. On the contrary, I consider them to be one of the least accurate polling firms. Of the 63 firms to have released at least 10 polls into the public domain, they rank 62nd — next to last — in my pollster ratings.

The claim stems from a lecture I delivered at Fordham University on January 22, 2009, which was accompanied by a PowerPoint presentation. A copy of the PowerPoint, in Office 2007 (.pptx) format, can be found here.

Fordham’s write-up of the presentation states that:

“Silver’s analysis showed that Zogby, AP-GFK and Insider Advantage were the most accurate of all polling firms, although the percentages separating them were small.”

This is evidently the basis for InsiderAdvantage’s claim, as to my knowledge they did not have a representative present at the Fordham presentation. If they had been at the presentation, or if they had contacted me at any point thereafter, they would have been disabused of the notion that I find them to be among the “most accurate of all polling firms”.

In the PowerPoint, I presented two versions of an “error analysis”. The first was termed a “simple error analysis”. In that version, I looked at the last poll from each firm with a median field date of 10/20/08 or later, for all applicable Presidential and Senate elections in the 2008 general election cycle. InsiderAdvantage placed third among the 17 firms that I listed in the “simple error analysis”.

However, immediately after presenting the “simple error analysis”, I also presented a “complex error analysis” that was “regression-derived” and which accounted for “the degree of difficulty in forecasting different states”. In that version, InsiderAdvantage placed eleventh of the 17 firms.

The “complex error analysis” is much closer to the method that we use in calculating our pollster ratings. It showed InsiderAdvantage’s position dropping, among other reasons, because just one of the eight polls included from InsiderAdvantage was from a Senate race, and Senate races are more difficult to forecast than Presidential races. Once this was accounted for, InsiderAdvantage’s position dropped to the middle-to-low end of the pack.

A far greater problem, however, is that eight polls is an insufficient number to evaluate the performance of a polling firm. In fact, I stipulated this in the PowerPoint, which said that it “may take several elections to determine [the] best pollsters to [a] statistically significant degree”.

To give you some idea of how noisy the data is, and how little eight polls might tell you, consider what would have happened if, rather than setting the cut-off date at 10/20 in my analysis at Fordham, I had instead set it 24 hours earlier on 10/19. In that case, an InsiderAdvantage poll of Nevada would have been included, which projected a tied race when, in fact, Barack Obama won the state by 12.5 points. Had that poll been included in the analysis, InsiderAdvantage’s score in the “simple error analysis” would have dropped from 2.38 to 3.50, and they would have placed ninth of the 17 firms, rather than third.
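The sensitivity described above is easy to verify from the figures in the text. The individual poll errors were not published, so this sketch works backward from the 2.38 average over eight polls:

```python
# Sketch: how one large miss shifts a small-sample average error.
# The 2.38 average over 8 polls and the 12.5-point Nevada miss are
# from the post; the individual poll errors are not published.
n_polls = 8
avg_error = 2.38     # "simple error analysis" score over 8 polls
nevada_miss = 12.5   # projected tie vs. a 12.5-point Obama win

new_avg = (n_polls * avg_error + nevada_miss) / (n_polls + 1)
print(round(new_avg, 2))  # → 3.5, matching the 3.50 figure in the text
```

With only eight or nine data points, a single outlier moves the average by more than a full point, which is the crux of the small-sample argument.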

More broadly, however, InsiderAdvantage’s problems do not stem from their polling in general elections, which has been somewhat below average — but unbiased and basically adequate. Instead, they stem from their polling in primaries, as is apparent from their Pollster Scorecard:

Note that, of the +1.38 rawscore that we give to InsiderAdvantage (positive rawscores are bad), 1.24 points is contributed from primary elections. This is because more than two-thirds of the polls that we have in our database from InsiderAdvantage are from primaries, and their performance has been poor there — about 1.8 points worse than other firms polling under the same conditions.
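As a rough consistency check (not the actual ratings computation), the primary-election contribution to the rawscore is approximately the share of their polls that are primaries times their excess error in primaries:

```python
# Back-of-envelope check of the rawscore decomposition above:
# "more than two-thirds" of the firm's polls are primaries, and they
# run about 1.8 points worse than peers there, so their weighted
# contribution to the rawscore should be roughly 2/3 * 1.8 ≈ 1.2,
# consistent with the 1.24 figure quoted in the text.
primary_share = 2 / 3
primary_excess = 1.8
print(round(primary_share * primary_excess, 2))  # → 1.2
```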

InsiderAdvantage had received a roughly average score in the May 2008 version of our pollster ratings. There are several reasons that their score has deteriorated in this version:

— We identified seven InsiderAdvantage polls from the 2004 primary campaign, which we did not have in our database before. Results-wise, these polls were terrible, missing by 12.4 points on average. They alone account for about half of the unfavorable score associated with InsiderAdvantage.
— We also found, in accordance with the more sophisticated version of the analysis that I presented at Fordham, that their 2008 general election polling was in fact somewhat below average, rather than somewhat above average.
— We now account for all polls that a firm conducts within the 21 days prior to an election, rather than merely the last one. This compounded InsiderAdvantage’s problems because, for instance, they conducted four polls of the Democratic primary in North Carolina, all of which were poor, although it helped them in some other cases.
— We now account for how recently a poll was conducted in advance of an election. InsiderAdvantage tends to poll very late — often, right up until the night before the election; the average poll we have for them in the database was conducted 6 days ahead of the election, rather than 10 days for all other polling firms. Because the advantage associated with timely polling is fairly large in primaries, and primary polling constitutes the bulk of InsiderAdvantage’s output, the previous analysis had been mistaking more timely polling for skill.
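The timing adjustment in the last bullet can be sketched as follows. This is an illustrative toy, not FiveThirtyEight’s actual formula: the idea is to score each poll against the error expected for a poll conducted that far out, so a firm that polls late is not mistaken for a more accurate one. The expected-error function here is hypothetical:

```python
# Illustrative sketch (not the actual ratings formula): score a poll
# relative to the error expected given its timing, so late polling
# isn't credited as skill.
def timing_adjusted_error(raw_error, days_before, expected_error_by_days):
    expected = expected_error_by_days(days_before)
    # Negative = better than timing alone would predict.
    return raw_error - expected

# Toy expectation: error grows ~0.2 points per day further from the election.
toy_expectation = lambda days: 2.0 + 0.2 * days

# A 1-day-out poll missing by 3.0 points looks worse, after adjustment,
# than a 10-day-out poll missing by 4.0 points.
late = timing_adjusted_error(3.0, 1, toy_expectation)    # ≈ 0.8
early = timing_adjusted_error(4.0, 10, toy_expectation)  # ≈ 0.0
print(late, early)
```

Under an adjustment of this kind, a firm averaging 6 days out is no longer rewarded simply for polling closer to Election Day than firms averaging 10 days out.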

The combination of accounting for significantly more data, and accounting for it in a more robust way, significantly and unfavorably impacted our characterization of InsiderAdvantage’s performance.

In an article at the Atlanta Journal-Constitution, InsiderAdvantage’s Matt Towery challenged me to “show [him] the races that we’ve missed”. One should be careful what one wishes for: InsiderAdvantage has in fact had a number of very conspicuous, and very large, misses. The following table shows the 15 largest misses from among the InsiderAdvantage polls in our database:

Consider that InsiderAdvantage, which has just 74 polls in our database, has 10 cases in which they missed the final margin between the candidates by 15 or more points. SurveyUSA, by contrast, has 11 such misses — even though they have 634 polls.
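Converting those counts to rates makes the contrast explicit:

```python
# Rates of large misses (15+ points), using the counts quoted above.
ia_misses, ia_polls = 10, 74       # InsiderAdvantage
susa_misses, susa_polls = 11, 634  # SurveyUSA

ia_rate = ia_misses / ia_polls        # ≈ 13.5% of InsiderAdvantage polls
susa_rate = susa_misses / susa_polls  # ≈ 1.7% of SurveyUSA polls
print(f"{ia_rate:.1%} vs {susa_rate:.1%}")  # → 13.5% vs 1.7%
```

Roughly one in seven InsiderAdvantage polls was a 15-point miss, versus about one in sixty for SurveyUSA.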

In fairness to InsiderAdvantage, we should note that this is not an entirely apples-to-apples comparison. InsiderAdvantage focuses on primaries, and moreover, Southern primaries, which are very difficult to forecast. However, our method accounts to the extent possible for the degree-of-difficulty that InsiderAdvantage faces, and nevertheless finds them to be significantly below average.

Polling firms should not cite any characterization I have made of their polling other than words that I speak or write directly, and they should not cite any analysis other than the most current version of the pollster ratings. The current version of the pollster ratings is the sole official product that Nate Silver and FiveThirtyEight use to evaluate the performance of different polls in forecasting election outcomes.

I have no animus toward Matt Towery or InsiderAdvantage; I hope that their polling improves (although it has been poor, it has at least been on an upward trajectory), and I am sympathetic to them because Fordham University’s account of my presentation was misleading. However, I reserve every right to become less sympathetic, and would strongly advise them to immediately “cease and desist” from using my name in connection with their marketing materials. I would also advise them, and other polling firms, to be more cautious in the future.

Nate Silver is the founder and editor in chief of FiveThirtyEight.