Although the evidence continues to mount that there is something very funny about Strategic Vision’s process, for the most part there has not been too much reason to question their results themselves, which have tended to play it safe and straight down the fairway. Strategic Vision was rated as a pollster of roughly average accuracy in our pollster rankings, which were based on results through the 2008 primaries.
As Tom Jensen at Public Policy Polling notes, however, it would not be that hard to manufacture the results of an election poll. Just look up the average at RCP or Pollster.com, or 538, tweak it upward or downward a couple of points depending on your whim, and you’re good to go. But once you venture outside the bubble of electoral politics and into an area where you can’t copy off your neighbor, there is potentially more room for a dishonest pollster to get into trouble. Here, then, are a few oddities from a poll that Strategic Vision recently conducted for an educational think tank.
The poll in question comes from the Oklahoma Council of Public Affairs (OCPA), a conservative-leaning think tank that recently commissioned Strategic Vision, LLC to conduct a poll of 1,000 Oklahoma high school students. (A similar poll had previously been conducted by Strategic Vision, LLC in Arizona.) The poll asked ten relatively basic political-knowledge questions drawn from the U.S. Citizenship Test, such as: “How many justices are on the Supreme Court?”
Only 2.8 percent of Oklahoma’s high school students passed the test, OCPA and Strategic Vision claim, where passing is defined as getting at least 6 of the 10 answers right. Moreover, the results on some particular questions were strikingly low. Ostensibly, only 23 percent of the students correctly identified George Washington as the first President, and only 43 percent correctly named the Democrats and Republicans as the two major political parties (11 percent of the students, OCPA and Strategic Vision claim, provided the answer “Communist and Republican”).
For me, some of these results don’t pass the smell test. I agree that public schooling in the United States needs to be improved, particularly in the areas of government and citizenship. But only 23 percent of high school students in Oklahoma knew that George Washington was the first President? Really? I have difficulty accepting that claim at face value. In 2008, 68 percent of Oklahoma fifth graders passed the Oklahoma Core Curriculum Social Studies Test. You can read some of the questions on that test beginning on page 50 of this PDF; they’re generally quite a bit more difficult than the ones that Strategic Vision asks. (For instance, “Which was the most profitable export of the Jamestown settlement?” and “Which group would most likely agree with ideas presented in Common Sense?”). So either those smart fifth graders were really forgetful by the time they got to high school, or there’s something very wrong with this poll.
But let’s put that aside for a moment and examine the math. Here are the results of the poll, as taken from OCPA’s website:
When I first saw these results a couple weeks ago, they really got my spidey sense tingling. Forget about the overall level of knowledge being low — what I found strange was that not one student, out of 1,000, answered as many as eight of the ten questions correctly. Isn’t there some total nerd in Tulsa, some AP Honors student in Stillwater, who could answer at least eight of these ten very basic questions? The distribution seems to be too compact.
Let’s run a couple of simulations to test the robustness of these results. In the first simulation, I’ll assume that:
(i) the student body is homogeneous — everyone is as knowledgeable as everyone else, and
(ii) the questions are independent of one another; so knowing, say, who wrote the Declaration of Independence doesn’t make you any more (or less) likely to know what the Bill of Rights is.
These are completely unrealistic assumptions, which, as you’ll see, is the whole point.
But stay with me for a minute. To conduct the simulation, we’ll create 50,000 “students”, and they’ll randomly get the questions right or wrong based on the percentages in Table 1. So, when we ask, for instance, who was the first President of the United States, they have a 23 percent chance of correctly guessing George Washington and a 77 percent chance of getting the question wrong. Then we’ll add up the results for each student and see how they did.
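The first simulation is easy to sketch in a few lines of Python. Note that only two of the ten per-question percentages from Table 1 appear in the text (23 percent for the first-President question, 43 percent for the two-parties question); the other eight probabilities below are illustrative placeholders, not the actual Table 1 values.

```python
import random

# Per-question probabilities of a correct answer. Only the 23% (first
# President) and 43% (two major parties) figures come from the article;
# the other eight are illustrative placeholders for Table 1.
P_CORRECT = [0.23, 0.43, 0.35, 0.30, 0.28, 0.40, 0.25, 0.33, 0.27, 0.36]

def simulate_homogeneous(n_students=50_000, seed=538):
    """Simulation 1: every student has identical per-question odds, and
    the questions are answered independently of one another."""
    rng = random.Random(seed)
    tally = [0] * 11  # students grouped by number of questions correct (0-10)
    for _ in range(n_students):
        correct = sum(rng.random() < p for p in P_CORRECT)
        tally[correct] += 1
    return tally

tally = simulate_homogeneous()
for k, n in enumerate(tally):
    print(f"{k:2d} correct: {n / sum(tally):6.1%}")
```

Each simulated student’s score is just the sum of ten independent coin flips with the Table 1 weights, so the resulting distribution is a sum of Bernoulli trials — tightly clustered around the mean, which is the behavior the next paragraph describes.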
When we do this, the results are strikingly close to the ones Strategic Vision produced in Table 2:
But, here’s the problem: these are not realistic assumptions. Students in public high schools do not all have the same achievement levels. Moreover, the fact of having gotten one question right almost certainly does have some bearing on your odds of getting another right.
So, let’s adopt a more realistic set of assumptions. To do so, we’ll divide our simulated students into thirds. First, there’s a low-knowledge group; these students’ chances of getting each question right are diminished by 50 percent. Then, there’s a high-knowledge group; these students’ chances are increased by 50 percent. Finally, there’s a medium-knowledge group; these students’ chances are exactly as listed in Table 1. So, for instance, for the question about correctly identifying the two major political parties, students in the low-knowledge group have a 22 percent chance of getting it right, the medium-knowledge group a 43 percent chance, and the high-knowledge group a 65 percent chance.
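The three-group version is a small change to the first sketch: each student’s base probabilities get scaled by a group multiplier. As before, the per-question probabilities other than the 23 and 43 percent figures are placeholders, not the real Table 1 values.

```python
import random

# Illustrative per-question probabilities (only the 23% and 43% figures
# are from the article; the rest are placeholders for Table 1).
P_CORRECT = [0.23, 0.43, 0.35, 0.30, 0.28, 0.40, 0.25, 0.33, 0.27, 0.36]

def simulate_three_groups(n_students=50_000, seed=538):
    """Simulation 2: a heterogeneous student body split into thirds.
    The low-knowledge third has each base probability cut by 50%, the
    high-knowledge third has it boosted by 50% (capped at 100%), and the
    middle third answers at the base rates. For the two-parties question
    this gives roughly 22% / 43% / 65%, as in the text."""
    rng = random.Random(seed)
    multipliers = [0.5, 1.0, 1.5]
    tally = [0] * 11
    for i in range(n_students):
        m = multipliers[i % 3]  # cycle students through the three groups
        correct = sum(rng.random() < min(1.0, p * m) for p in P_CORRECT)
        tally[correct] += 1
    return tally

tally = simulate_three_groups()
```

Mixing three binomial distributions with different means fattens both tails: the low group contributes extra 0s and 1s, the high group extra 6s and 7s.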
If we simulate the results again with our now more heterogeneous student body, here is what we get:
In this case, the results provided by Strategic Vision do not do a very good job of capturing the likely distribution of responses. The simulated distribution is more spread out, with more students getting 0s and 1s but also more getting 6s and 7s, and so on. Meanwhile, the peak around 2-4 answers correct is less prominent.
A slightly more robust procedure might be to assume that students’ aptitude is normally distributed. In this last simulation, we will assign a bonus or penalty to each student’s chances of getting the questions right, where one standard deviation is equal to a bonus or penalty of +/- 40% on the chances of getting a particular question right (so a student one standard deviation above the norm would have a 32 percent chance of getting the “first President” question right, rather than a 23 percent chance). This produces a graph very much like the last one:
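The normally-distributed version replaces the three fixed multipliers with one drawn per student from a normal distribution; probabilities are clipped to stay between 0 and 1. The same caveat applies: eight of the ten base probabilities below are placeholders.

```python
import random

# Illustrative per-question probabilities (only the 23% and 43% figures
# are from the article; the rest are placeholders for Table 1).
P_CORRECT = [0.23, 0.43, 0.35, 0.30, 0.28, 0.40, 0.25, 0.33, 0.27, 0.36]

def simulate_normal(n_students=50_000, seed=538):
    """Simulation 3: aptitude is normally distributed. A student one
    standard deviation above the mean gets a +40% multiplier on each
    base probability (so 23% -> about 32% on the first-President
    question); probabilities are clipped to the [0, 1] range."""
    rng = random.Random(seed)
    tally = [0] * 11
    for _ in range(n_students):
        mult = 1.0 + 0.4 * rng.gauss(0.0, 1.0)
        correct = sum(
            rng.random() < min(1.0, max(0.0, p * mult)) for p in P_CORRECT
        )
        tally[correct] += 1
    return tally

tally = simulate_normal()
```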
I’m not sure if there’s any a priori way to know what the underlying distribution of responses “should” be. As a very rough guide, on the reading portion of the 2008 SAT, a single standard deviation corresponded to a difference of about 17 questions out of a 67-question test (ignoring the penalty for wrong answers), or about 25 percent of the total. The standard deviation implied by Strategic Vision on this citizenship test is only about half that — 1.3 questions out of a 10-question test, or 13 percent. But the two tests, of course, are not directly comparable.
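The comparison above boils down to expressing each standard deviation as a share of its test’s length:

```python
# SAT reading section: one standard deviation was about 17 questions
# on a 67-question test.
sat_sd_share = 17 / 67
print(f"SAT reading: {sat_sd_share:.0%} of the test")  # about 25%

# Strategic Vision's results imply a standard deviation of about 1.3
# questions on a 10-question test.
sv_sd_share = 1.3 / 10
print(f"Citizenship poll: {sv_sd_share:.0%} of the test")  # 13%
```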
Nevertheless, it seems distinctly possible that the students polled for this survey don’t exist anywhere in Oklahoma, but rather on a hard drive somewhere in Atlanta. The OCPA has undertaken a valuable exercise here. But they owe it to the hardworking students of Oklahoma to make sure that their contractor, Strategic Vision, didn’t flunk its own citizenship test.