Because the post I wrote Monday about a new tea party-related poll sparked quite a bit of controversy–not to mention insinuations that either I or the pollster is engaged in “hackery” of some form or other–I contacted the pollster, Dr. Christopher Parker# of the University of Washington’s WISER institute, so he could introduce himself to 538 readers, explain his methods and findings, and answer some of the respectful questions raised by thoughtful readers.

I wrote that post to provide a partial response to the emergence of a new meme–one deriving from, and citing as “evidence,” a recent Gallup demographic study–which asserts that, because tea party identifiers are similar in some (but not all) demographic characteristics to non-identifiers, their political views can therefore be taken as similar or “mainstream” or “not fringe” or “non-racist,” to borrow terms utilized by others in their interpretations of the Gallup results. Although people who are similar demographically often do have overlapping political beliefs, it’s fallacious to presume two groups with certain shared demographic characteristics automatically share the same attitudes. The way to determine that is to actually survey their attitudes, which Dr. Parker did.

In any case, here are the questions and answers from my email interview with Dr. Parker, with any emphases in the text belonging to him:

1. First, tell us briefly about your training and background and polling expertise.

I received my undergraduate degree in political science, cum laude, from UCLA. Under the direction of Michael Dawson (chair), along with John Mark Hansen and John Brehm, I earned my PhD in political science from the University of Chicago in 2001. I’ve since been on the faculty at the University of California, Santa Barbara, and held a Robert Wood Johnson postdoctoral fellowship at the University of California, Berkeley, where Jack Citrin was my principal advisor. On the principal investigator front, I conducted the California Patriotism Pilot Study (2002), from which I published a paper in Political Research Quarterly.

Since arriving at the University of Washington in 2007–where, though it doesn’t officially take effect until this Fall, I’ve been promoted with tenure to associate professor, and appointed the Stuart A. Scheingold Professor of Social Justice and Political Science–I have collaborated with my colleague, Matt Barreto, on the Washington Poll for the last three years (2007-2010), where he’s been the lead investigator. I have also published from this data in the Du Bois Review. We also collaborated on the current study, the Multi-State Survey of Race and Politics. This time, however, I assumed the role of lead investigator.

2. Next, tell us how you constructed the instrument and conducted the poll. Specifically, some readers want to know how questions were phrased and why you picked the states you did. There is also some confusion about how the feeling thermometer questions—about the degree to which respondents believe that African Americans or Latinos are “hardworking” or “trustworthy” or “intelligent”—were constructed. As I understand it, there is a precedent for this in the UMich/ICPSR American National Election Survey, but perhaps you could take a moment to explain to us how your questions conform or differ from that, if at all.

Much of the instrumentation for the survey was extracted from existing polls, word-for-word. For instance, the items assessing the extent to which blacks and Latinos are hardworking, etc., are taken directly from the General Social Survey (GSS), one of the finest survey instruments in the social sciences. Likewise, the gay rights questions on the survey were borrowed from the American National Election Survey (ANES), another social scientific jewel. In fact, I’d say that approximately 70% of the items on the Multi-State Survey of Race and Politics were recently on the ANES or GSS. Beyond the theoretical import of the items, I drew so heavily upon these surveys because I needed proven items, ones that had been thoroughly vetted. I also wished to have items that facilitated comparisons to other, more national surveys. Obviously, the tea party question is new.

This leads to the question of why I chose the states that are in the survey. Because Georgia, Michigan, Missouri, Nevada, North Carolina, and Ohio were battleground states going into the 2008 election cycle, Matt Barreto suggested that we should examine racial and political dynamics in those states. California, the seventh state, was chosen because we thought it wise to include a state in which the election was never in doubt. Thus, it provides a basis for comparison.

A final note on the methodology: The survey is drawn from a probability sample of 1006 cases, stratified by state. On average, it took 45 minutes to complete the survey; the survey had a 51% cooperation rate (COOP4). The study, conducted by the Center for Survey Research at the University of Washington, has a margin of error of plus or minus 3.1 percent, and was in the field February 8-March 15, 2010.
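(For readers curious where the plus-or-minus 3.1 percent comes from: it is consistent with the textbook margin-of-error formula for a simple random sample of 1,006 at 95% confidence. A minimal sketch, assuming the most conservative proportion p = 0.5 and ignoring any design effect from the stratification:)

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1006)
print(f"{moe:.1%}")  # prints "3.1%"
```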

3. Some readers were puzzled that you reported data for those who either strongly approve or strongly disapprove of the tea party movement. Could you say what share of respondents fall into that category and how each differs from the remaining subset of white respondents who neither strongly approve nor disapprove?

I understand why some readers are curious about support for the tea party among whites who are not on either end of the distribution. Here’s what I have: Based upon 354 valid cases for this item (30% say they never heard of the tea party or have no opinion), 19% (N = 66) strongly disapprove of the tea party; 17% (N = 59) somewhat disapprove of it; 32% (N = 112) somewhat approve of the tea party; and 33% (N = 117) strongly approve of it. (Of course, when those that have never heard of the tea party (30%; N= 157) are included, increasing the number of observations to 511, the cell sizes change: 13% strongly disapprove; 12% somewhat disapprove of it; 22% somewhat approve; and 23% strongly approve of it.)
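(As a sanity check, both sets of cell percentages above follow directly from the reported counts; a quick sketch reproducing the figures:)

```python
# Reported counts among white respondents with an opinion of the tea party
counts = {"strongly disapprove": 66, "somewhat disapprove": 59,
          "somewhat approve": 112, "strongly approve": 117}

valid = sum(counts.values())   # 354 valid cases
total = valid + 157            # 511 once "never heard / no opinion" is added back

for label, n in counts.items():
    print(f"{label}: {n / valid:.0%} of valid cases, {n / total:.0%} of all cases")
```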

More details on how these categories differ on a range of relevant issues will follow in the coming days. The results will be available on the WISER website at the University of Washington.

4. You computed predicted probabilities as well, correct? Can you provide us with those results, and your interpretation and explanation of them?

I understand the skepticism associated with the simple cross-tabs that have been presented thus far. I don’t wish anyone to have the impression that I, or anyone else, would draw firm conclusions based on the bivariate relationship between support for the tea party and the items listed in the table. Hence, I’ve estimated a few models in which I control for, among other things, the effects of partisanship and ideology. The relationships hold.

I’ll draw upon three for illustrative purposes. For the first two models, the dependent variables are ordinal, so I report predicted probabilities. The dependent measure for the third model is an index, and is therefore continuous. For this, I estimate a simple regression model. Controlling for political ideology and party identification, support for the tea party (as it goes from its minimum to maximum value) results in a 23% increase in the likelihood that whites believe that “recent immigration levels will take jobs away from people already here.” Moreover, support for the tea party decreases support, by 22%, for gay or lesbian adoption. Support for the tea party also promotes racism. In this example, I draw on Kinder and Sanders’ (1996) work on racial resentment. I use the following four items to represent racial resentment: “Irish, Italians, and many other minorities overcame prejudice and worked their way up. Blacks should do the same without any special favors”; “Generations of slavery and discrimination have created conditions that make it difficult for blacks to work their way out of the lower class”; “Over the past few years blacks have gotten less than they deserve”; and “It’s really a matter of not trying hard enough; if blacks would only try harder, they could be just as well off as whites.” (alpha = .75) I use this instead of the stereotype items because it better captures the contours of more modern racism, one in which whites perceive blacks in violation of traditional American values.

In any case, racial resentment increases by approximately 25% as support for the tea party increases from its minimum to its maximum value. Again, each model controlled for possible confounds associated with partisanship and ideology. In sum, based upon this analysis, the data suggest that increasing support for the tea party is likely associated with xenophobia, homophobia, and racism.
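(The alpha = .75 Dr. Parker reports for the four-item resentment scale is Cronbach’s alpha, a standard reliability coefficient for multi-item indices. A minimal sketch of the computation, using made-up responses purely for illustration; only the formula is standard:)

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale: items is a list of k response vectors,
    one vector per survey item, all over the same n respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(item) for item in items)
    # each respondent's total scale score across the k items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Four perfectly consistent items yield alpha = 1.0; real scales fall below that.
print(cronbach_alpha([[1, 2, 3, 4, 5]] * 4))  # prints 1.0
```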

5. Pulling it all together, what can we safely and confidently conclude about those who identify with the tea party movement and those who do not? Are their attitudes fundamentally different from other whites, from the American population as a whole, and if so, how so?

One way in which to view these preliminary results is that we should remain cautious, and not jump to firm conclusions. I say this, first, because the sampling frame I use differs from, say, recent polls conducted by Pew, Quinnipiac, the Washington Post, and USA Today/Gallup. Indeed, my results are relevant only to the states in which the survey was conducted, four of which (NV, MO, GA, and NC) voted for the Republican presidential candidate in at least seven of the last ten election cycles. Perhaps this is why my results appear at variance with national polls.

Another reason to proceed with caution is that I don’t have an item that directly measures tea party membership. Indeed, support for the tea party isn’t the same as accounting for group membership, much less group identification, both of which tend to powerfully predict attitudes and behavior. With that in mind, it’s entirely possible that I’ve underestimated the effect of the tea party on political attitudes, and will likely do so in future analysis of its effect on political behavior.

Moreover, I make no claim that the data are representative of the country. Rather, they are representative of the states that were sampled. Appropriate weights, based upon the American Community Survey, have been constructed.

6. Are you aware of other polling out there that either confirms or disconfirms your analyses or any portions thereof?

At first glance, there are a few polls that may corroborate my findings. Recent polls conducted by Pew, Quinnipiac, the Washington Post, and USA Today/Gallup all ask questions relevant to the Tea Party movement. However, these surveys lack questions that tap racism. Pew and Quinnipiac ask questions relevant to gay rights, but immigration isn’t present. So, I believe it possible for these polls to answer questions concerning the relationship between gay rights and support for the tea party, but they cannot address the association between the tea party and racism or immigration.

Turning to more academic surveys, it’s possible to enlist the 2008 ANES, for instance, as a means of corroboration. But, for any items that bear upon race, immigration, or gay rights, it’s likely the case that too much has happened between then and now to make reliable predictions. I suspect, however, that when data is collected for the next ANES or GSS, it will then be possible to check my results.

(#As a matter of full disclosure, I had never communicated with nor even heard of Dr. Parker until I wrote about his poll. I do know his sometime-collaborator Matt Barreto; we have attended some political science conferences and/or sat on panels together. Other than my professional/personal acquaintance with Dr. Barreto–and he and I do not research or co-author together–I have no affiliation with either the University of Washington, WISER, Dr. Parker, or anyone else involved in the poll.)
