4/29/2008

On the Folly of Polling

One of the more interesting catchphrases I've been hearing in the news lately is the "poll of polls," which averages various polls over a certain time period to create a brand new number that is somehow supposed to be the most authoritative marker of where the race between the three remaining candidates stands. CNN and Real Clear Politics are regular practitioners of this so-called statistical "analysis." However, having studied a little statistics and research methods myself, I cannot believe these reportedly reputable media organizations are allowed to get away with this. Polls vary so widely in credibility and bias that averaging them into a consolidated barometer of public opinion is a fool's errand. Here's why.

Point 1: Wording matters. Consider these two questions:

Question A: Whom do you support for President--John McCain or Barack Obama?

Question B: Whom do you support for President--Arizona Senator John McCain or Illinois Senator Barack Obama?


The names are in the same order, but Question B includes extra information that could alter the responses. People who think Obama may be light on experience may hear the word "Senator" before his name and think more favorably of him. Others may hear the word "Senator" before John McCain's name and think less of him because he's running as a "maverick," not a Washington insider. Surely there are polls circulating that use both question formats. Unfortunately, the data are not interchangeable.

Point 2: Subtlety matters too. Look at these three questions:

Question A: Do you support Hillary Clinton or Barack Obama for President?

Question B: Do you support Barack Obama or Hillary Clinton for President?

Question C: Whom do you support to succeed George Bush--Hillary Clinton, John McCain, or Barack Obama?


Poll X may use the Question A format while Poll Y may use the Question B format. The difference between the two may seem subtle, but in terms of statistical analysis and psychology, it makes a big difference. Some respondents are susceptible to the primacy effect, in which items that appear first in a list are more salient. Others are susceptible to the recency effect, in which the last item in a list is the most salient. Some polls rotate the order of the names to cancel these effects out (see the sketch below), but not all of them do, thus leading to potentially biased data.
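The rotation fix itself is simple. Here is a minimal sketch in Python of the idea (my own illustration of the technique, not any pollster's actual script):

```python
import random

candidates = ["Hillary Clinton", "Barack Obama"]

def ask():
    # Randomize name order per respondent so neither candidate
    # systematically benefits from the primacy or recency effect.
    order = random.sample(candidates, len(candidates))
    return f"Do you support {order[0]} or {order[1]} for President?"

# Over many respondents, each ordering appears about half the time.
for _ in range(4):
    print(ask())
```

A poll that always reads the names in one fixed order bakes one of the two effects into every single interview.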

Question C is an even more egregious example because by including the words "succeed George Bush," the question inadvertently "frames" the responses, providing extra context that could influence respondents' decisions before the response options are even presented. Voters who disapprove of George Bush may hear "succeed George Bush" and be more inclined to choose a Democrat in the poll even though they are really more likely to support John McCain. Voters who approve of him may be more inclined to support the Republican even though they may have a greater affinity for one of the Democrats this time around. Or perhaps the word "succeed" conjures up the importance of leadership. Or personality. Or electability. Why should polls that engage in such framing and polls that don't be treated as equals?

Point 3: Polls with smaller sample sizes should not be weighted as heavily as polls with larger sample sizes. In general, the larger the sample, the smaller the margin of error; for a simple random sample, the margin of error shrinks in proportion to the square root of the sample size. The only truly accurate poll would be one that asks everyone in the country about the presidential race. But that's obviously impossible, so polling organizations use samples to measure what a slice of the electorate is thinking. But if this "slice" consists of 400 people, why should it receive the same weight in the "poll of polls" average as a "slice" of 1,200 people? That's bad statistics. It wouldn't pass muster in any university-level statistics course, so why should it pass muster with the media?
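To make the arithmetic concrete, here is a minimal sketch in Python (the poll sizes and percentages are hypothetical, chosen only for illustration) of how the margin of error depends on sample size, and how a sample-size-weighted average differs from the equal-weight average a "poll of polls" uses:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical polls: (sample size, % support for one candidate)
polls = [(400, 47.0), (1200, 42.0)]

for n, pct in polls:
    print(f"n={n}: {pct}% +/- {100 * margin_of_error(n):.1f} points")
# n=400:  47.0% +/- 4.9 points
# n=1200: 42.0% +/- 2.8 points

# Equal weight: every poll counts the same, however small
naive = sum(pct for _, pct in polls) / len(polls)

# Weighting by sample size: every respondent counts the same
weighted = sum(n * pct for n, pct in polls) / sum(n for n, _ in polls)

print(f"equal-weight average: {naive:.1f}%")     # 44.5%
print(f"sample-size weighted: {weighted:.1f}%")  # 43.2%
```

The 400-person poll carries a margin of error nearly twice as wide as the 1,200-person poll's, yet the equal-weight blend lets it pull the average a full point.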

Point 4: A "registered voter" is different from a "likely voter." Some polls only measure the opinions of registered voters who may or may not vote in the election. Likely voters are more likely to participate in the election and are therefore more likely to be informed about the candidates. Other psychological variables may be at play also that set likely voters apart from registered voters. Are registered voters more likely to simply "guess" on a poll because they don't have any strong feelings either way about any candidate? Why should their responses even be compared with those of likely voters at all whose opinions are more likely to be informed? And what about unregistered voters who may decide to register and vote later on?

Point 5: Methodology matters. When are these polls being conducted? Who is being polled? Are pollsters questioning whoever answers the telephone, or the head of the household? Who is more likely to answer the telephone at 10:30 in the morning? Who is more likely to answer at 8:30 in the evening? And how does calling during dinnertime affect people's responses? Surely these variables all have at least some subtle impact on the results. A poll with an overrepresented sampling of housewives might yield different results from a poll that oversamples single men. And what do pollsters do about voters who don't have landline phones? Many younger people only have cell phones, which makes it harder for pollsters to reach them. Where do their opinions factor into the polling data? Are the calls conducted by live interviewers or by robocalls? Are respondents speaking to pollsters directly or pressing 1 for McCain and 2 for Clinton?

Point 6: Timing matters. Why should a poll taken before a major political development be averaged with a poll taken after it? Does a poll taken four days before a major event by one pollster have any relevance once a different pollster with a different methodology has polled two days afterwards? And what about polls taken one day before, the day of, and one day after this event? Some polls are snapshots that measure public opinion on a single day. Others gather data over a three-day period. How can these polls be given equal weight (see the sketch below)? This further dilutes the significance of the averages these "polls of polls" purport to represent.
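To illustrate, here is a hypothetical sketch (the dates, the numbers, and the three-day half-life are my own assumptions, not any aggregator's actual method) contrasting the equal-weight blend with a simple recency-weighted alternative:

```python
from datetime import date

# Hypothetical polls around a major event on April 22: (end date, % support)
polls = [(date(2008, 4, 18), 49.0),   # fielded four days before the event
         (date(2008, 4, 24), 44.0)]   # fielded two days after

# Equal weight: the "poll of polls" blends the stale poll right in
equal = sum(pct for _, pct in polls) / len(polls)

# One alternative: halve a poll's weight for every 3 days of age
newest = max(d for d, _ in polls)
def weight(d):
    return 0.5 ** ((newest - d).days / 3)

recency = (sum(weight(d) * pct for d, pct in polls)
           / sum(weight(d) for d, _ in polls))

print(f"equal-weight average: {equal:.1f}%")    # 46.5%
print(f"recency-weighted:     {recency:.1f}%")  # 45.0%
```

Even this crude decay scheme moves the number substantially, which is the point: when timing is ignored, the pre-event poll quietly props up a figure that the post-event data no longer support.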

Point 7: Some people don't feel strongly about the remaining candidates. Consider this question:

Whom do you support for President? Hillary Clinton, John McCain, or Barack Obama?

If you don't support any of them, will the pollster treat you as "none of the above" or press you to choose whom you are most likely to support at present? If the latter, that would presumably benefit the candidate with the highest name recognition, which automatically biases the results. And because different polling organizations use different methods of prompting respondents to choose a candidate even when they really have no preference, that further muddies the "poll of polls" results. Is a 47% level of support for Hillary Clinton really 47%? Or is it 42% once the uncommitted leaners are set aside, as the arithmetic below shows? Would you want to build your "poll of polls" average on such shaky data? And what do you do if you are a Libertarian or a Green?
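To see how much the prompting convention alone can move the number, consider this hypothetical arithmetic (the counts are invented to match the 47%/42% example above):

```python
# Hypothetical breakdown of 1,000 respondents for one candidate:
n = 1000
firm = 420      # respondents with a real preference for the candidate
leaners = 50    # no strong preference, but pressed into naming someone

# A poll that presses leaners reports 47%; a poll that accepts
# "none of the above" reports 42%. A "poll of polls" that averages
# the two is blending incompatible definitions of "support."
pressed = 100 * (firm + leaners) / n
unpressed = 100 * firm / n
print(f"leaners pressed:  {pressed:.0f}%")    # 47%
print(f"leaners excluded: {unpressed:.0f}%")  # 42%
```

Two polls can survey identical electorates and still land five points apart purely because of how they handle the undecideds.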

It seems that people like dissecting polls so much because they are looking for any indication of trouble on the horizon for a particular candidate. Polls also provide fodder for pundits to overanalyze and use to frame the next 24-hour news cycle. "The horserace" is fun for pundits, journalists, and political junkies everywhere, but given the overt methodological flaws in the polls they obsess over, they might be better off keeping their bombast in check, because these polling averages and "polls of polls" simply don't hold water.

2 comment(s):

Brett said...

"Averaging polls"? It doesn't surprise me that CNN would do this, but Real Clear Politics? I'm disappointed; I had a professor last semester who actively runs a highly-respected polling firm in my home state (Dan Jones of Dan Jones & Associates), and I remember him telling that this kind of thing was bad mojo from a polling perspective. *

*He didn't use that phrase (he's way too old for that), but that's how I interpreted it.

On a sidenote, do you plan on doing another post pretty soon, particularly with Reverend Wright's rather poorly-timed (for Obamamaniacs) "response" out there?

Freadom said...

Another excellent post. I've attempted to tackle this subject before, but your post blows mine out of the water.