Monday, September 01, 2008

How Polls Work – And How They Do Not Work

In an earlier article, a reader noted that he was suspicious of CNN’s “Poll of Polls”. Frankly, he was right to be. What CNN, Real Clear Politics, and other media outlets do is take poll releases from various polling groups within a certain time range and aggregate them into a sort of consensus. That is a very bad idea, however, if your intention is to accurately reflect the opinion of the general public. It’s a bit like saying that if you take everyone’s favorite version of spaghetti and mix them all together, you will get a really great-tasting batch of spaghetti. The odds are you will get a mess that isn’t worth the effort, and that is what happens when you mix poll results. Polls, it should be noted, are the product of the groups and agencies which create them, and each reflects a specific methodology that is usually similar to, but never exactly the same as, the methodology used by other polling groups. That difference is why the results cannot be mixed with confidence, and the more polls that are mixed, the less reliable the resulting report becomes. Polls are best described as snapshots of opinion on one question at one specific point in time and place; they are not even movies, much less reality. An example of this can be seen by looking at the most reliable of agencies, the Gallup Organization.
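
To make the point concrete, here is a minimal simulation of what happens when readings from firms with different methodologies are averaged together. The house effects, sample size, and true support level are entirely made up for illustration; the point is only that the composite inherits whatever net bias the mix of methods happens to carry.

import random

random.seed(1)

TRUE_SUPPORT = 0.50                      # assumed true share of the electorate
HOUSE_EFFECTS = [0.00, +0.03, -0.05]     # made-up methodological biases, one per firm
SAMPLE_SIZE = 800                        # assumed respondents per poll

def run_poll(bias):
    # Simulate one poll: a random sample of respondents plus the firm's house effect.
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    return hits / SAMPLE_SIZE + bias

readings = [run_poll(b) for b in HOUSE_EFFECTS]
composite = sum(readings) / len(readings)

print("Individual polls:      ", [round(r * 100, 1) for r in readings])
print("Poll-of-polls average:  %.1f%%" % (composite * 100))
print("True support:           %.1f%%" % (TRUE_SUPPORT * 100))
# Even with one unbiased poll in the mix, the composite is pulled away from the
# true value by the other firms' house effects; averaging blends the biases, too.

The averaging sharpens nothing; it simply blends whatever errors each firm brought with it.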

On August 24, Gallup showed Obama and McCain tied at 45% support each (with 10% undecided). But three days later, Gallup showed Obama with a six-point lead, 48% to 42%, and today Obama still leads by six points, 49% to 43%. The Gallup people seem to be claiming that Obama has increased his support while McCain has lost some. A look at the history of convention “bounces”, however, would warn against that assumption. If history holds true, McCain should enjoy a similar “bounce” after the GOP convention, yet that would not necessarily indicate a stronger campaign; it would merely reflect the focus the convention draws. Even within the stable methodology of a well-respected poll, therefore, it is not valid to assume that changes in the numbers are indicators of a weaker or stronger election position.

With that said, it is important to consider other polling results. An August 30 release from Zogby, for instance, shows McCain at 47% support and Obama at 45%, at the same time that Gallup showed Obama ahead by six points and holding steady. How can two major polls say such different things? Part of it is simple mathematics: the gap between the two reported leads is only eight points, which means each poll need only be off by about four points, in opposite directions, to account for it. If either poll had a respondent pool of 500 people or less, a four-point error is within the ordinary margin of error for such polling. What else needs to be considered is that even when the same polling group takes repeated polls, the randomness of the respondent pool means, by definition, that the people answering one poll are not the same people who answered the earlier one, and even when conditions are the same and the same careful rules are applied, different people will probably produce different results. Also, as I have noted before, polling groups carry their own biases for or against the people their polls focus on. As a result, a composite of poll results cannot, by its character, produce an accurate or trustworthy result. At best, one of the polls might accurately reflect opinion at that moment, but even then, mixing in the results from all the other polls would only ensure that the aggregate was wrong to some degree.
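
For readers who want the arithmetic spelled out, here is a quick check, assuming, purely for illustration, that each poll sampled about 500 respondents:

from math import sqrt

n = 500      # assumed respondent pool for each poll
p = 0.5      # worst-case candidate share for computing the error
z = 1.96     # multiplier for 95% confidence

moe = z * sqrt(p * (1 - p) / n)    # sampling error on a single candidate's share
print("Margin of error per poll: +/- %.1f points" % (moe * 100))   # roughly 4.4

# Gallup reported Obama +6, Zogby reported McCain +2: an eight-point gap
# between the two leads. If each poll is off by roughly four points in
# opposite directions, ordinary sampling error alone accounts for the gap.
gallup_lead, zogby_lead = +6, -2
print("Gap between reported leads: %d points" % (gallup_lead - zogby_lead))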

Polls are not effective predictors of elections. The losing candidates of many past general and primary campaigns can point to any number of polls which indicated they would win. A candidate who trusts polls to indicate the precise amount and nature of their public support is following the path of Tom Dewey. That is not to say, however, that polling is worthless. When a poll follows consistent methodology, asks consistent questions, and weights its demographics according to Census norms and the standards of the National Council on Public Polls (or the AAPOR), the results can be applied to illustrate trends and voter interest in key issues. Also, while the public is familiar with the polls which get published in their newspapers and favorite news sites, it should be noted that both the Obama and McCain campaigns have paid private polling firms to identify target demographics and battleground states. Obama has spent almost 19 million dollars on private polling, while McCain has spent about 770 thousand dollars. It would be fair to assume, I think, that these firms are far more detailed in their questions and more specific in their target demographics, and the amount of money spent suggests significant confidence in the value of effective polling. The real question, its answer as yet uncertain, is how to know which questions, which method, and which weighting are valid for the real condition.
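
As a rough illustration of the kind of demographic weighting mentioned above, here is a simple post-stratification sketch. The age groups, census shares, and responses are invented for the example, not drawn from any actual poll or census table.

# Weight each respondent so that their demographic group counts in proportion
# to its (hypothetical) census share rather than its share of the raw sample.

census_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}   # assumed population shares

# Raw sample of (age_group, supports_candidate); it over-represents older
# respondents, as phone polls often do.
sample = (
    [("18-34", True)] * 40 + [("18-34", False)] * 60
    + [("35-54", True)] * 150 + [("35-54", False)] * 150
    + [("55+", True)] * 250 + [("55+", False)] * 350
)

n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n for g in census_share}

def weight(group):
    # Correction factor that restores the group to its census share.
    return census_share[group] / sample_share[group]

raw_support = sum(1 for _, yes in sample if yes) / n
weighted_support = (sum(weight(g) for g, yes in sample if yes)
                    / sum(weight(g) for g, _ in sample))

print("Unweighted support: %.1f%%" % (raw_support * 100))
print("Weighted support:   %.1f%%" % (weighted_support * 100))

The mechanics are simple; the hard part, as noted above, is knowing which weighting actually matches the electorate that will show up.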
