Thursday, March 09, 2006

Polls And Politics


Well, John Zogby is nothing if not a firestarter. His latest imitation poll purports to show that most American soldiers want to just call it quits and go home. As I wrote earlier, and as others have admirably demonstrated, that poll is so obviously flawed (if not an outright fraud) that it raises doubts about the accuracy of opinion polling in general. And so today’s column addresses the science of polling.

You may be familiar with basic probability. That is, if I flip a fair coin, there is a 50-50 chance between heads and tails. And you may have heard that if I flip a coin five times and it comes up heads all five times, there is still only a 50% chance of heads on the sixth flip; each flip is independent of the ones before it. Of course, that’s also where human opinion comes in, because if a coin comes up heads five times in a row, I’m going to become suspicious about the weighting of that coin. A fair coin will come up heads five straight times only about 3% of the time (1 chance in 32), so if it does, I’m likely to start thinking that all things are not equal. And while that’s just my opinion, that opinion carries its own weight. The problem for Science is measuring opinion in a consistent and empirical manner. The only solution at hand is to review poll results in the context of the only analytical measure appropriate for such comparison: comparing election poll predictions to the actual election results over a sustained period of time, and under comparable conditions and methodology.
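Both claims in that paragraph are easy to verify with a quick simulation (the trial count and random seed below are arbitrary choices, just for illustration): five heads in a row is rare, but once it has happened, the sixth flip is still even money.

```python
import random

# Probability that a fair coin lands heads five times in a row:
p_five_heads = 0.5 ** 5
print(p_five_heads)  # 0.03125, i.e. 1 chance in 32

# Independence check: after five heads, the sixth flip is still 50-50.
random.seed(0)
runs_of_five = next_heads = 0
for _ in range(200_000):
    flips = [random.random() < 0.5 for _ in range(6)]
    if all(flips[:5]):          # the first five flips were all heads
        runs_of_five += 1
        next_heads += flips[5]  # count how often the sixth is also heads
print(runs_of_five)               # roughly 200,000 / 32, about 6,250 runs
print(next_heads / runs_of_five)  # close to 0.5
```

The simulation makes the point of the paragraph concrete: rarity of the streak is a fact about five flips taken together, not about the sixth flip on its own.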

It has always been human nature to want to know the future. The ancient Egyptians, and later the Greeks, were noted for pretty much making an industry out of omens and oracles. Nice gig, with certain restrictions. Fast forward to the 20th Century, and your average politico wants to be hip, to be happening, to be the guy in the know. That is, a lot of them want to know what they must and must not say when campaigning for office, or when giving speeches on a given issue. And of course the public is hungry to know what’s in fashion, so it was only natural that the major newspapers started giving prominent mention to the Gallup poll and other big-time polls. And before long, polls started getting reported as news in themselves. When Walter Cronkite wanted to claim the Vietnam War was being lost, he not only said so as his own opinion, he pointed to carefully managed CBS polls which seemed to say the same thing. Throughout modern political history, officials and candidates have spun and sponsored polls to support their position and counter the opposition. Fortunately, these days there is also the New Media, to catch out frauds like Zogby and correct the bias in polls from other MSM outlets. While it can be annoying to have to cut apart polls one after another to show why they do or do not accurately reflect the national mood, the need to do so is greater than ever.

The first thing I would recommend any reader do when noticing a poll is to ask why the poll is being done. That is, do you get the sense that the poll seeks to discover the existing mood of the nation, or does it seem to want to direct the mood in a certain direction? Many polls give away a bias in the way their headlines read; if the headline statement is not completely supported by the results of the actual poll, a bias exists. In some cases the claims made in the headline are actually contradicted by the details of the poll itself. For instance, whenever a poll cites a President’s “best” or “worst” approval ratings, watch out, because those polls seldom compare the given President to other Presidents. Also, Presidents are often given high or low approval ratings out of a sense of tension; that is, a poor President can enjoy good ratings simply by avoiding controversy and difficult decisions, while a good President may drop in the ratings because he takes on the tough issues.

Another trick in polls is the demographics. Blog readers are getting to be pretty sharp at noting unbalanced respondent pools; the 2004 election was pretty evenly split, with 37% each for Democrats and Republicans, and the remaining 26% self-identified Independents. Any poll weighting by party affiliation or identification ought to use the same balance in its demographics, but none of the major polling groups does this. Also, as we have seen, some deliberately overstate representation by minorities, by urban respondents, and by under-30 adults, in order to present their findings in a Liberal-friendly light. I think this happens for three reasons. First, the United States had been leaning Liberal for a long time, and it’s taking the polling groups a while to understand that the Reagan Revolution was a paradigm shift in cultural focus. That is, it’s a permanent thing. Second, for whatever reason a large number of the polling groups are managed by Liberals. I think this has more to do with the urban-voter effect (remember that the major polling groups are based in large cities, which tend to vote more Liberal than the country as a whole), but in practice it means that a pro-Liberal lean feels natural to these polling groups. The better groups are working on correcting this problem, but it’s still there to some extent. And third, it sells well to be Liberal in a poll. That’s because Conservatives, nominally, prefer facts to opinion, while Liberals have always been interested in how the wind is blowing.
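The balance check described above takes about five lines to run yourself. The respondent counts below are hypothetical, not taken from any real poll; the benchmark is the 37/37/26 split from the 2004 election cited above.

```python
# Hypothetical respondent counts from a poll's released internals.
sample = {"Dem": 420, "Rep": 330, "Ind": 250}
# 2004 party-identification benchmark cited in the text.
benchmark = {"Dem": 0.37, "Rep": 0.37, "Ind": 0.26}

total = sum(sample.values())
for party, count in sample.items():
    share = count / total
    skew = share - benchmark[party]
    print(f"{party}: sample {share:.0%} vs. benchmark "
          f"{benchmark[party]:.0%} ({skew:+.0%})")
```

Run against these invented numbers, the pool shows Democrats five points over-represented and Republicans four points under, exactly the kind of lopsided pool a reader should flag before trusting the topline.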

But this hardly means that you cannot use a poll’s results to get a good sense of direction and opinion. For one thing, most polls, whatever their methodology, are consistent within their own scope, the notable exception being Zogby. Also, even when a poll is biased, if it publishes its internal data (the CBS News/New York Times poll, for example), then you have solid numbers with which you can actually reverse-engineer the poll and reconstruct the results from the hard data. Readers will remember that even when a poll had results I liked, if I could not see the internals, I could not call that poll credible.
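To sketch what that reverse-engineering looks like (every number here is invented for illustration, not drawn from any actual CBS/NYT release): given approval within each party group and the poll’s own lopsided sample shares, you can recompute the topline under a corrected set of weights.

```python
# Invented internals: approval rating within each party group.
approve = {"Dem": 0.30, "Rep": 0.80, "Ind": 0.50}
# The poll's own (lopsided) sample shares, and corrected weights.
reported  = {"Dem": 0.42, "Rep": 0.33, "Ind": 0.25}
corrected = {"Dem": 0.37, "Rep": 0.37, "Ind": 0.26}

# Topline approval is just the weighted average of the group results.
topline    = sum(reported[g] * approve[g] for g in approve)
reweighted = sum(corrected[g] * approve[g] for g in approve)
print(f"published topline: {topline:.1%}")    # 51.5%
print(f"reweighted:        {reweighted:.1%}") # 53.7%
```

The two-point swing between the published and reweighted numbers is exactly why the internals matter: without them, there is no way to separate the opinion being measured from the pool it was measured in.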

Finally, you should also look at any poll with a critical eye as to how it was constructed. In the case of the Zogby poll which claimed the U.S. Military wants to quit, for example, a number of red flags go up as soon as one considers how Zogby went about conducting his poll. Polls can be scientifically accurate, but only when established procedures to confirm random response are consistently applied. To have a truly viable scientific sample, you need to interview at least a thousand respondents who meet the survey’s conditions, inside a three-to-five-day period, at different times of the day, and in a variety of locations to avoid a demographic rut. In a national survey, that usually means random-digit-dial (RDD) sampling, with an allowance for non-response among eligible contacts as well. It also means avoiding concentrations in location, time, or demographic categories. That’s a tall order, and frankly a lot of polling groups fudge a bit, accepting less than a statistically valid sample, or accepting demographic imbalance and depending on an arbitrary weighting to “correct” the results. In the case of soldiers in Iraq, the task becomes unwieldy very fast. Contacting a thousand respondents is one thing, but a valid reading of the opinions of our fighting men means reaching combat units, something Zogby clearly did not attempt, as evidenced by the demographic data released. Further, there is evidence that Zogby’s respondents were clustered in geographic locations, which further dilutes the viability of the results. In plain English, Zogby either deliberately ignored the soldiers whose opinion was most relevant to the question of combat morale, or he mixed pools to get the flavor he wanted. He did not achieve anything like a truly random sample, which by itself negates any claim to statistical accuracy. I won’t even go into the question phrasing, the order of the questions, or the use of a sub-contractor whose motives might reasonably be called into question. The poll simply failed to meet basic criteria for credibility, as is sadly common in Zogby polls these days.
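The arithmetic behind those sample-size and clustering complaints can be sketched as follows. The cluster size and intra-cluster correlation below are assumed values chosen for illustration, not figures from the Zogby poll; the design-effect formula is the standard survey-sampling one.

```python
import math

def moe(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A clean random sample of 1,000 gives the familiar +/-3 points.
print(f"n=1000: +/-{moe(1000):.1%}")  # about +/-3.1%

# Clustering inflates the error. With m respondents per cluster and
# intra-cluster correlation rho, the design effect is 1 + (m - 1) * rho,
# and the effective sample size shrinks accordingly.
m, rho = 50, 0.10           # assumed values for illustration
deff = 1 + (m - 1) * rho    # = 5.9
n_eff = 1000 / deff         # about 169 effective respondents
print(f"effective n: {n_eff:.0f}, +/-{moe(n_eff):.1%}")  # about +/-7.5%
```

Under these assumptions, a clustered thousand-respondent poll carries the error of a simple random sample of about 169 people, which is why geographic concentration is not a cosmetic complaint but a statistical one.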

Read the polls, take information from them, but don’t worship them.

2 comments:

Lisa said...

Zogby used to be a very reliable pollster. However, in recent years, he has played a lot of games with his polls, imo. I could be wrong.

http://madhatter.7.forumer.com/index.php

Anonymous said...

Thanks for your first rate analysis of Zogby's skewed poll. His results should come as no surprise to anyone that understands his politics.

This entry from Wikipedia suggests Zogby's results are driven by more than sampling:

"Before polls had even closed in the 2004 presidential election, Zogby predicted a comfortable win for John Kerry (311 electoral votes, versus 213 for Bush, with 14 too close to call), saying that 'Bush had this election lost a long time ago,' adding that voters wanted a change and would vote for 'any candidate who was not Bush.' While admitting that he was mistaken, Zogby did not admit any possible flaws in his poll methods, insisting that his predictions were all 'within the margin of error.' Meanwhile, opponents charged that his calling the race for Kerry while polling was still going on may have been a cynical attempt to depress the turnout."

This Wikipedia entry is pretty telling also:

"He describes himself as a liberal Democrat."

John Zogby's brother James, founder of the Arab-American Institute, is an associate of Al Gore. The Arab-American Institute web site contains this biographical info:

"...Zogby was elected a co-convener of the National Democratic Ethnic Coordinating Committee (NDECC), an umbrella organization of Democratic Party leaders of European and Mediterranean descent. On September 24, 1999, the NDECC elected Dr. James Zogby as its representative to the Democratic National Committee's Executive Committee. In 2005 he was appointed as chair of the DNC’s Resolutions Committee."

I wasn't surprised to read at Wikipedia that John sometimes employs Jim part-time for polling. The only thing that surprises me is that a Zogby poll has any credibility at all.