Robo-polls officially endorsed

The ecumenical product of a committee that included academics as well as public and private pollsters, this study is the best systematic analysis of what works and what doesn’t for pollsters since Irving Crespi’s useful but now almost-forgotten 1988 book for the Russell Sage Foundation, Pre-Election Polling: Sources of Accuracy and Error.

The report acknowledges that the widely discussed errors of pollsters in calling the New Hampshire Democratic primary provided impetus for the project. But thankfully, the analysis cast a wider net, looking at polling in both parties’ primaries in four states: New Hampshire, South Carolina, California and Wisconsin. The committee also looked at a wider range of polling organizations than any prior study of this type, notably treating the newcomer robo-polls that use IVR (Interactive Voice Response) as seriously as old-line interviewing operations like Gallup. The work of 21 pollsters in 35 contests was scrutinized.

The committee’s work was not always facilitated by the pollsters themselves. While CBS, Field, Gallup, SurveyUSA and a few others were kind enough to provide the study with microdata sets, interviewer variables and weighting cookbooks, some “outlaws” like Zogby, Research 2000 and Strategic Vision didn’t play nice, withholding some of what was requested. Whether their non-cooperation stemmed from scarce resources or a desire to cloak “secret sauce” methods, the results are less complete because of their failure to be collegial.

Still, the analysis the committee was able to undertake offers some surprising conclusions.

Regarding live interviewing versus IVR, the report says, “We found no evidence that one approach consistently outperformed the other — that is, the polls using CATI [live interviews] or IVR were about equally accurate.” Believe me: I never thought I’d live to see an AAPOR document admit that.

Regarding gaps in cell phone coverage by pollsters, a major area of concern, this study adds to the chorus of “What, me worry?” findings released previously. Says AAPOR, “We found no strong evidence that the gap influenced primary estimates in any meaningful way.” But the surrounding discussion makes clear that this observation extends to primaries and not necessarily to general elections, in which younger, cell phone-only voters are more likely to participate. The data behind this conclusion are also thinner than those behind some of the report’s other findings.

The researchers conclude that calling protocols are influential. Pollsters who make more calls to try to reach the respondents originally selected for the sample are more accurate. Clients need to hear this. Some unsophisticated poll consumers think a “good poll” is a quick one. I once told a client his interviews would take three nights. He asked, “What’s the problem? Don’t you have enough interviewers to go ahead and knock this out?” I explained about callbacks and the biasing dangers of always substituting an easy-to-reach respondent for a hard-to-reach one. He looked at me like I was making that up to cover an interviewer shortage.

The report also raises some interesting ideas about the ordering of names on ballots, suggesting that pollsters’ practice of rotating the order of names may induce errors when the actual ballot lists them in a fixed order. The general point is well taken, but the discussion fails to appreciate that a ballot read aloud differs from a printed one. Research suggests that the first position on a printed ballot confers some small advantage. But when a ballot is read out loud, what is the “first” position? Studies suggest that both primacy (being read first) and recency (being read last) can confer an advantage.

Overall, this report makes a terrific contribution to what we know about the art and science of polling. Every candidate, committee and operative should read it carefully.

Hill is director of Hill Research Consultants, a Texas-based firm that has polled for GOP candidates and causes since 1988.