‘Robo-polls’ don’t lie

The Beltway hates change. Change implies a shift of power, and D.C. hates nothing more than surrendering power. 

And thus the town resists the advent of interactive voice response, or IVR, polling — the “robo-pollsters” that use automated phone calls and ask respondents to press buttons on their phones. Somehow, says the establishment, the fact that a computer is doing the job traditionally done by low-paid drones at call centers makes IVR polling inferior. Conventional wisdom is that polling validity is determined as much by the method of data collection as it is by the pollster’s filtering and analysis of that data.

That’s hogwash, encouraged by traditional pollsters who see their businesses threatened by lean and mean IVR upstarts. After all, IVR can save a client tens of thousands of dollars on a single poll. But empirical studies comparing the accuracy of traditional and IVR pollsters in recent cycles reveal that reputable robo-pollsters performed as well as or better than their human-powered peers. 

In November 2010, statistician Nate Silver analyzed the polling results in the final three weeks of that year’s election cycle. The most accurate pollster for the cycle was Quinnipiac University (human), followed closely by SurveyUSA (robo), YouGov (Internet) and Public Policy Polling (robo, and the firm we use at Daily Kos). Significantly further behind were Mason-Dixon (human), Marist (human) and CNN/Opinion Research (human). 

Far behind them was the conservative Rasmussen Reports, which wasn’t just laughably biased in its polling, but spectacularly wrong. It’s Rasmussen that’s often cited by news outlets when I ask why they won’t run stories based on IVR polls. But dismissing all IVR polling because of Rasmussen makes as little sense as doubting all human polling because Research 2000 — the firm that defrauded Daily Kos in 2010 by fabricating at least some of its results — used human interviewers. 

Yet critics persist. Most network news outfits continue to ignore IVR polls, as do many newspapers. “In our ABC News polling standards we don’t regard autodialed, pre-recorded polls as valid and reliable survey research,” sniffed ABC polling director Gary Langer early last year. “Some other news shops — the few that, in my view, take coverage of public opinion seriously — share these judgments.” 

Langer went further, claiming that IVR polls had notched impressive predictions based on pure guesses: “For reliability and validity alike, what matters are sound methods. And what matters next are substantive measurements, not horse-race bingo.”

This year, Public Policy Polling was the only pollster to publicly release numbers in both recent special elections. Special elections are the most difficult to poll — there’s too much ambiguity about turnout models to easily design the right voter screen. Well, in the New York 26th special election, PPP missed the final result by just 2 points. And in a poll of the California 36th special election for Daily Kos, PPP was off by just 1 point. It would be illuminating to compare PPP’s numbers this cycle with other live-caller pollsters, like maybe ABC News’ outfit. Alas, ABC News didn’t poll those special elections, possibly because it would have cost the network too much to gas up its bloated polling machine. PPP went in the field, did its work swiftly and efficiently, and accurately predicted the results of both races.

Ignoring sound and accurate data does nothing to serve the audience of these legacy media outlets. Sadly for them, there are plenty of other news outlets that will give their audience the latest numbers.

Moulitsas is the publisher and founder of Daily Kos (dailykos.com).