Trust methodologies, not pollsters

Congratulations to President Obama and winning senators and representatives. Victorious incumbents like Obama accomplished a historic feat, overcoming the miserable economy and wrong-track sentiment. Kudos are also in order for the “winning” pollsters, those research organizations that came closest to “predicting” the election outcomes. Several lists of the most accurate firms are out there — and I am troubled by the storylines that often accompany these lists.
The media coverage of poll accuracy typically glorifies firms and individual pollsters, not their methodologies. This is so dangerous. Just compare the 2008 accuracy lists with those from 2012 and you’ll see what I mean. Several of the most accurate pollsters in 2008 (like Gallup and Rasmussen) have tumbled down the rankings. Did they suddenly get dumb? I doubt it. In all likelihood, their methodologies changed or real-world circumstances outran the efficacy of their methods. For example, Rasmussen’s focus on landline phones, acceptable in 2008, simply didn’t work in 2012, when so many voters had switched to cellphones. It is also possible, a statistician would acknowledge, that the difference between 2008 and 2012 for a fallen pollster is simply the one-chance-in-20 event in which a poll’s error exceeds its stated margin. Probability theory assures us that the margin of error holds only 95 percent of the time, even with consistent methodology.
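That one-in-20 logic is easy to check with a quick simulation. This is a sketch with assumed values — a true support level of 50 percent and samples of 1,000 respondents — not figures from any actual poll:

```python
import random

random.seed(1)

TRUE_SUPPORT = 0.50   # assumed true share of voters backing a candidate
N = 1000              # assumed sample size per poll
MOE = 0.031           # 95% margin of error for n=1000: ~1.96 * sqrt(0.25/1000)

trials = 10_000
outside = 0
for _ in range(trials):
    # Simulate one poll: N independent respondents drawn from the electorate
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(N))
    estimate = hits / N
    if abs(estimate - TRUE_SUPPORT) > MOE:
        outside += 1

# Roughly 1 poll in 20 misses by more than its margin of error,
# even though the "methodology" here is flawless by construction.
print(f"share of polls outside the margin of error: {outside / trials:.3f}")
```

The point of the exercise: a pollster can do everything right and still land outside the margin of error about 5 percent of the time, so a single bad cycle proves less than the accuracy rankings imply.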

I have not seen any real accounting of how the more and less accurate firms handled relevant methodological choices. Take, for example, callbacks to voters who don’t answer the first call. If the pollster is in a hurry, he or she substitutes another voter for the one who didn’t pick up. This can have a biasing effect, especially if the hard-to-reach voter holds different views from the easy-to-reach one. High-quality surveys make multiple attempts to reach the originally chosen voter. But if a poll is being completed in a single 48-hour window, there are limited opportunities for callbacks. Few analyses of polling accuracy take into account the length of the field period and the methods for handling callbacks, but they should.
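The substitution effect described above can be illustrated with a toy simulation. All of the numbers here are assumptions for illustration — 30 percent of voters hard to reach, with support rates of 55 and 40 percent among easy- and hard-to-reach voters — not estimates from any real survey:

```python
import random

random.seed(2)

# Assumed electorate: hard-to-reach voters differ from easy-to-reach ones.
HARD_SHARE = 0.30     # share of voters who are hard to reach
SUPPORT_EASY = 0.55   # candidate support among easy-to-reach voters
SUPPORT_HARD = 0.40   # candidate support among hard-to-reach voters
TRUE_SUPPORT = (1 - HARD_SHARE) * SUPPORT_EASY + HARD_SHARE * SUPPORT_HARD

N = 100_000  # very large sample, so random sampling noise is negligible

def poll_with_substitution():
    # A rushed poll: every hard-to-reach voter is replaced by another
    # easy-to-reach voter, so only easy-to-reach views are recorded.
    hits = sum(random.random() < SUPPORT_EASY for _ in range(N))
    return hits / N

def poll_with_callbacks():
    # Repeated callbacks eventually reach the originally chosen voter,
    # so both groups appear in their true proportions.
    hits = 0
    for _ in range(N):
        rate = SUPPORT_HARD if random.random() < HARD_SHARE else SUPPORT_EASY
        hits += random.random() < rate
    return hits / N

print(f"true support:      {TRUE_SUPPORT:.3f}")
print(f"with substitution: {poll_with_substitution():.3f}")
print(f"with callbacks:    {poll_with_callbacks():.3f}")
```

Under these assumptions the substituting poll overstates support by more than four points — larger than a typical margin of error — while the callback poll tracks the true figure. That is why field-period length and callback policy deserve a place in accuracy analyses.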
Because it’s just too complicated to assess all the ingredients that go into a fully cooked poll, it’s all too convenient to look at the results and glorify the most accurate firms, even when their methods are suspect, raising the possibility that they were simply lucky. There is historical precedent. For years, the Literary Digest polls were hailed for their accuracy. But the 1936 election finally revealed the weakness of their methods. Are there some Literary Digests at the top of today’s most-accurate-poll lists?
Because of the complexity of polling, and because most of us don’t really have the time or background to analyze the differences, we typically choose to pick a poll that suits our tastes, without knowing whether the pollster’s methods are sound. What suits our tastes? It might simply be that the poll taps our favorite candidate as the likely winner. Or the poll might have a snappy name that catches our fancy. Maybe we saw the pollster on TV and he or she seemed smart. Some polling consumers like the new kid on the block, others the older established names. Of course, none of these factors tells us much of anything about methodologies. The message should be clear: Be careful how much personal faith you put in any particular pollster. Your faith is better placed in that pollster’s methodologies.
Polling’s professional organization, the American Association for Public Opinion Research (AAPOR), is pushing for more transparency in polling methods. That would be a good thing. But some pollsters will resist, claiming proprietary methods they don’t want to disclose lest they benefit competitors. Secrecy adds to mystery and celebrity. And if they revealed their true methods, we might see that they were lucky, not great.

Hill is a pollster who has worked for Republican campaigns and causes since 1984.