You can’t trust polls: Clinton leads, but our polling methods are bunk
Virtually all the opinion polls right now give Hillary Clinton a firm lead over Donald Trump in the race for the White House this November. The latest poll average compiled by RealClearPolitics puts her at least five points ahead of Trump, with or without Gary Johnson and Jill Stein included as choices; the Huffington Post Pollster puts the chance that “Clinton is very likely leading” at better than 99 percent; and Nate Silver’s FiveThirtyEight gives her an 85 percent chance of winning.
It looks like the race is decided.
But that was the same sentiment that led the major pollsters 68 years ago to stop polling several weeks before the 1948 election. Tom Dewey’s lead was so firm and consistent that there was no way Harry Truman would be able to win. At least, so it seemed. The man from Missouri did win, of course, thrust to victory by a late surge of votes sparked by his whistle-stop campaign.
If, as Harold Wilson once said, a week is a long time in politics, what do we call the three weeks left until Election Day? More than ample time to squander a lead or to catch up.
Polling would never again be suspended weeks before Election Day, no matter how big the lead in the polls. Yet this has not proved to be a guarantee that polls get it right in the end.
In the most recent presidential election, Gallup’s final estimate of likely voters, issued on the eve of Election Day in 2012, projected Mitt Romney as the winner. A day later, Obama had won by nearly four percentage points.
Soon after this debacle the Gallup Organization, the inventor of scientific polling in the 1930s, fell on its sword and quit the election horse-race business altogether. Rasmussen got it wrong in 2012 as well, but it opted to stay in the business.
But in all fairness to those polling giants, they weren’t alone. Three pollsters wimped out, projecting a tied vote, and of the four remaining polls on the RealClearPolitics summary, only two predicted an Obama victory, and those two did so with minimal confidence.
Despite avoiding the fatal mistake of 1948, has polling really gotten any better divining voters’ choices?
To start with something basic, opinion polls are really about “opinions,” not actions. At their best, they can tell us how people feel about political issues and personalities. Do voters, for instance, like or dislike candidates such as Hillary Clinton and Donald Trump?
Yet having an opinion and acting on it are two different things. Barely 6 in 10 voting-age American citizens turn out for presidential elections. Ascertaining the opinions of 100 citizens is just a start. Now you have to determine which 60 of them actually take the time to mark a ballot. They are the “likely voters.” They are the only ones that count. But to find them is no easy chore.
It is ingrained in all of us that voting is a civic duty. So nearly all of us say, oh yes, I’ll vote, and then many do not follow through. Miscalculations of which respondents will turn out to vote can easily wreck a poll prediction. So would it be a sure thing if we just dealt with actual voters?
Election-day exit polls quiz citizens who have just voted, asking them to mark their choice one more time on a two-page questionnaire. This happens minutes after the vote is cast, so there is no reason to worry that voters don’t remember how they voted; also, since they are filling out a form, voters need not directly express their decision to the poll-taker.
With a properly drawn random sample, such an exit poll should hit the result of the election like a hammer on the head of a nail. But that didn’t happen in the 2004 presidential election. At the end of a full day of exit-polling, the final tally of the national electorate showed Kerry beating Bush by 51 to 48. It was a safe lead, way beyond statistical shadows of doubt, given the large number of voters in the poll (over 10,000). But of course, Bush was re-elected that day.
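To see why that lead looked safe on paper, the standard margin of error for a sample proportion shrinks with sample size. A minimal sketch of that arithmetic, using the conventional 95 percent normal-approximation formula and illustrative figures from the 2004 exit poll (Kerry at 51 percent among roughly 10,000 respondents):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative numbers: a 51% share in a sample of 10,000 voters.
moe = margin_of_error(0.51, 10_000)
print(round(moe * 100, 2))  # ≈ 0.98, i.e. about ±1 percentage point
```

A three-point lead dwarfs a one-point margin of error, which is exactly why the result seemed beyond statistical doubt. But that formula only measures random sampling noise; it says nothing about the systematic bias from nonresponse that actually sank the poll.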
What could possibly go wrong with a poll of actual voters, moments after they voted?
The reason for the misfire, it seems, was the refusal of some voters who had been selected for the exit poll to participate. More Bush voters demurred than did Kerry voters, skewing the poll result in Kerry’s favor. Failure to respond is another Achilles heel for pre-election polls. It is rampant in polls conducted over the telephone, still the most common means of getting respondents.
Few if any polling organizations report the response rate, that is, the percentage of people selected by some form of probability sampling who actually complete the interview. Participation can be as low as 1 in 10, because the vast majority of those who are called do not answer poll questions. Can pollsters just ignore those who don’t respond? Do people who decline to participate vote the same way as those who do?
To assume so seems a risky bet — a bet Kerry lost in 2004.
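A toy calculation shows how little differential nonresponse it takes to flip an apparent winner, even when every respondent answers truthfully. The response rates below are purely hypothetical, chosen only to illustrate the mechanism:

```python
def observed_share(true_share, rate_a, rate_b):
    """Poll share for candidate A when A's voters respond at rate_a
    and B's voters respond at rate_b (all respondents answer truthfully)."""
    a = true_share * rate_a
    b = (1 - true_share) * rate_b
    return a / (a + b)

# Hypothetical: suppose candidate A actually wins 50.7% of voters, but A's
# voters complete the exit poll at 50% versus 56% for B's voters.
print(round(observed_share(0.507, 0.50, 0.56) * 100, 1))  # ≈ 47.9
```

Under those made-up rates, the true winner polls at about 48 percent and appears to be losing, which is roughly the shape of the 2004 misfire: a modest gap in willingness to participate, compounded over thousands of interviews, outweighs even a tiny sampling margin of error.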
So hold off on trusting poll-driven proclamations of a Clinton victory just yet. Voters have a way of always getting the last word.
Norpoth is professor of political science at Stony Brook University. He is co-author of ‘The American Voter Revisited’ and has published widely on topics of electoral behavior. His book, “Commander in Chief: Franklin Roosevelt and the American People,” is forthcoming. He can be reached at firstname.lastname@example.org.
The views expressed by contributors are their own and not the views of The Hill.