By Mark Mellman - 06/17/14 08:31 PM EDT
I’m going to defend the indefensible and attempt to explain the inexplicable. I’m unwilling to throw up my hands and blame statistics or incompetence for the failure of polls to foresee Rep. Eric Cantor’s primary defeat.
Start, in Holmesian fashion, by eliminating the impossible. Those with a middle-school-level knowledge of statistics explain that one in 20 polls is just “wrong.” Never mind the origin of this foolish calculation; the simple fact is that the chances a poll would show Cantor (R-Va.) garnering 62 percent of the vote while he was “really” getting 44 percent are about one in a trillion — literally. Not quite Holmesian impossibility, perhaps, but close, and certainly by far the least likely explanation between Cantor’s poll and the final result. A pollster would have to work hard to miss by this much.
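That “one in a trillion” figure can be sanity-checked with a back-of-the-envelope calculation. The sketch below assumes a sample of roughly 400 likely voters, a common size for a single-district poll (the column does not report the poll’s actual sample size), and uses the normal approximation to the binomial sampling distribution:

```python
import math

n = 400          # assumed sample size; the column doesn't state the poll's n
p_true = 0.44    # Cantor's actual vote share
p_poll = 0.62    # the share his poll reported

# Standard error of a sample proportion under the normal approximation
se = math.sqrt(p_true * (1 - p_true) / n)
z = (p_poll - p_true) / se                  # about 7.3 standard errors
tail = 0.5 * math.erfc(z / math.sqrt(2))    # P(poll shows >= 62% | true 44%)
print(f"z = {z:.1f} sigma, probability roughly {tail:.0e}")
```

Under these assumptions the probability comes out on the order of one in a trillion, consistent with the column’s claim: sampling error alone essentially cannot produce a miss this large.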
Turnout is another frequently cited villain in this drama. It was about 40 percent higher than in any previous GOP congressional primary in Virginia. Cantor’s pollster said he used the 2012 primary as a guide. In fairness, that wasn’t a foolish assumption. In fact, the 2012 primary turnout in this district was the highest the state had seen in any Republican House primary — until last week.
But if the poll was even close to “right” for past primary voters and higher GOP turnout was the culprit, every single one of those additional voters would have had to cast a ballot for Brat. It would be the biggest difference in attitudes between more and less frequent voters in any election ever studied. That, too, is extraordinarily unlikely.
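The arithmetic behind that claim is worth making explicit. Using an illustrative 2012-sized electorate of 65,000 (a hypothetical figure; the column does not report the district’s actual vote totals), a 40 percent turnout surge, the poll’s 62 percent, and the final 44.5 percent:

```python
# Illustrative numbers only; the actual district totals are not given in the column.
base_turnout = 65_000                        # assume a 2012-sized electorate
actual_turnout = round(base_turnout * 1.4)   # turnout ran ~40% higher

cantor_predicted = 0.62 * base_turnout       # poll's 62% of the expected electorate
cantor_actual = 0.445 * actual_turnout       # his real 44.5% of the larger electorate

extra_voters = actual_turnout - base_turnout
extra_for_cantor = cantor_actual - cantor_predicted
print(f"Of {extra_voters:,} extra voters, only about {extra_for_cantor:,.0f} "
      f"({extra_for_cantor / extra_voters:.1%}) could have backed Cantor")
```

Whatever base figure you plug in, the structure is the same: 44.5 percent of a 40-percent-larger electorate is almost exactly 62 percent of the original one, so if the poll was right about habitual primary voters, the surge voters would have had to break nearly unanimously for Brat.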
If it is not statistical error, Democratic interference or higher GOP turnout, how do we explain the difference between the polls and the outcome?
Let’s entertain a possibility that seems improbable, but one I would argue is more likely than any of those so far considered: The poll was more or less correct. Perhaps it was off by 4 or 8 points, but not 18.
Thirteen and 14 days before the election, when Cantor’s poll was taken, the majority leader had something like 62 percent of the vote. Six days later, when another poll was taken by Vox Populi, Cantor’s support had shrunk by 10 points, to 52 percent. And then, in the eight days between that poll and the time the votes were counted, Cantor slipped further, to his final 44.5 percent.
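Taking those numbers and dates at face value (the exact field dates are approximations), the implied rate of decline is roughly steady, which is what you would expect from a genuine collapse in support rather than a polling fluke:

```python
# (days before election, Cantor's share) as reported in the column;
# the precise field dates are assumptions
readings = [(14, 62.0), (8, 52.0), (0, 44.5)]

rates = [(s1 - s2) / (d1 - d2)
         for (d1, s1), (d2, s2) in zip(readings, readings[1:])]
for ((d1, _), (d2, _)), rate in zip(zip(readings, readings[1:]), rates):
    print(f"{d1} -> {d2} days out: losing about {rate:.1f} points per day")
```

A loss of one to two points per day over two weeks fully accounts for the gap between the 62 percent poll and the 44.5 percent result, with no statistical anomaly required.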
How plausible is this alternative based on what we know about primaries? Fairly. Without the anchor of party identification, primary horse races can shift much further and faster than those in general elections.
This is hardly news.
Professors Marcus Felson and Seymour Sudman warned us about it in a 1975 article that pointed out, for example, that a poll done eight days before the 1968 New Hampshire presidential primary overestimated Lyndon Johnson’s support by 35 points and underestimated Eugene McCarthy’s by 27. “Particularly ... in primaries” they wrote, “polls must be conducted until election eve if they are to be accurate ... positions can change rapidly in primary elections.”
More recently, Nate Silver assessed the performance of public polls during the 2012 GOP presidential primaries. “Polls ... conducted a week out have missed by about 10. And the whole period from about one week to two weeks before the primary has been a disaster, with an average miss of about 12 points.”
If there is a clear lesson here, it is not to avoid a particular pollster, but rather to never assume a lead will hold, especially in primaries. If you want to be accurate, keep polling.
Mellman is president of The Mellman Group and has worked for Democratic candidates and causes since 1982. Current clients include the Majority Leader of the Senate and the Democratic Whip in the House.