It’s axiomatic: producing accurate results requires a poll to survey the right people. Too often, this stricture is honored in the breach.
To illustrate the problem, I will violate a fundamental rule of my profession: that consultants never talk about their losses. Indeed, we seem to share a genetic defect that prevents us from even recalling such rare events — it’s the only way we can all maintain our 90 percent win records.
A few weeks ago, an outstanding client, Margaret Anderson Kelliher, lost the Democratic-Farmer-Labor Party nomination for governor of Minnesota. A dynamic campaigner, she won the endorsements of major newspapers, educators, labor organizations and the DFL itself. A new face on the statewide scene, she succumbed to former Sen. Mark Dayton, who outspent her by 3 to 1, while another opponent outspent her by 4 to 1. Despite her severe financial disadvantage, Kelliher lost by a single percentage point.
How accurate were the polls? Ahem … A SurveyUSA poll five days out suggested Dayton held a huge 16-point lead, while the third candidate, Matt Entenza, who eventually captured just 18 percent of the vote, had 22 percent.
One might be tempted to chalk up the failure to IVR polling, but a few days earlier the distinguished Minneapolis Star Tribune poll, which uses live interviewers, gave Dayton 40 percent to Kelliher’s 30 — a 10-point margin that was also far off the final result.
I suggest the common flaw in these two polls was interviewing the wrong people. We have no way to estimate the implicit turnout for the SurveyUSA poll; however, IVR polls generally apply few controls on those they interview. The Star Tribune poll is more forthcoming with its data. That survey appears to have assumed a turnout of over 60 percent for a primary in which fewer than 15 percent actually cast ballots.
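The turnout arithmetic can be sketched with a few lines of code. The figures below are purely illustrative, not the actual Minnesota numbers: assume the 15 percent of adults who really vote split 48-47 for candidate A, while non-voters who slip past a loose screen would answer 30-55 for candidate B. Blending the two groups in the proportions a 60 percent turnout assumption implies shows how a close race can masquerade as a blowout.

```python
# Hypothetical illustration only; these support levels are invented,
# not drawn from any actual Minnesota poll.
actual_turnout = 0.15   # share of adults who really cast ballots
assumed_turnout = 0.60  # share a loose likely-voter screen admits

voters = {"A": 0.48, "B": 0.47}      # preferences of actual voters
nonvoters = {"A": 0.30, "B": 0.55}   # preferences of non-voters polled anyway

def blended(turnout):
    """Poll result when the sample is built to match `turnout`:
    all real voters, padded out with non-voters who pass the screen."""
    w_voters = actual_turnout / turnout  # real voters' weight in the sample
    w_non = 1 - w_voters                 # non-voters' weight in the sample
    return {c: w_voters * voters[c] + w_non * nonvoters[c] for c in voters}

tight = blended(actual_turnout)   # screen matches reality
loose = blended(assumed_turnout)  # screen admits 60 percent of adults

print(f"tight screen: A {tight['A']:.1%}, B {tight['B']:.1%}")
print(f"loose screen: A {loose['A']:.1%}, B {loose['B']:.1%}")
```

With these invented numbers, the tight screen reproduces the one-point race among actual voters, while the loose screen hands B a lead of more than 18 points, roughly the scale of error the published polls displayed.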
The Kelliher campaign, with its more limited resources, focused on likely voters with its extensive and effective phone, mail and field programs. In addition, the relatively few who voted were more likely to pay attention to the DFL and newspaper endorsements as well as to ads and information from the Kelliher campaign. In short, in a low-turnout affair like this, we would expect the few who vote to be quite different from the many who are polled.
Our own polling, which used registration-based-sampling techniques to simulate the likely electorate, proved spot-on, showing Kelliher with the same one-point deficit a week out that she recorded on Election Day (that’s the self-serving element).
So a poll that surveyed the “right” people proved accurate while polls of the “wrong” people proved wildly off the mark.
This is not the only time we have seen polls founder in low-turnout elections. Various surveys in the closing days of the Florida gubernatorial primary showed Bill McCollum leading by four points in a race he went on to lose by three (though one firm called the outcome correctly).
These failures were anticipated in studies conducted by Professors Alan Gerber and Don Green of Yale, who compared the accuracy of the traditional gold standard of sampling — random-digit-dialing — with registration-based sampling in the 2002 midterms. These general elections, of course, had higher turnout than primaries, but in six of the eight races they examined, registration-based sampling, which is more likely to survey real voters, proved more accurate.
The higher the turnout, the more closely the population surveyed in a random-digit sample resembles the actual electorate. But when turnout is low, the differences are magnified, and polls that question the wrong people (non-voters) may well prove faulty.
Mellman is president of The Mellman Group and has worked for Democratic candidates and causes since 1982. Current clients include the majority leaders of both the House and Senate.