Beltway political seers spar with Berkeley professor

The best bookies give the appearance that they call it right even when, on occasion, they get it wrong, and Charlie Cook and Stuart Rothenberg, two of the most prominent political prognosticators in Washington, are no different.

Cook and Rothenberg are the go-to bookies for every flack, hack and wonk in Washington seeking to read the political tea leaves and figure out whether campaigns are flopping or soaring midstream.

While both missed the Republican wave in the House in 1994, they usually get it right, or if they’re wrong, then they’re close to being right. After all, if a race is going to be decided by a few hundred votes, who can tell how they’ll be split?

“How’d we do? Not bad. Not bad at all,” Rothenberg wrote in his Nov. 29 newsletter, the Rothenberg Political Report. He predicted that Democrats would gain six Senate seats, which they did, and 30 to 36 House seats (they won 29).  

But even legendary bookies are second-guessed. University of California-Berkeley political scientist Bruce Cain and his students at Berkeley’s D.C.-based program argue in a new study that the top political seers’ claims of accuracy are a bit inflated. 

The study has infuriated the small but elite community of election prognosticators — Cook, Rothenberg, University of Virginia political scientist Larry Sabato and the political team at Congressional Quarterly — who have cried foul, citing flawed methodology and a deep misunderstanding of their jobs. Cook even called Cain “clueless.”

Cain, who shared a summary of the study with The Hill, used a Nov. 2 cutoff date to get a uniform measure of accuracy at the final stage in the campaigns. The study reported that at that point, Rothenberg was more accurate than Cook (author of the Cook Political Report), Sabato, and CQ. Rothenberg predicted 14 of the 29 seats that ended up tilting in the Democrats’ favor, whereas Cook, Sabato and CQ predicted that five, 10 and eight seats, respectively, leaned in the Democrats’ favor.

Rothenberg was wrong in two cases, and he didn’t include two races that Democrats went on to win. CQ was wrong in nine seats, Cook in four seats, and Sabato in six seats.

Cain quantified accuracy by awarding one point for correctly predicting a seat that flipped and subtracting one point for wrongly predicting a seat would change. Rothenberg earned 10 points because he correctly predicted that 14 seats would flip and erred in only four races (14 minus four is 10). Cook earned just one point; Sabato earned four points; and CQ scored a minus-one. 
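Cain's scoring rule, applied to the figures reported in the study, can be sketched as follows (the function name and structure here are illustrative, not from the study itself):

```python
# Cain's rule as described in the study: one point for each seat a
# forecaster correctly predicted would flip, minus one point for each
# erroneous call.
def cain_score(correct_flips: int, errors: int) -> int:
    """Net accuracy score under Cain's plus-one/minus-one rule."""
    return correct_flips - errors

# (correct flip predictions, erroneous calls) as reported in the article
records = {
    "Rothenberg": (14, 4),
    "Cook": (5, 4),
    "Sabato": (10, 6),
    "CQ": (8, 9),
}

for name, (correct, errors) in records.items():
    print(f"{name}: {cain_score(correct, errors)}")
# Rothenberg: 10, Cook: 1, Sabato: 4, CQ: -1
```

The scores match those the article reports: 10 for Rothenberg, one for Cook, four for Sabato, and minus-one for CQ.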

But the biggest uproar came from Cain’s criticism that the forecasters inflate their claims of accuracy by declining to predict races placed in the “toss up” category. Cook put 20 of the 29 seats that flipped in the “pure toss up” category, where he offered no predictions. Rothenberg, Sabato and CQ placed 11, 13 and 12 seats, respectively, in a similar category and offered no predictions.

“[If] you are trying to predict the total number of seats that would swing from the Republicans to the Democrats, the flipped seat totals are the key statistic,” Cain wrote.

Cook and Rothenberg slammed the report.

“We don’t see it as our job — nor do our subscribers expect — to arbitrarily ‘push’ races into one column or the other when we have no real confidence behind that decision,” Cook wrote in an 11-paragraph e-mail sent to The Hill.

He continued, “As someone in our office said about this practice of condemning the ‘Toss Up’ rating, ‘do they think this is some kind of college drinking game or something?’ ”

Rothenberg argued that the study evaluates his work on something he does not do: predict the outcome of every race. Instead, he and Cook rate the competitiveness of a race.

“We try to provide information based on data, meeting the candidates, and reporting,” Rothenberg said. “We’re not trying to guess every race. [If] you get everything right, that’s chance. I’m not good enough to guess who’s going to win a 50-50 race.”

Cook agreed, saying, “Our goal is to determine the competitiveness of races — not their outcome.”

CQ Politics Editor Bob Benenson said they do the best they can with what they have at the time.

“My main obligation in doing these ratings is when we say a race is safe we’re saying ignore this race,” he said. “If any of those flip, then we look stupid. We use all the guiding criteria that we can, add a little guesswork, and we came pretty darn close.”

CQ predicted that Rep. Anne Northup (R-Ky.) and Rep.-elect Brad Ellsworth (D-Ind.) would both win close races. But Northup lost by two points while Ellsworth won by 22 points.  

“We were more accurate in the [Northup] race than in the [Ellsworth] race,” Benenson said. 

Sabato also criticized Cain’s methodology, saying, “I don’t know how you do a study like this and ignore the overall predictions.”

Cain said that he and his students “respect” the political prognosticators they evaluated. “We only aim to compare as objectively as we can,” he said. “We regret that anyone finds our work threatening.”

A fuller report is due out next year.

In the weeks since the midterm election, several other political reporters have been explaining why the actual outcome differed from their predictions.

Ken Rudin, National Public Radio’s political editor and author of its “Political Junkie” blog, predicted that Democrats would pick up 24 House seats. But Rudin picked the wrong seats and admitted he goofed.

“But I underestimated the extent of the GOP collapse in the House; 23 (count ’em) incorrect calls out of 435!” he wrote in his blog.

Writing in Reason Magazine, Katherine Mangu-Ward apologized for putting her faith in futures-market predictions about the midterm outcome. On Nov. 6, the Iowa Electronic Markets accurately forecast that Democrats would recapture the House. But traders did not believe that Republicans would lose the Senate; contracts paying off if Republicans held the Senate were trading at a healthy 70 cents per share.

Predicting elections with mathematical models or using an array of polling, campaign finance, and demographic data is difficult and as much art as science. And in a sweep year when incumbents are washed out of office, it is even more difficult. 

“People still don’t understand what waves are because they only see them once a decade,” said Jennifer Duffy, who analyzes gubernatorial and Senate races for the Cook Political Report. “Usually campaigns matter, but in a wave they don’t.”