Data geeks vs. political pundits: Election 2012

One of the most interesting clashes of this election season doesn't involve candidates at all, but rather a new twist on the old battle between gut feelings and dispassionate analysis.

If you've been following the presidential election via TV and other mainstream journalism, you're likely hearing the race is very close -- perhaps even "tied." Typical is this headline last week from the Washington Post's The Fix political blog: "WaPo-ABC tracking poll: Presidential contest as close as close can be." If you've been reading statistical blogs, though, word is that President Obama is very likely to win re-election with more than 300 electoral votes, far more than the 270 he needs.

That's quite a disconnect.

Whose view should you trust? Long-time political pundits who have access to campaign insiders and long histories of covering electoral twists and turns? Or statistical experts who dispassionately pore over reams of polling data, run millions of simulations and predict likely outcomes?

Some pundits went public with their disdain for data geeks this week, notably Republican commentator Joe Scarborough, who said on his "Morning Joe" MSNBC show: "Anybody that thinks that this race is anything but a tossup right now is such an ideologue, they should be kept away from typewriters, computers, laptops and microphones for the next 10 days, because they're jokes." That criticism was aimed at the best-known of the stat geeks, Nate Silver, whose FiveThirtyEight blog was predicting Obama had a 73% chance of winning at the time (that rose to more than 86% today).

Silver responded that both his complex statistical model and "the simplest analysis of the polls would argue that Mr. Obama is winning. He's been ahead in the vast majority of polls in Iowa, Michigan, Minnesota, New Mexico, Nevada, Ohio, Pennsylvania and Wisconsin, and all the other states where the Democrat normally wins. These states add up to more than 270 electoral votes. It isn't complicated. To argue that Mr. Romney is ahead, or that the election is a 'tossup,' requires that you disbelieve the polls, or that you engage in some complicated interpretation of them." (He then also challenged Scarborough to a $1,000 wager, with the loser donating to the Red Cross).
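Silver's "it isn't complicated" argument -- states where a candidate consistently leads add up to 270 or more electoral votes -- is also the core idea behind the Monte Carlo simulations the modelers run. A minimal sketch of the approach, using made-up state win probabilities purely for illustration (these are NOT FiveThirtyEight's numbers, and real models account for correlated polling error across states):

```python
import random

# Hypothetical swing-state electoral votes and Democratic win probabilities.
# Illustrative numbers only, not any forecaster's actual estimates.
state_evs = {"Ohio": 18, "Florida": 29, "Virginia": 13, "Colorado": 9}
win_prob = {"Ohio": 0.75, "Florida": 0.50, "Virginia": 0.60, "Colorado": 0.65}
SAFE_EVS = 237  # electoral votes assumed safe for the Democrat in this toy example

def simulate(n=100_000):
    """Run n simulated elections; return the fraction in which the
    Democrat reaches the 270 electoral votes needed to win."""
    wins = 0
    for _ in range(n):
        evs = SAFE_EVS + sum(
            ev for state, ev in state_evs.items()
            if random.random() < win_prob[state]
        )
        if evs >= 270:
            wins += 1
    return wins / n

print(f"Estimated win probability: {simulate():.2f}")
```

The point of the exercise: even when no single state is a lock, a candidate leading in enough of them can have a win probability far above 50%, which is how "close in the polls" and "heavy favorite" can both be true.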

Other statistical analysts are even more emphatic that this race isn't close. Sam Wang at the Princeton Election Consortium this week pegged the probability of Obama's re-election at more than 98%. Wang has a great post comparing various statistical models of the election, and the pros and cons of each (including his own), for those who want to decide among data prognosticators.

Yet much of the media continue reporting the race as very tight, prompting a frustrated Paul Krugman to blog: "It's possible that the polls are systematically biased — and this bias has to encompass almost all the polls, since even Rasmussen is now showing Ohio tied. So Romney might yet win. But a knife-edge this really isn't, and any reporting suggesting that it is makes you stupider." Krugman, a Nobel laureate in economics, teaches at Princeton University and also writes opinion pieces for the N.Y. Times.

I see several possible reasons for the discrepancy between pundits and geeks:

* Pundits don't understand how to properly analyze poll data, too easily confusing, as Silver might put it, signal and noise. "The answer to 90% of poll-related questions is 'it's probably random variance and you shouldn't make too much of it,'" Silver tweeted. Or, as ESPN's John Hollinger said on Twitter, "most amusing thing about this election is watching political pundits make sports fans look like PhD mathematicians."

* Whether consciously or subconsciously, many in the media have an interest in a close race. Who wants to keep checking the latest news if the race is pretty much decided?

* The geeks are missing intangibles such as whether there's an enthusiasm gap; in other words, will the supporters of both candidates actually get out and vote in the same numbers they claim they will when talking to pollsters?
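Silver's "random variance" point in the first bullet can be made concrete with the standard margin-of-error formula for a simple random sample. This is a textbook simplification (real polls weight and adjust their samples), but it shows why a couple of points of movement between polls is often just noise:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a candidate's share p
    in a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A typical ~800-person tracking poll:
moe = margin_of_error(800)
print(f"+/- {moe:.1f} points")
```

For 800 respondents the margin of error is roughly plus or minus 3.5 points per candidate, so two successive polls can easily differ by several points with no real change in the race -- exactly the kind of swing that gets breathlessly reported as momentum.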

Why there's current acrimony between the two sides is a lot easier to figure out. "In a word, because geeks make pundits look stupid," says a post at the progressive blog Blue Mass Group. "Being a pundit is an awesome job, because you get paid for shooting your mouth off, and you really don't have to have much to back it up." Not only will the analysts make the opiners look silly if the race isn't close, but those analysts -- much like new media bloggers -- threaten the pundits' privileged position as knowing what others do not. On the other side, little aggravates people doing detailed statistical analysis more than an argument based solely on "You're wrong because I think so."

Who's going to be correct after the votes are counted? I'm less concerned about gut intangibles than two factors no polls can take into account. 1) Are all who want to vote able to? Between new voter ID laws and reported early-voting lines in Florida of six hours or more, that's an important question. 2) Can votes be counted accurately if a state's results are close?

Nevertheless, I'm going to side with the geeks on this one, even though I'm well aware that a low-probability outcome can still occur in a sample size of one. Politics aside, I'd like to see a win for rational analysis.

Copyright © 2012 IDG Communications, Inc.
