In the last week or so, an intense kerfuffle broke out over the poll-prognosticator Nate Silver and his blog at the New York Times, FiveThirtyEight. Silver, a statistician, has been predicting a decisive Obama victory for a very long time, based on his very complicated statistical model, which very, very few of his fans or detractors understand.
On any given day, Silver might have announced that -- given the new polling data -- "the model" was now finding that the president had an 86.3 percent chance of winning. Not 86.4 percent, you fools. Not 86.1 percent, you Philistines. But 86.3 percent, you lovers of reason.
Not surprisingly, for nervous Mitt Romney supporters, Silver's model has been a source of vexation. For nervous Obama supporters, it's been a constant reassurance. On her Twitter feed, Katha Pollitt, a columnist for the left-wing magazine The Nation, prodded Silver: "Why are you on a plane when you should be at yr desk updating 538 EVERY FIVE MINUTES?"
When Josh Jordan, a National Review colleague of mine, posted a data-heavy and entirely civil critique of some of Silver's projections, New York Times columnist Paul Krugman unleashed a diatribe denouncing Jordan and the National Review for what he saw as a kind of heresy.
"On the right, apparently, there is no such thing as an objective calculation. Everything must have a political motive," Krugman fumed. "This is really scary," he added. If "these people triumph, science -- or any kind of scholarship -- will become impossible."
Now, bear in mind that Jordan's critique centered on what he (a numbers-cruncher himself) argues is Silver's over-reliance on small-state polls.
And on this rock the future of science -- nay, scholarship itself -- shall founder!
Now, I have no idea whether Silver's model is the psephological Rosetta Stone some hope -- or fear -- it to be. And no one else does either.
The truth is that any statistician can build a model. They do it all the time. They make assumptions about the electorate, assign weights to polls and economic indicators, etc., and then they wait for the sausage to come out. No doubt some models are better than others, and some models are simply better for a while and then regress to the mean. But ultimately, the numbers depend on the values you assign to them. As computer programmers like to say: garbage in, garbage out.
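The point about weights can be sketched in a few lines of Python. To be clear, this is a toy illustration, nothing like Silver's actual model: the polls, margins, and weights below are all invented. It shows only that the same data, weighted differently, can yield opposite conclusions.

```python
# Hypothetical illustration: a weighted average of poll margins.
# All numbers here are invented for demonstration purposes.

def weighted_average(values, weights):
    """Return the weighted mean of `values` using `weights`."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Three made-up poll margins (candidate's lead, in points).
polls = [2.0, -1.0, 4.0]

# Two analysts weight the same polls differently ...
weights_a = [1.0, 1.0, 1.0]   # treat all polls equally
weights_b = [0.2, 3.0, 0.2]   # trust the middle (outlier) poll most

# ... and reach opposite conclusions from identical data.
print(weighted_average(polls, weights_a))  # positive margin (about +1.67)
print(weighted_average(polls, weights_b))  # negative margin (about -0.53)
```

Garbage in, garbage out: the output is only as good as the assumptions behind the weights.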