Along with President Obama and many Democratic Senate candidates, the winners on election night included people who tried to predict the results by aggregating polling data -- most famously, Nate Silver of the New York Times. Their method was basically to average the recent polls, and use the result as a prediction. (Silver also did more sophisticated things, weighting the polls based on past accuracy, correcting for house effects, and running Monte Carlo simulations to calculate probabilities of victory.) Those who tried to predict the results by gauging enthusiasm at political rallies and trusting their gut feelings to adequately represent the preferences of the American electorate didn't do as well. If you'd like to catch up on the intense debate over poll aggregation as a way of predicting elections, I'll refer you to Brad DeLong's summary of 2 reasonable criticisms of Silver and 45 bad/ridiculous ones.
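The basic recipe (average the recent polls in each state, then run Monte Carlo simulations over the resulting win probabilities) can be sketched in a few lines. This is a toy illustration, not Silver's actual model: the per-state probabilities below are invented for the example, and real forecasters also model correlated polling errors across states rather than treating them as independent.

```python
import random

# Hypothetical per-state (electoral votes, Obama win probability) pairs.
# In a real model these probabilities would come from weighted poll
# averages; the numbers here are invented for illustration.
STATE_PROBS = {
    "FL": (29, 0.50), "OH": (18, 0.79), "VA": (13, 0.79),
    "CO": (9, 0.80),  "NV": (6, 0.92),  "IA": (6, 0.84),
    "NH": (4, 0.85),  "WI": (10, 0.97), "PA": (20, 0.99),
    "MN": (10, 0.99),
}
SAFE_OBAMA_EV = 207  # electoral votes from states treated as certain

def simulate(n_trials=20000, seed=0):
    """Estimate P(Obama reaches 270 EV) by Monte Carlo simulation."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        ev = SAFE_OBAMA_EV
        for evs, p in STATE_PROBS.values():
            if rng.random() < p:  # state falls to Obama in this trial
                ev += evs
        wins += ev >= 270
    return wins / n_trials

print(simulate())
```

The independence assumption is the big simplification: if all polls are wrong in the same direction, states flip together, which widens the real distribution of outcomes considerably.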
I put together this chart ranking 25 different predictions from people across the American political spectrum. (I think my Doktorvater would approve.) They're ranked by StateFail: the sum of the actual vote margins, in percentage points, in the states each predictor called for the wrong candidate, with the popular vote prediction used to break ties.
(The original chart also showed each predictor's pick in the ten contested states, NV, PA, MN, WI, IA, NH, OH, CO, VA, and FL, all of which Obama won.)

| # | Predictor | Predicted outcome (EV) | Predicted pop. margin (actual: 3.2%) | StateFail | Grade |
|---|-----------|------------------------|--------------------------------------|-----------|-------|
| 1 | Markos Moulitsas and Daily Kos Elections | Obama 332-206 | 3.5% | 0 | A+ |
| 2 | Nate Silver, New York Times | Obama 332-206 | 2.5% | 0 | A+ |
| 3 | Simon Jackman, Huffington Post | Obama 332-206 | 1.7% | 0 | A |
| 4 (tie) | Josh Putnam, Davidson College | Obama 332-206 | | 0 | A |
| 4 (tie) | Drew Linzer, Emory University | Obama 332-206 | | 0 | A |
| 6 | Sam Wang, Princeton University | Obama 303-235 | 2.34% | 0.9 | A- |
| 7 | Jamelle Bouie, American Prospect | Obama 303-235 | 2.2% | 0.9 | A- |
| 8 (tie) | TPM Polltracker | Obama 303-235 | 0.7% | 0.9 | A- |
| 8 (tie) | RealClearPolitics | Obama 303-235 | 0.7% | 0.9 | A- |
| 10 | Intrade Prediction Market | Obama 303-235 | | 0.9 | A- |
| 11 (tie) | Ezra Klein, Washington Post | Obama 290-248 | | 3.9 | B |
| 11 (tie) | Larry Sabato, University of Virginia | Obama 290-248 | | 3.9 | B |
| 13 | Cokie Roberts, ABC News | Obama 294-234 | | 5.6 | B |
| 14 | Dean Chambers, Unskewed Polls | Romney 275-263 | 1.79% | 10.5 | C+ |
| 15 | Erick Erickson, RedState | Romney 285-253 | | 17.2 | C |
| 16 | SE Cupp, MSNBC | Obama 270-268 | | 22.0 | C- |
| 17 | Karl Rove, Bush advisor | Romney 285-253 | 3% | 23.9 | D+ |
| 18 | Ben Shapiro, National Review | Romney 311-227 | | 28.0 | D |
| 19 | Ben Domenech, The Transom | Romney 278-260 (ME-2 to Romney) | | 26.7+? | D |
| 20 | Christian Schneider, Milwaukee J-S | Romney 291-247 | | 30.5 | D- |
| 21 | James Pethokoukis, AEI | Romney 301-227 | 2% | 30.5 | D- |
| 22 | Michael Barone, Washington Examiner | Romney 315-223 | | 33.8 | F |
| 23 | George Will, Washington Post | Romney 321-217 | | 35.7 | F |
| 24 | Steve Forbes, Forbes Magazine | Romney 321-217 | | 40.5 | F |
| 25 | Dick Morris, Fox News | Romney 325-213 | | 41.5 | F |
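The StateFail score in the chart can be computed mechanically: for each predictor, sum the actual vote margins in the states they called for the wrong candidate. A minimal sketch, using illustrative margin values rather than the exact figures behind the chart:

```python
# Approximate Obama margins (percentage points) in the contested states.
# These values are illustrative, not the exact figures used in the chart.
ACTUAL_MARGINS = {"FL": 0.9, "VA": 3.0, "OH": 2.0, "CO": 4.7, "NH": 5.8,
                  "IA": 5.6, "NV": 6.6, "WI": 6.7, "PA": 5.2, "MN": 7.7}

def state_fail(picks):
    """Sum of actual margins in the states a predictor gave to Romney.

    `picks` maps state -> predicted winner ("O" or "R").  Every state in
    ACTUAL_MARGINS went to Obama, so any "R" pick is a miss.
    """
    return round(sum(m for s, m in ACTUAL_MARGINS.items()
                     if picks.get(s) == "R"), 1)

# A predictor who gave Obama everything except Florida and Virginia:
print(state_fail({"FL": "R", "VA": "R"}))  # prints 3.9
```

The metric weights a miss by how badly the predictor misjudged the state: calling Florida wrong (decided by less than a point) costs far less than calling Minnesota wrong.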
Most of the leaders are people who simply aggregated polling data, or who otherwise made their predictions mainly on the basis of polls. While Markos Moulitsas didn't operate a poll aggregator, he relied heavily on polls in making his predictions, and his colleagues at Daily Kos Elections posted every poll that came out during the election. The next few (2-6) are all poll aggregators. In general, accuracy of prediction tracks the extent to which one based predictions on publicly available polling data. I'll thank my friends Kenny Easwaran, Nicholas Beaudrot, and Tony Vila for helping me work out the ranking methodology and draw philosophically interesting conclusions from the results. I'm going to use the hackneyed 'winners and losers' format often chosen by people who are trying to write interesting things on the internet.
Winners
Psephology -- It's the discipline of predicting election results, and it has emerged as a successful special science. The term comes from the Greek 'psephos', meaning pebble, which the Greeks used in voting procedures to avoid the problem of hanging chads. Its methodology is one of aggregating the polls. Good special sciences generate predictions that can be characterized outside their own vocabulary, and psephology can tell you with a high degree of reliability which color various regions of a map will be on Nov. 7, and which human will be sleeping in the White House on January 20th of the next year.
Psychology -- When a special science wins, it's also a win for other special sciences that figure in its conceptual foundations. The methodology of psephology is grounded in psychology. When you ask people whom they intend to vote for, you're asking them to report their mental states, and these reports help us predict stuff in the way that reports of causally robust entities do. I like Jerry Fodor's view in the first chapter of Psychosemantics that intentional-state psychology is an excellent predictive science. Maybe it's also good for warm and fuzzy stuff like helping us appreciate each other as rational beings to whom norms apply, but its ability to support good psephology shows that it's not just that.
American Elections -- That we can predict election outcomes by aggregating poll results is a sign that election procedures aren't a total disaster. If elections and polls differ significantly, it could mean that something has gone wrong with the vote counting. American election procedures have serious problems, most notably Republican voter suppression efforts against minorities, and the horribly long time that it takes many voters in mostly-black precincts to vote. (I was an election observer in Detroit back in 2004, where many people in a nearly all-black precinct were waiting in lines for 3 hours to vote.) But democratic procedures seem to be at least minimally functional, as the vote counts more or less match what you get by asking people how they'll vote in the days before and adding up the answers.
Numbers --
Losers
Wishful thinking -- The bottom twelve predictors, from Dean Chambers on down, are all Republicans who didn't rely in any significant way on poll aggregation. Given the way their predictions correlated with their preferences, it's hard to think that wishful thinking wasn't a big factor in how they predicted the results. An interesting difference between the predictors was the way they did their wishful thinking. Chambers, whose methodology involved the highly unreliable technique of 'unskewing' the polls to correct for a perceived undersampling of Republicans, was still in touch with enough data to pick one of the least improbable paths to Republican victory. SE Cupp is also an interesting figure here, as she picked Obama to win while getting states massively wrong. Maybe she made the right call based on the national popular numbers, so her problem wasn't wishful thinking, but inattention to state-by-state data? In any event, the people at the bottom of the chart were pretty deep in the wishful tank. Their predictions don't even make sense as attempts to keep their supporters from despairing. A very narrow Romney win prediction would motivate better than an easy victory. While Michael Barone, George Will, Steve Forbes, and Dick Morris are on the A-list of celebrity political analysts, they're on the F-list as utterly incompetent analysts of measurable political events.
Republicans -- In principle, building a good poll aggregator before the election and relying on it is the sort of thing that could lead people of any partisan allegiance to the right answer. RealClearPolitics is the only major Republican-run aggregator I know of, and it did okay. But Republicans didn't trust its numbers enough to join it in accurately predicting defeat. Amazingly, this went up to the highest levels. As far as we can tell, the Romney campaign itself went into election night thinking that victory was likely. There are many different levels on which to understand this phenomenon, but one of the most telling is that by piling scorn on the sorts of people who carefully analyze data, they've managed to lose touch with data in general. A party that rejects smart data-analyzers so that it can maintain its interest groups' ill-grounded views on climate change, evolution, budgeting, and foreign policy won't be populated by the sorts of people who can figure out who will win the election, or who can determine the truth about other more important things.