Tarnished Silver: Assessing the new king of stats

Nate Silver’s attackers don’t know what they’re talking about. (Nor do his defenders)

by Colby Cosh

The whole world is suddenly talking about election pundit Nate Silver, and as a longtime heckler of Silver I find myself at a bit of a loss. These days, Silver is saying all the right things about statistical methodology and epistemological humility; he has written what looks like a very solid popular book about statistical forecasting; he has copped to being somewhat uncomfortable with his status as an all-seeing political guru, which tends to defuse efforts to make a nickname like “Mr. Overrated” stick; and he has, by challenging a blowhard to a cash bet, also damaged one of my major criticisms of his probabilistic presidential-election forecasts. That last move even earned Silver some prissy, ill-founded criticism from the public editor of the New York Times, which could hardly be better calculated to make me appreciate the man more.

The situation is that many of Nate Silver’s attackers don’t really know what the hell they are talking about. Unfortunately, this gives them something in common with many of Nate Silver’s defenders, who greet any objection to his standing or methods with cries of “Are you against SCIENCE? Are you against MAAATH?” If science and math are things you do appreciate and favour, I would ask you to resist the temptation to embody them in some particular person. Silver has had enough embarrassing faceplants in his life as an analyst to make this obvious.

But, then, the defence proffered by the Silverbacks is generally a bit circular: if you challenge Silver’s method they shout about his record, and if you challenge his record they fall back on “Science is always provisional! It proceeds by guesswork and trial-and-error!” The result is that it doesn’t matter how far or how often wrong Silver has actually been—or whether he adds any meaningful information to the public stockpile when he does get things right. He can’t possibly lose any argument, because his heart appears to be in the right place and he talks a good game.

Both those things count. Silver is a terrific advocate for statistical literacy. But it is curious how often he seems to have failed upward almost inadvertently. Even this magazine’s coverage of Silver mentions the means by which he first gained public notice: his ostensibly successful background as a forecaster for the Baseball Prospectus website and publishing house.

Silver built a system for projecting future player performance called PECOTA—a glutinous mass of Excel formulas that claimed to offer the best possible guess as to how, say, Adam Dunn will hit next year. PECOTA, whose contents were proprietary and secret and which was a major selling point for BPro, quickly became an industry standard for bettors and fantasy-baseball players because of its claimed empirical basis. Unlike other projection systems, it would specifically compare Adam Dunn (and every other player) to similar players in the past who had been at the same age and had roughly the same statistical profile.
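
For flavour, the comparables idea is simple enough to sketch in a few lines of Python. Everything below (the stats chosen, the distance measure, the players) is invented for illustration; PECOTA’s actual similarity machinery was far more elaborate.

    import math

    # Toy comparables finder: rank historical players by similarity to a
    # target's statistical profile at the same age. All data are fabricated.

    def similarity(a, b):
        """Euclidean distance over a few rate stats; smaller = more similar."""
        return math.dist(a, b)

    # (batting average, HR rate, walk rate) at age 27 -- invented profiles
    history = {
        "Player A": (0.280, 0.050, 0.120),
        "Player B": (0.250, 0.070, 0.080),
        "Player C": (0.305, 0.020, 0.100),
    }
    target = (0.275, 0.055, 0.110)

    comps = sorted(history, key=lambda name: similarity(history[name], target))
    print("closest comparables:", comps[:2])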

For most players in most years, Silver’s PECOTA worked pretty well. But the world of baseball research, like the world of political psephology, does have its cranky internet termites. They pointed out that PECOTA seemed to blunder when presented with unique players who lack historical comparators, particularly singles-hitting Japanese weirdo Ichiro Suzuki. More importantly, PECOTA produced reasonable predictions, but they were only marginally better than those generated by extremely simple models anyone could build. The baseball analyst known as “Tom Tango” (a mystery man I once profiled for Maclean’s, if you can call it a profile) created a baseline for projection systems that he named the “Marcels” after the monkey on the TV show Friends—the idea being that you must beat the Marcels, year-in and year-out, to prove you actually know more than a monkey. PECOTA didn’t offer much of an upgrade on the Marcels—sometimes none at all.
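
If you have never seen the Marcels, the whole method fits on a napkin: weight the last three seasons 5/4/3, regress toward the league mean, and nudge for age. A rough sketch in Python (simplified from Tango’s published description; the real Marcels also project playing time, and the league average here is an assumption):

    # Marcel-style "monkey" baseline, simplified. The league average and the
    # regression ballast are assumed round numbers for illustration.

    LEAGUE_AVG = 0.265          # assumed league batting average
    YEAR_WEIGHTS = (5, 4, 3)    # most recent season weighted heaviest
    BALLAST_AB = 1200           # at-bats of league-average "ballast"

    def marcel_avg(seasons, age):
        """seasons: [(hits, at_bats), ...], most recent season first."""
        w_hits = sum(w * h for w, (h, ab) in zip(YEAR_WEIGHTS, seasons))
        w_ab = sum(w * ab for w, (h, ab) in zip(YEAR_WEIGHTS, seasons))
        # Regress toward the mean by mixing in league-average at-bats.
        est = (w_hits + LEAGUE_AVG * BALLAST_AB) / (w_ab + BALLAST_AB)
        # Crude age adjustment: improvement before 29, decline after.
        factor = 0.006 if age < 29 else 0.003
        return est * (1 + (29 - age) * factor)

    # Three seasons of (hits, at_bats), most recent first:
    print(round(marcel_avg([(160, 550), (150, 560), (140, 540)], age=27), 3))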

PECOTA came under added scrutiny in 2009, when it offered an outrageously high forecast—one that was derided immediately, even as people waited in fear and curiosity to see if it would pan out—for Baltimore Orioles rookie catcher Matt Wieters. Wieters did have a decent first year, but he has not, as PECOTA implied he would, rolled over the American League like the Kwantung Army sweeping Manchuria. By the time of the Wieters Affair, Silver had departed Baseball Prospectus for psephological godhood, ultimately leaving his proprietary model behind in the hands of a friendly skeptic, Colin Wyers, who was hired by BPro. In a series of 2010 posts by Wyers and others called “Reintroducing PECOTA”—though it could reasonably have been entitled “Why We Have To Bulldoze This Pigsty And Rebuild It From Scratch”—one can read between the lines. Or, hell, just read the lines.

Behind the scenes, the PECOTA process has always been like Von Hayes: large, complex, and full of creaky interactions and pinch points… The numbers crunching for PECOTA ended up taking weeks upon weeks every year, making for a frustrating delay for both authors of the Baseball Prospectus annual and fantasy baseball players nationwide. Bottlenecks where an individual was working furiously on one part of the process while everyone else was stuck waiting for them were not uncommon. To make matters worse, we were dealing with multiple sets of numbers.

…Like a Bizarro-world subway system where texting while drunk is mandatory for on-duty drivers, there were many possible points of derailment, and diagnosing problems across a set of busy people in different time zones often took longer than it should have. But we plowed along with the system with few changes despite its obvious drawbacks; Nate knew the ins and outs of it, in the end it produced results, and rebuilding the thing sensibly would be a huge undertaking. We knew that we weren’t adequately prepared in the event that Nate got hit by a bus, but such is the plight of the small partnership.

…As the season progressed, we had some of our top men—not in the Raiders of the Lost Ark meaning of the term—look at the spreadsheet to see how we could wring the intellectual property out of it and chuck what was left. But in addition to the copious lack of documentation, the measurables from the latest version of the spreadsheet I’ve got include nice round numbers like 26 worksheets, 532 variables, and a 103 MB file size. The file takes two and a half minutes to open on this computer, a fairly modern laptop. The file takes 30 seconds to close on this computer. …We’ve continued to push out PECOTA updates throughout the 2010 season, but we haven’t been happy with their presentation or documentation, and it’s become clear to everyone that it’s time to fix the problem once and for all.

For the record, the Wieters Bug turned out to be a problem highly specific to Wieters; in Silver’s “copiously undocumented” rat’s nest of a model, there was a blip in the coefficients for the two different minor leagues in which Wieters had played in 2008, and BPro did not have time to ransack the spreadsheets looking for the possible error. The Ichiro Problem, by contrast, is intractable by ordinary statistical means; there are just a few players who are so unusual that a forecaster is as well off, or better off, falling back on intuition and first-principles reasoning. (Unless, that is, he has better data. Today’s PECOTA is able to break batting average into finer-grained statistical components in the hope of detecting Ichiros more perceptively.)

If the history of Silver’s PECOTA is new to you, and you’re shocked by brutal phrases like “wring the intellectual property out of it and chuck what was left”, you should now have the sense to look slightly askance at the New PECOTA, i.e., Silver’s presidential-election model. When it comes to prestige, it stands about where PECOTA was in 2006. Like PECOTA, it has a plethora of vulnerable moving parts. Like PECOTA, it is proprietary and irreproducible. That last feature makes it unwise to use Silver’s model as a straw stand-in for “science”, as if the model had been fully specified in a peer-reviewed journal.

Silver has said a lot about the model’s theoretical underpinnings, and what he has said is all ostensibly convincing. The polling numbers he uses as inputs are available for scrutiny, if (but only if) you’re on his list of pollsters. The weights he assigns to various polling firms, and the generating model for those weights, are public. But that still leaves most of the model somewhat obscure, and without a long series of tests—i.e., U.S. elections—we don’t really know that Nate is not pulling the numbers out of the mathematical equivalent of a goat’s bum.
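
The public parts, at least, are easy to imitate. A toy weighted polling average in the spirit of what Silver describes looks like the sketch below, with the caveat that the recency decay and house ratings are invented here and the actual formulas are the proprietary bit.

    # Illustrative poll-averaging sketch. The 0.9 daily decay and the house
    # ratings are invented; nothing here is Silver's actual weighting scheme.

    polls = [  # (pollster, days_old, obama_pct, romney_pct)
        ("Pollster A", 1, 49.0, 47.0),
        ("Pollster B", 3, 48.0, 48.5),
        ("Pollster C", 7, 50.0, 46.0),
    ]
    rating = {"Pollster A": 1.0, "Pollster B": 0.8, "Pollster C": 0.6}

    def weighted_margin(polls):
        num = den = 0.0
        for name, days_old, dem, rep in polls:
            w = rating[name] * 0.9 ** days_old   # downweight older polls
            num += w * (dem - rep)
            den += w
        return num / den

    print(f"weighted Obama margin: {weighted_margin(polls):+.1f} pts")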

Unfortunately, the most useful practical tests must necessarily come by means of structurally unusual presidential elections. The one scheduled for Tuesday won’t tell us much, since Silver gives both major-party candidates a reasonable chance of victory and there is no Ross Perot-type third-party gunslinger or other foreseeable anomaly to put desirable stress on his model. Silver defended his probabilistic estimate of the horserace this week by pointing out that other estimates, some based on simpler models and some based on betting markets, largely agree with his.

This is true, and it leaves us with only the question of what information Silver’s model may actually be adding to the field of alternatives. The answer could conceivably be “Less than none”, if his model (or his style of model-building) is inherently prone to getting the easy calls right and blowing up completely in the face of more difficult ones. (Taraji P. Henson Alert!) It is worth pointing out that a couple of statisticians have given us a potential presidential equivalent of the Marcels—a super-simple model that nailed the electoral vote the last two times (and that actually is fully specified).

It is also worth pointing out that Silver built a forecasting model for the 2010 UK election, which did turn out to be structurally unusual because of the strong Lib Dem/Nick Clegg performance. Silver got into squabbles with British analysts whose models were too simple for his liking, and the whole affair was an exemplar of what Silver’s biggest fans imagine his role to be: the empiricist hard man, crashing in on the psephological old boys’ club and delivering two-fisted blasts of rugged science. It did not go well in the end, as his site’s liveblog of the returns records:

10:00 PM (BST). BBC exit poll predicts Conservatives 307, Labour 255, LibDems 59.

10:01 PM (BST). That would actually be a DROP for Lib Dems from the last election.

10:02 PM (BST). BBC nerd says: “The exit polls are based on uniform behavior”, a.k.a. uniform swing. So we haven’t really learned anything about whether uniform swing is the right approach; it’s baked into the projection.

10:07 PM (BST). We would obviously project a more favorable result than just 307 seats for Conservatives on those numbers. Calculating now.

10:11 PM (BST). If the exit polls are right but the seat projections are based on uniform swing, we would show a Conservative majority on those numbers.

10:13 PM (BST). Here is what our model would project… [Cons 341, Lab 219, Lib Dem 62]

The final result? Conservatives 306, Labour 258, Liberal Democrats 57. The BBC’s projection from exit polls, using simple uniform-swing assumptions to forecast the outcome of a very wrinkly three-sided race, was so accurate as to be almost suspicious. And how was Silver’s performance after being basically given the national vote shares for the parties? Perhaps it’s best to draw the veil of charity over that.

Which, in fact, seems to be what has happened. Lucky thing for Silver’s reputation!—but then, he has always been lucky.

  1. Appreciate the hard work that went into documenting and linking all this. I think both sides would do well to understand that mathematical models are not pure math, and thereby not pure science. They admit of degree, and because of this, it’s foolish to defend ANY model as if it has the rigor of geometry or calculus.

    • Sure. I agree 100%. But then again, consider what the other option is: pundits with gut feelings. Consider also the success stats have enjoyed in sports — and not for nothing.

      Simulation models are not “hard science”, but frankly Silver himself says that so many times in his blog and in his book. I agree that people who want yes-or-no, perfect-or-crap binary decisions are being stupid if they look at the model like this. I wish people could discuss things in more nuanced ways. What models like Silver’s are, in fact (IMHO), is an improvement over the old “pundit’s gut feeling” forecasting paradigm.

    • math is not pure. See Goedel’s Theorem

    • Where has Silver asserted that what he does is pure science?

  2. This is a much better criticism than what one usually sees against Silver in the press. I am glad to see a critic who at least did do his homework before criticizing someone — hey, can I hope that future critics of Silver (or, for that matter, of anyone else) also take the time to do the kind of research you did? It certainly would improve the discussion.

    Now, basically, here is my beef with what you’re saying: it’s the old criticism of any statistical model, namely, that it isn’t perfect. Anyone with some knowledge of math and statistics and who nevertheless expects Silver’s (or anyone else’s, for that matter) mathematical models to be perfect is not thinking correctly.

    As far as I can see, Silver’s is the best mathematical model available, given its record. PECOTA was, as you point out, quite successful; it wasn’t perfect, but it was quite successful, and the ways in which it wasn’t perfect can be improved. Now, even if it had been a complete failure — one learns from one’s failures, and the next model (for political forecasting) can be better. It stands or falls with its results, not with the results of its predecessors.

    Nobody, and no mathematical model, is an oracle. But Silver’s arguments in favor of his model are quite good: it basically agrees with other models (all say Obama is the favorite), and it basically already takes into account all factors that its critics have mentioned so far. This doesn’t guarantee anything, of course, but it’s also not nothing.

    Statistics have had a lot of success in sports, and for a reason: they actually do convey relevant information. People have learned to stop thinking of statistics in sports as oracle predictions and to start seeing them as descriptions of what may or may not happen, and with what degree of likelihood. Why not do the same for political science?

    The underlying problem is not Silver’s model (which, like all models, probably can be improved). The underlying problem is that people who prefer to trust their guts (note how hard it has been for them to even learn to live and derive some useful information from polls) don’t like the stats people.

    By all means criticize whatever you think is (statistically) wrong with Nate’s model. But don’t forget that many of those who will applaud such criticism are not at all following your level of research — they simply don’t like the idea that there may be something better than one’s guts to turn to when trying to understand what is going on in a disputed election.

    • I enjoy Nate’s work but was PECOTA really much better than Marcel, which is essentially a vanilla regression? I don’t think so… certainly not enough better to be worth paying for, IMO. Nate’s electoral model makes sense, however, and so long as the data is truly representative of each of the states’ populations then I’d expect his prognostication to be successful.

  3. Or, if I may summarize the above in one simple reaction: most of the anti-Silver criticism has little or nothing to do with possible problems in his model, and more to do with not liking Silver, for whatever reason. It’s not that people want to see better forecasting models; it’s that people want to topple Silver. And that is, to me, rather sad. Even if Silver turns out to be ‘topplable’, even if he turns out to have grossly misdesigned his statistical model of this election.

    • If he’d disclose his methods it would be easier to have a more substantive discussion.

  4. It’s one thing to critique, it’s another to do. Nate Silver might not be accurate, but I don’t see any of his critics coming out with another model. That’s because they are either too lazy, too ignorant, or know he’s right but don’t want to admit it because it would hurt their ad revenue.

    • Nate Silver gave the GOP 1-in-3 chances of winning 60 seats in 2010… and 1-in-6 chances of winning 70 seats… they won 67 seats. Why should we rely on his modelling?

      • You do not seem to understand the concept of probability at all (i.e. it is something different than a prediction).

        • So we shouldn’t consider his track record?

          • You can compare track records of predictions. You should not compare a prediction to a given probability.

            Let’s say you have a baseball player with a .250 batting average. Simplified: if you predict that he either will or won’t get a hit in his next at-bat, your prediction will be wrong with a probability of 75% or 25% respectively.

            If you roll a die, the probability of rolling any given side is 1-in-6. Saying there is a 5-in-6 chance that a specific side will not be rolled the next time, and then having that side get rolled, doesn’t make anyone a bad predictor, because there never was a prediction.

            Please read up on the difference, start e.g. here:

            http://en.wikipedia.org/wiki/Probability
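
            (A ten-line simulation makes the distinction concrete: a forecaster who says “5-in-6 that a given side won’t come up” sees that claim ‘fail’ exactly as often as an honest die demands, without ever having made a prediction.)

                import random

                # Sketch of the die example above: the 1-in-6 "misses" are
                # exactly what the stated probability promises.
                random.seed(0)
                trials = 60_000
                sixes = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
                print(f"claimed 1-in-6 = {1/6:.3f}; observed {sixes/trials:.3f}")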

        • As I recall, Silver pegged the GOP picking up 53 seats in 2010. Considering that unlike the Pres. election, there was not a lot of polling done at the House district level, as well as the closeness of many of the seats that flipped GOP, his model worked pretty well at establishing the probabilities.

      • It was a 63 seat pickup, not 67. Silver’s median prediction was 54-55 seats, with the realistic probabilities for more cited above. Forecasters are often reluctant to predict extremes. 54 seats equaled the largest single-year pickup since 1948, and 55 is one more than that.

        So, your main critique is that he predicted the largest landslide in over 60 years and his median underestimated that extreme by 8.5 seats? That’s not much of a critique.

  5. The problem with his simulations is the underlying assumptions in the input models. The state polls cannot be as “pure” as baseball statistics.

    What does an economist do if his ship sinks? He assumes he’s got a life jacket.

    • This shows that you do not understand statistics or the models that Nate uses. His model takes into account that state polls are not “pure.” His analysis looks at the polls and calculates what would happen if they were wrong: he looks at each state poll and runs scenarios considering each poll being wrong in both directions. After looking at hundreds of different possible combinations, he can see how many times each candidate wins; that is how he calculates the percentage chance each candidate has of winning.
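
      (The process being described is a garden-variety Monte Carlo, easy to sketch. In the toy version below the state win probabilities and “safe” electoral votes are invented, and a real model would also let the state errors move together rather than independently.)

          import random

          # Bare-bones Monte Carlo of the process described above; all the
          # state win probabilities and safe electoral votes are invented.
          states = {  # state: (electoral_votes, P(Obama wins state))
              "OH": (18, 0.75), "FL": (29, 0.50), "VA": (13, 0.65),
              "CO": (9, 0.60), "NV": (6, 0.70),
          }
          SAFE_OBAMA = 237  # assumed locked-up electoral votes

          def simulate(n=100_000, seed=1):
              random.seed(seed)
              wins = 0
              for _ in range(n):
                  ev = SAFE_OBAMA + sum(v for v, p in states.values()
                                        if random.random() < p)
                  wins += ev >= 270
              return wins / n

          print(f"P(Obama >= 270 EV) ~ {simulate():.2f}")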

  6. Now is not the time for Nate Silver, with this terrible economy. If he turns out to be correct, it could throw a hundred people out of work whose only skill is going on TV and talking about “what the American people want.”

    • “it could throw a hundred people out of work …”
      What about the other 2900? They get to keep their jobs? ;-)

  7. Most of the criticism of Silver is from GOPpers who want him to be wrong.

    • A few people have pointed this out, and I’m not clear why. 1 in 6 is not zero. A person would have to examine his record as a whole to see how often events he gave a low probability to still happened. The problem with probabilistic calls is that it’s difficult to be proven wrong, per se.

      • To be proven wrong one needs to have done a large number of predictions. A large sample is required. Then you can compare.

        But Silver was given a lot of credit based on a small sample. In fact, you can even look at that small sample and spot a number of low probability events happening, such as his 2010 US elections and his UK election predictions. So far, the numbers suggest he is not accurate.

        When we finally have a large sample to evaluate, we will see, but on the results so far, I am skeptical he is any better than most predictors.
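
        (For what it’s worth, the standard tool for that comparison is a proper scoring rule such as the Brier score: the mean squared gap between stated probabilities and outcomes, lower being better. A sketch, with invented records:)

            # Brier score: compare probabilistic track records. Lower is
            # better; a perpetual 50/50 hedger scores 0.25. Records invented.
            def brier(forecasts):
                """forecasts: [(stated_probability, outcome_0_or_1), ...]"""
                return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

            forecaster = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1)]
            monkey = [(0.5, 1), (0.5, 1), (0.5, 0), (0.5, 1)]
            print(f"forecaster: {brier(forecaster):.3f}  monkey: {brier(monkey):.3f}")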

      • Nobody insists that Silver is either 100% right or 100% wrong. They just assign a probability to his knowing what he is doing, or being no better than a monkey jumping on a typewriter — just the way he assigns probabilities to outcomes.

        If in even one single case, Silver assigns a low probability to an event that actually happens, then one can quite logically assign a significantly higher probability to his being no better than the monkey. And that, in essence, is all people are really saying.

    • It’s worth noting that house projections are an inherently different, and arguably much more difficult undertaking than presidential projections. Polling is less frequent and robust for individual congressional districts than it is for competitive states, so there is likely higher variability in the projections.

  8. Forecasting, in any endeavor, comes in high on the failure ratings. Computer models are just not good at doing random. A computer can’t guess how a voter is going to vote, what products a consumer is likely to buy, the weather or the economy. It has to rely on past history, trends and current inputs. If any of these data sets are missing a vital piece of information, the model is useless. Why people continue to put faith in computer models, I really don’t know.

  9. Presumably, Silver’s models are only as good as the info he puts into them, and Republicans are going nuts because many polls are using sketchy assumptions about turnout and voter enthusiasm. Normally I ignore partisans freaking out about polls, but the criticism this year seems different. There are lots of complaints this year that some pollsters are trying to game the system for Obama. Republicans have all the enthusiasm, while polls claim Dems are going to turn out in bigger numbers than 2008, which seems unlikely. The presidential vote is going to be very interesting indeed if Repubs turn out in greater numbers than pollsters are predicting.

    The Corner Nov 3:
    In Ohio, Marist/NBC/WSJ finds Obama up by six, 51–45, unchanged from just after the first debate. The Democratic party-ID advantage in this poll was nine points. Nine. By my calculations, Democrats held a five-point advantage in party ID in the wave election of 2008, following a five-point advantage for Republicans in 2004. Essentially Marist is finding that Democrats are not only going to match their turnout advantage from 2008, but they are actually going to almost double it.

    • I find that “turnout sample” cry to be overblown. Whoever answers the phone answers the phone, and how they happen to be voting is how they happen to be voting. Do you know of any pollsters in Canada that try to match their polls to past voter turnout, or try to adjust it to past turnout by ID’d supporters?

      • I have never gotten a clear answer about this and it seems to be the million dollar question, if one is going to argue with state polls. Pollsters aren’t supposed to weight or adjust party ID, as I understand it. However, they do adjust for demographic factors that indirectly change party ID. Also, the likely voter screen on many of these state polls is not working well. We know that because they often have 85-90% of respondents making it through the screen and counted as likely. We know that we won’t see 90% turnout. Never do, never will.

        • Generally they don’t weight for party ID; they take that as a measurable, just like the choice of candidate. They weight by sex, age, ethnicity, and region — but while those do affect voting tendencies, they are hard numbers and can’t really be faked.

          There are two places it gets tricky. The first, which you’ve put your finger on, is the likely voter screen. You let 80% of your registered-voter respondents through, and you are guaranteed to have an abnormally large number of Democrats, because people who tend not to vote — young, very old, unemployed, criminal record, illegal alien — are very likely to vote Democrat if they vote at all. Now if you’re a small operation doing a poll in a swing state without a lot of financial backing, the pressure to loosen up your likely voter screen is enormous, because otherwise your poll is going to cost a lot more money, since you have to dial a lot more numbers to get your final likely voter tally up.

          The second place is a new effect, and we don’t yet know how to deal with it at all. That’s the fact that response rate to polls has gone off a cliff. Less than 1 of every 10 persons reached agrees to complete the poll. If you’re a pollster, you’d love to think that whether you agree to answer the poll or not is COMPLETELY UNCORRELATED with your political tendencies. The rest of us can be forgiven for thinking this is very, very unlikely. We can imagine all kinds of theories about which way it would go — personally, my favorite is that people for whom politics is a religion and voting an act of “revenge” (to borrow the President’s phraseology here), i.e. Democrats, are more likely to enthusiastically participate in polls. People who already have a religion, and who consider elections a dangerous nuisance, a time to be on guard lest some weasel sneak into power and meddle in our lives, i.e. Republicans, are more likely to be needing to get dinner on the table or wash the linens when a pollster calls.

          We may note, parenthetically, that exit polls are almost always more Democrat than final results. Since we’re surveying people who have ACTUALLY JUST VOTED — so there is no question at all about them being “likely” voters — the only way this can be true is if those who have just voted Democrat are more eager to share that information with a pollster than are people who’ve just voted Republican. (There is possibly an additional effect here, in that media and polling operators — who tend to be young people who’d consider a temp job calling strangers — are overwhelmingly Democrats. Nice people might well hesitate to tell them something that will be disappointing, so if they’ve just voted Republican, they may consider it a kindness to decline to talk.)

          Be that as it may, God’s own truth is that we don’t know how to correct for the nonresponse rate. We know these things are important: there are very famous cases of a skewed response rate leading to laughable predictions, such as the famous Literary Digest poll of 1936 that predicted a broad Alf Landon win.

          • Really, really good comment. Thanks for posting it.

    • Party ID is nothing more than the voter’s state of mind when they are polled. It is not their registration status. There is good reason to believe that in this cycle, a good chunk of registered Republicans – tea partiers – are identifying as independents. So are voters on the right really being undersampled, as you say, or are they just identifying themselves differently?

      Continuing this line of thinking, there has been a big shift in the voting preferences of self-identifying independents, from widely favoring Obama to now favoring Romney. The size of this shift stretches credibility unless the composition of that self-identifying independent group has changed in some fundamental way. I speculate that it has. Again, I suspect that the tea-partiers who used to identify as republicans now identify as independents when polled.

      This would explain why it *appears* that the polling samples favor dems even though pollsters are not sampling or weighting according to party id.

      But hey, we’ll know in a couple of days if there’s a problem with pollsters’ sampling.

      • Nice try, won’t work. The problem is not that all the Rs have converted to Is, it’s that pollsters routinely predict that the number of Democrats turning out in 2012 will be even higher than it was in 2008. That is, frankly, nonsense on stilts, as anyone with an ounce of common sense can see.

        • Pollsters do not sample or weight by party id, so they are not “predicting” anything. That’s just how the self-identification of party status falls when you sample according to demographics.

          Moreover, whether you want to dismiss it or not, the truth is that party affiliation is fluid, and changes according to how much people like their candidate, trends such as the tea party movement, and so on. Individuals who don’t follow politics closely are especially prone to shifting their responses on the party-affiliation question. But hey, don’t take my word for it. Here’s a whole article on it from Gallup:

          http://pollingmatters.gallup.com/2012/09/the-recurring-and-misleading-focus-on.html

          Gallup, who has leaned more republican than any other pollster this cycle, comes right out and says that looking at party identification “is a faulty approach to evaluating a poll’s results. It is an attitudinal variable, not a stable population parameter.” Their words.

  10. I obviously read American right wing sites and many of them are hyping this model created by two profs. Election is going to be dueling models. Silver gets way more hosannas from left, and opprobrium from right, solely because he works for NY Times.

    University Of Colorado:

    An update to an election forecasting model announced by two University of Colorado professors in August continues to project that Mitt Romney will win the 2012 presidential election.

    According to their updated analysis, Romney is projected to receive 330 of the total 538 Electoral College votes. President Barack Obama is expected to receive 208 votes — down five votes from their initial prediction — and short of the 270 needed to win.

    The new forecast by political science professors Kenneth Bickers of CU-Boulder and Michael Berry of CU Denver is based on more recent economic data than their original Aug. 22 prediction. The model itself did not change.

    While many election forecast models are based on the popular vote, the model developed by Bickers and Berry is based on the Electoral College and is the only one of its type to include more than one state-level measure of economic conditions. They included economic data from all 50 states and the District of Columbia.

    http://www.colorado.edu/news/releases/2012/10/04/updated-election-forecasting-model-still-points-romney-win-university

    • You (and Colby) should be looking at the Princeton Model – they’re predicting an Obama 319-219 win and put the probability of that happening at between 98 and 99%.

      http://election.princeton.edu/

    • The Colorado study just looks at the economy, not polls of voters (real, likely or registered).

    • Oh dear, it’s you again Tony. This time you don’t sound like an idiot (as you did below), just trying really really hard to believe what you want to be true. Well, it wasn’t. The U of Co study from late August wasn’t close to being right and, well, Nate Silver was exactly right. Oops. What will you be looking at in 4 years?

  11. Nate Silver certainly isn’t the election forecasting oracle some people make him out to be, and there are legitimate criticisms of the way he constructed his model and its value in the political discourse. But I find Nate Silver’s style of punditry– data-driven, somewhat transparent, and full of caveats about the weakness of a statistical model– much more palatable than much of the rest of American political commentary.

    Compared to the other prominent voices in the political media, there’s something incredibly appealing about Nate Silver’s honesty in his analysis. You’ve got guys like Chuck Todd, who make vague comments about momentum based on “inside information” that can’t be disclosed. You’ve got guys like Dick Morris, who often gets even the easy calls comically wrong– among his many failures, he famously predicted in 2008 that Barack Obama would carry Arkansas, a state that John McCain won by 20 points and was never really contested by either campaign.

    People like that– very serious political professionals– never fail to find newspapers and cable channels willing to host their political “wisdom,” whereas Nate Silver faces enormous pushback over failures that are still an improvement over what passes for analysis in American media. It’s a double standard I just don’t get.

    So, no, I don’t want to coronate King Nate, but I feel like much of the non-statistical criticism directed at him is misguided when he operates in a media space filled with so much worthless hot air.

    He was dramatically wrong about the UK elections. So what? He made an effort to analyze a campaign in some systematic way, and flaws were exposed in that system. There’s a lot more that can be learned from his failure, and a much bigger opportunity for future analysis to build from what he started, than anything else I’ve seen in political media. I call that a net positive.

    • I agree, and this is a big problem with Silver critics like the writer of this article. Silver gets things wrong sometimes? Fine. Does he do better than Dick Morris or the twits on CNN or the gossipmongers at Politico? No one’s going to write an article like this about Bill Kristol, are they?

  12. Colby – you fail to answer the most basic question: Why should we care how accurate/inaccurate pre-election forecasts are?

  13. There is an old saying in analytics. Garbage in. Garbage out.

    Baseball player performance data is rigorously recorded and maintained.

    The polling data is not rigorous – to put it mildly.

    If Silver is using voter turnout data models from 2008, which is embedded in most polls, without making adjustments, then just toss out his analysis.

    • This is true. Projecting an event that happens every 4 years based on previous every 4 years events with only polling data in hand is a mug’s game.
      Take, for instance, Silver’s reliance on the past track record of pollsters. Remember when Nik Nanos nailed two elections in a row almost perfectly and became the darling of Canadian pollsters? He did not change his polls’ MoE to +/-0.1%, and in subsequent elections has not stood out from the pack. I’m not dogging Nanos, but he was not any more right than he would have been had he been 1.5% off for each party. Two in a row might sound like a trend, but in Silver’s previous life he analyzed a sport in which 2,430 games are played each year. And baseball still manages to baffle analysts regularly.

  14. The US system is like ours in that the popular vote overall may not translate into seats, and therefore the most popular party may not form the government. Nate Silver relies on state polls over national polls. Sometimes polls can actually affect the results. When Nate Silver reports that Obama has an 85% chance of success, that may cause Democrats to become complacent, and if Obama doesn’t get out his vote he can lose some swing states that are too close to call. So Mr. Silver is a political weatherman. How often are the weathermen wrong when forecasting the weather two days out?

    But I digress; Mr. Cosh wouldn’t be writing this column if the weather prediction was going to be Romney on Tuesday.

    • It can also make Republicans dejected and not go to the polls.

      Why bother when Silver says it’s a lock?

  15. In fact, Silver’s April 29, 2010 prediction for the British election had the LibDems getting 120 seats, more than twice as many as they actually got (57). That’s about as wrong as you can get.
    Silver isn’t more accurate than other predictors; he simply has a bigger megaphone (the New York Times) to publicize his predictions.

  16. What odds is Silver offering? Should be 3/4 to 1 on a Romney win, and I would take those odds. Romney is in Pennsylvania today, which is either cheeky or a sign that something is going on with the electorate that polls aren’t telling us.

    Obama is going to be a one-term pygmy president and will be in the mix for worst-president-ever discussions. Only a socialist could make a guy who wears magic underwear seem like the sensible one.

    • More to the point, Bill Clinton is making four appearances in PA today. And, Obama is in NH. If Obama is up as far as Silver claims, what is the O campaign’s strategy here?

      • He is up due to the Electoral College. Otherwise, it’s pretty much a tie. I’m sure he would prefer to enter a second term not having lost the popular vote.

    • Gee, Tony. You’re either really cheeky or just plain stupid. I’ll stick with the latter.

  17. As one of the forecasters who took on Nate Silver in the 2010 British General Election “nerdfight” mentioned above, I have mixed feelings about the current Nate debate. On the one hand, I agree that Nate’s opacity and the excessive complexity of his models are a bad thing. Clarity and accountability are better.

    On the other hand, I think election coverage is much improved by people like Nate being involved. His model isn’t very scientific, in the sense of being clear and simple, but it is at least a wholly data driven analysis, and as such is a vast improvement over the predominant media narrative which is built from hunches, “savvy”, campaign stop impressions and unreflective reporting of campaign managers’ spin. Not to mention frequently stunning innumeracy and selectivity in the reporting of polls. The emergence of Silver and others like him is likely to improve political reporting, overall.

    That said, there are a couple of things about his approach that are clearly unscientific. His models are opaque, often seem to involve subjective judgement, and are excessively complex. They can seem to resemble statistical Rube Goldberg devices, where lots of fancy bells and whistles are added whose value is questionable. We never know for sure because Nate keeps the model’s details “proprietary”, a practice which is clearly unscientific. We take a low view of doctors who claim to have a new cure for diseases but say the details are “proprietary”.

    Hiding your formulas is bad practice, and it is in my view a little hypocritical of Silver to call out journalists for relying on subjective judgement when, for all we know, he is doing the same thing somewhere in the wiring of his model. In general, academics prefer simple and clear models, and provide full disclosure of how they work. These models can, and do, go wrong, but precisely because they are clear and simple we can figure out why they went wrong, and improve them on the next iteration. Which is how science is supposed to work.

    In short, I think Silver is a good thing for journalism, but it is misleading to call him a statistician or a scientist. He’s something else entirely: a data journalist. He’s a very bright guy to have spotted the gap in the market which opened up thanks to the easy availability of data, which mainstream journalists have no training or inclination to use. But the incentives facing the journalist are different to those of a scientist: he wants to be read and he wants to be right. He is not so bothered about showing you what he’s doing or why.

    The problem, I think, is more with the acolytes of Silver who treat him as some sort of infallible super-nerd. He is not, but he sure looks super-smart next to David Brooks or Dick Morris. In the land of the blind, the one-eyed man is king! What those who champion Silver should remember is that blind faith in numbers is not much better than blind scepticism of them. In that sense, it’s important that we have a more varied data journalism ecosystem. Which is why it is encouraging that people such as Simon Jackman (at HuffPost Pollster), Sam Wang (Princeton Election Consortium) and Drew Linzer (votamatic.org) have become involved this campaign. Plus of course the excellent old hands at pollster.com – Mark Blumenthal and Charles Franklin. High quality, but accessible, writing about political data and forecasting is valuable to everyone. Ignorant cheerleading of one opaque model, not so much.

    • It supports the cause of Obama and scientific socialism.

    • I agree that it’s not at all ideal for his formulas to be unknown, and also have a preference for trying to predict outcomes with as little complexity as possible.

      But with regard to the first issue, can you really blame him? He is making his living off of this stuff, so why would he want to give out his formulas for anyone to use?

    • I see a lot of the tacit defence of Silver’s work share the theme of: “Well at least it’s math. At least he’s using quantifiable data. At least it’s not some pundit’s gut feeling.” The arguments about whether Silver’s model is actually subjective or scientific aside, how do we actually know that that his approach is more accurate than a pundit’s gut feeling? If you were to take the top 100 political journalists and get them to predict the outcome of, say, the next 10 elections — could you be absolutely sure Silver’s model would out-perform them? I guess I’m proposing the human equivalent of a Marcels test.

  18. I wrote an article about Silver that dismissed his worst critics but acknowledged a few of the problems that you bring up:

    http://updates.deadspin.com/post/34780905169/nate-silvers-braying-idiot-detractors-show-that-being

    Though I don’t mention it in the piece, I’m very much a fan of Marcels and Tango’s work/critiques in general. Silver’s recent forays into sports, especially his March Madness predictions, showcase his greatest weakness as an analyst: complication standing in for sophistication. The layperson attributed the model’s relative success to his fiddling with injuries, suspensions, and other minor details, when in reality a much simpler model, like Ken Pomeroy’s, could have performed equally well (perhaps better).

    I see the problem in this whole debate, on both sides, as one of statistical illiteracy. I know my way around a hypothesis test, but that gives me the authority of a second-semester college freshman, not of a statistical savant. If Silver’s methodology is flawed, and in many areas it is, he’s blazed a trail for someone to beat him.

  19. It’s curious that a Canadian has written the most insightful article I’ve yet seen about Silver’s work. How does that happen?

    • “the gen x’ers” were well educated, well-medicated/doctored, and have less fear over hand gun wielding maniacs—hahaha -not really we have just as many guns, for huntin’ though….

      can’t write that this article is any less confusing than Nate Silver’s material…

      gen y? oh my gosh not that Cartesian Cogito (Sans Dualism) again

      “n” = sentence, “fn” = fol

      “dribble, dribble, he shoots he scores” / fn = ‘basketball reference’, and/or ‘a sports quote with sarcastic innuendo’, and/or, et al.

      • Yes, I see.

  20. TL/DR Nate Silver’s lucky

  21. One can certainly quibble with Nate’s model, which is true of any model designed to try to predict future events. But at issue here for me, and I suspect most Nate “defenders,” is the bigger picture.

    Which is more valuable in gauging the status of a race, the results of a statistical model like Nate’s, or the so-called expert opinions of partisan pundits like Joe Scarborough, who says stuff like the following?

    –”my gut tells me this is a tied race” (despite the large majority of swing state polls favoring Obama)
    –”my sense is that Romney has the momentum” (despite the polls actually moving slightly toward Obama at the time he’s saying that),
    –”no one really believes either candidate has more than a 50.1% chance of winning” (despite many intelligent, thoughtful people like Nate, Sam Wang at Princeton, and the group at Stanford doing extensive work on this and finding Obama’s chance to be above 80%)
    –and “anybody that thinks this race is anything but a tossup right now is such an ideologue, they’re jokes.”

    That’s what lies at the center of this “battle” between Nate and most of his critics. Most of those who are lashing out at him are not quibbling with the specifics of his model, most do not offer any sort of reasoned explanation as to why the model could be off, and most do not offer an alternative model that they think might be more accurate. Most do not criticize Nate’s model based on any sort of science or reason. Most, instead, appear to be criticizing Nate simply because his model is producing a result that is counter to the spin they’ve been hearing, or their “sense” of the race, or their personal preferences. *That’s* where the statements about some critics seeming anti-science and anti-math are coming from.

    So although I don’t have any qualms with what’s stated in this piece–I rather enjoyed reading it, actually, and found it to be quite informative–I think it misses the major issue that underlies this debate, and the major issue at the root of the highly publicized back-and-forth between Nate and Joe Scarborough. At issue is the use of Nate’s approach, in which he uses an analytical model based on concrete data in an attempt to capture the state of the race in an objective way, versus what appears to be the preferred approach of others, which is to have the debate center on subjective opinion and campaign spin.

  22. Interesting how many people complain about the mathematical models being “not perfect”.
    This makes me wonder: how many people here believe the mathematical models of climate science?

    • mathematical models are well “MODELS”, the map is not the territory. climate change is ‘obviously’ real, one requires no models to see which way the wind blows.

      i appreciate your comment, however, one must include natural, and very significant global warming issue factors, such as the volcanic trench, releasing tons & tons of methane – just off Greenland (something like this is “alleged” to have caused the Permian extinction) -though I am unfamiliar with the scientific data and material produced from peer-reviewed and published studies.

      it is really a mute point, because there are ‘climate change models’ that include this factor -they’re just a lot scarier than the ones that don’t, that’s the point.

      • “Mute” point, huh?

  23. I love this argument. Actually Colby, you could have eliminated the first 2,000 words and just left your conclusion. Oh how witty and cutting you are, Nate is just lucky. After you had just made the tortured case that his predictions are not very good, awful in fact.
    This has to be the apotheosis of hackdom. Congratulations sir!

  24. Silver is on Obama crack; he is either crazy or senile. I suspect the latter.

  25. Nate Silver may turn out to be the Jayson Blair of oddsmakers.

    In this article from The Daily Caller, a quant duplicates the outputs of Silver’s vaunted “The Model” using Microsoft Excel, a Monte Carlo plug-in downloaded off the internet, and publicly available state poll results.

    http://dailycaller.com/2012/11/01/is-nate-silvers-value-at-risk/

  26. I read that, “Only 9% of sampled households gave an answer to pollsters in 2012”.

    One should ask, “Who are these 9%?”

    Coincidentally, the IQ categories for Borderline and Extremely Low make up 8.9% of the population.

    I figure the other 0.1% must be very lonely persons with time to kill, or junior high school student pranksters, and ignore polls.

    • Then you have people like me who routinely lie to pollsters for fun. I assume I’m a rare event, though.

  27. The biggest challenge is that they are using polls that are skewed. Pew says that only 9% of its calls actually get a response, compared to over 38% in 2000. When polls show Democratic enthusiasm greater than the landslide of 2008, the polls need to be viewed with caution. Nate hates Rasmussen and discounts Rasmussen and Gallup, but they are closer in line with demographics than PPP, Quinnipiac and Marist.
    The difference this year with Silver’s predictions is that the inputs appear to be tainted. Obama may win, but it isn’t an 87% chance looking at the latest polling demographics.

  28. Did he do the Canadian elections which had the conservatives losing?

  29. I’ve spent some time crunching the numbers (number of major party candidates for president, number of winners of the election) and I predict that Nate Silver has a 50% chance of picking the winner.

    • And don’t forget, a 50% chance of either pleasing or pissing off the candidate with his predictions.

  30. As political models go Silver’s (from what we can see of it) is mildly innovative in that it provides a probabilistic prediction based on the polling snapshots.

    The conventional polling is just about the snapshot mapped onto the Electoral College. And, as such, there is very little it can do predictively. In effect, a poll is (to a greater or lesser degree) a picture of the state of the race at a particular point in time.

    Now there are tons of clever things you can do with snapshot polls if you want to spend the time. For example, you can come up with a rank for each poll relative to its past success. Or you can look for trends within a particular poll or how well it correlates with other polls. And, for fun, you can dump all that information into a big honking regression analysis with an artificial dichotomous dependent variable and push the big red button: boom, a probabilistic prediction. Fun the whole family can enjoy.
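
    (A toy version of that big red button, for the curious: logistic regression mapping a poll margin to a win probability. The training data are fabricated; the mechanism, not the coefficients, is the point.)

        import math, random

        # Fabricated training data: (poll_margin, outcome), where the "true"
        # result is the margin plus noise.
        random.seed(2)
        data = [(m, 1 if m + random.gauss(0, 3) > 0 else 0)
                for m in (random.uniform(-10, 10) for _ in range(200))]

        b0 = b1 = 0.0
        for _ in range(2000):                     # plain gradient ascent
            g0 = g1 = 0.0
            for m, y in data:
                p = 1 / (1 + math.exp(-(b0 + b1 * m)))
                g0 += y - p
                g1 += (y - p) * m
            b0 += 0.05 * g0 / len(data)
            b1 += 0.05 * g1 / len(data)

        margin = 2.0                              # a 2-point poll lead
        p = 1 / (1 + math.exp(-(b0 + b1 * margin)))
        print(f"P(win | +{margin} in polls) ~ {p:.2f}")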

    Does it predict electoral outcome?

    Realistically, we don’t have enough data points to say anything very robust on that score.

    A couple of hundred iterations and you might have a sample which you could analyze and come up with realistic error calculations.

    For now the description of Silver as a “data journalist” seems about right. Nothing wrong with that.

  31. Silver applies statistical methods appropriate for independent probability events. The problem with that is that voting behaviour is not the slightest bit independently randomized. If voting behaviour were random, then his methods would be accurate. There really are no accurate statistical methods available to model human voting behaviour.

    Additionally, his methods assume that incorporating many polls is better than one. The theory is that the flawed polls will be outnumbered by the more accurate polls, and that the law of large numbers will apply, reducing error. However, this is not true, because in the world of polling, most of the pollsters make the same mistakes. For instance, almost all polls in the past Canadian federal election understated Conservative support. So the aggregate was no more accurate than the individual polls.
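
    (That point is easy to demonstrate. In the sketch below (all numbers invented), the average of many polls converges not to the truth but to the shared bias:)

        import random, statistics

        # Averaging kills independent noise but not a shared house bias.
        random.seed(3)
        TRUE_MARGIN = 2.0    # the true lead, in points
        SHARED_BIAS = -1.5   # error every pollster makes in the same direction

        def poll_average(n_polls):
            return statistics.mean(TRUE_MARGIN + SHARED_BIAS + random.gauss(0, 3)
                                   for _ in range(n_polls))

        for n in (1, 10, 100, 10000):
            print(f"{n:5d} polls -> error {abs(poll_average(n) - TRUE_MARGIN):.2f} pts")
        # The error converges to |SHARED_BIAS| = 1.5 points, not to zero.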

  32. As you say, “Silver is a terrific advocate for statistical literacy.” It’s a shame that you don’t fully recognize the value of that talent… For many of us who can’t do the math but are interested in reading well-written articles on the polling data, his blog is much more fun than the alternatives. Not because it’s necessarily more precise, but because it makes for better reading. 538 helps me actually understand the polls, and a little bit of the math, and a little bit about the demographic makeup of the US. It’s great!

  33. Nate Silver = the Milli Vanilli of election analysts. Seems poor Silver was a one-hit wonder who was then found to be less talented than he appeared. That is, access to those 2008 Axelrod models was the equivalent of MV’s uncredited lead vocalists. Almost time to yank Silver’s analytical “Grammy Award”.

    • So what do you have to say now?

  34. Excel formulae? Those of us who actually work with stats on a regular basis know that Excel’s stats package is, er, interesting.

  35. I’ve seen this scenario before; uber geek builds unique software/engine/operating system/commodities predictor that makes the geek look like Houdini with predictions/results that no one else comes close to. (and neither does the home team seem able to comprehend the ins and outs of this “magic box” except for the uber geek.)

    Uber geek gets mucho lucrative offers and takes one leaving the home team to struggle on.

    Suddenly, it dawns on the home team that they can’t make it work. The results just don’t come out for them for some reason. “Hmm,” they say. “It’s a scientific process, isn’t it? Can’t get much more scientific than math. Right?”

    Turns out uber geek has been manually tweaking the inputs and outputs according to some inner intuition or third-eye chakra awareness and is now supplying this “ability” in his new post to his new “magic box”.

    Whether this is proof of ESP, or that the mind is incomprehensibly complex and can actually outperform silicon somehow, I don’t know. But I do know all gamblers who have a winning streak eventually have a losing streak.

    I think Mr. Silver is due for a negative adjustment to his reputation.

    • lol

  36. So, Nate correctly predicted that the UK Conservatives would win by a substantial margin and they did, but – as everyone could see – he didn’t have perfect numbers and checked the BBC exit polls! OMG! What if his prediction that Obama will win turns out to be right, but again his numbers aren’t exact? What if I find out that he actually looked at last-minute polls? Will I have to dump my brave, smart, world-famous Nate and find a blogger?

      • Given that you also call yourself a blogger, I’d say no. The reactionary opinion pieces you live on are pulled out of your [a]ss and fluctuate with how “edgy” you feel you need to be in order to get noticed.

        Nate Silver made / makes his living composing algorithms, selling them and writing about them. You? Not so much.

  37. Silver tries to hide behind “statistics” in making his 79% prediction. The problem is there isn’t any way that there is significant statistical data to make such a prediction. His claims are more akin to Las Vegas oddsmakers’ than any probabilistic model. To believe it is so, you must believe that there is enough data to make such a claim. But there simply is not. In baseball you have enough plate appearances and innings pitched to make some reasonable statistical guesses about future outcomes that are similar to previous outcomes. In electoral politics, you have 3 or 4 data points (in my presidential electoral forecasting I use 4: ’96, ’00, ’04, and ’08). I have the head-to-head vote at Romney 50.8, Obama 49.2, and that projects 285 electoral votes for Mitt Romney. But the forecast is as much of an art as anything else. Here are some things that need to be considered: favorite sons influencing the outcome (Arizona in 2008 outperformed for McCain), one-time outliers (2008 Indiana, North Carolina), trends (1996-2008 Colorado). A forecaster has absolutely no statistical certainty on how to look at such outliers and trends. Is the outlier the start of a new trend? Does the trend continue, level off, or reverse itself? (I suggest that in Colorado the Democratic vote trend has actually slightly reversed itself.) You simply do not have enough data points to make ANY statistical conclusions. It is all a guess.

  38. Silver’s and Vegas’ numbers haven’t matched over the past 2-3 months that I have been tracking. Silver has been around 10-15% higher on the Obama side winning, though recently the difference has been shrinking (around 6-7%), but there is still value to betting on Obama if you feel that Silver’s numbers are the gold standard.

    • Vegas odds are determined by how much money is bet, not by the probability of the winner. Bet $10M on one candidate and watch the odds change. The odds for the other candidate will get better to attract more people to bet on them, to balance out the money. The Vegas line tries to keep the amount of money bet on each side relatively equal. That way the house ALWAYS wins, due to the difference in the spread.
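
      (Concretely, recovering an implied probability from a money line means stripping out that built-in house margin. The lines below are hypothetical:)

          # Convert American money lines to implied probabilities, then
          # remove the bookmaker's margin (the "vig"). Lines are hypothetical.
          def implied(moneyline):
              if moneyline < 0:
                  return -moneyline / (-moneyline + 100)
              return 100 / (moneyline + 100)

          obama_line, romney_line = -275, +230
          p_o, p_r = implied(obama_line), implied(romney_line)
          overround = p_o + p_r            # > 1; the excess is the house's cut
          print(f"raw: {p_o:.3f} + {p_r:.3f} = {overround:.3f}")
          print(f"vig-free: Obama {p_o/overround:.3f}, Romney {p_r/overround:.3f}")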

  39. Yea right, Silver’s analysis is so great because it is based on pure mathematical analysis. It is easy to play with one’s model when the Obama campaign is sharing inside polling information as they did in 2008! That’s why his model crashed in 2010 when he did not have access to such data.

  40. Bloggers really messed up when they started whipping up the epistemological argument in the context of journalists vs scientists. It was like being inside Hunter Thompson’s head, trying to figure out which drug was taken, and which lobster to go to for advice.

  41. Has any pollster EVER claimed their results have the slightest predictive value at all? I’ve never heard it – they all say something like “snapshot in time” or the like.

    Can a series of snapshots showing, say, a man walking ever closer towards a door predict whether or not he goes through it? Whether he will knock or use a key?

    Silver isn’t a seer; he’s an alchemist.

    • To use Silver’s analogy, suppose the NY Giants are up by 3 with 2 minutes to go in the game, and have the ball 3rd and 7 at their own 40.

      Are you claiming that nobody in his right mind would make predictions about who wins? That anybody who put money on New York was practicing fatal over-confidence?

      The presidential race is NOT a lock, but we Americans WILL choose a victor according to our rules (as amended), and pollsters have now queried a fairly healthy subset of us. Why claim that none of that means anything?

  42. The Fine Article wrote, “if his model…is inherently prone to getting the easy calls right and blowing up completely in the face of more difficult ones.”

    As a modeler myself, I expect models to get things right when the inputs are relevant, and to perform more poorly when new patterns (e.g., a hurricane) emerge. After all, a model is by definition merely a consistent way of structuring and evaluating inputs. Absent some greater insight (i.e., knowing that God wants Romney to win), the standard we should set for models is that they are consistent and reasonably incorporate information that is relevant.

    So I find the criticism glib. Go ahead and tell us which calls are easy and which are more difficult. Tell us what’s wrong with the idea of using multiple, state-by-state models, averaged together so as to minimize the apparent (and fairly well-known) inaccuracies or biases of individual models. Tell us in which states a 52/48 result is a 70% certainty for the 52-percenter, and in which the same numbers are maybe only 55% good for the leader (a toy version of that calculation is sketched below). Because Silver does it, and while we all want to think we’re smarter than the next guy, few of us put up a corpus of work to be evaluated.

    You can do it; you’re the expert, right?
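
    For what it’s worth, here is a minimal sketch of that margin-to-probability step, assuming polling error on the two-way margin is roughly normal; the sigma values are illustrative guesses, not Silver’s actual parameters:

    ```python
    from math import erf, sqrt

    def win_prob(margin_pts, sigma_pts):
        """P(the leader wins), assuming the true margin is normally
        distributed around the polled margin with sd sigma_pts."""
        return 0.5 * (1 + erf(margin_pts / (sigma_pts * sqrt(2))))

    margin = 52 - 48  # a 52/48 split is a 4-point lead

    # Heavily polled state: small effective polling error.
    print(win_prob(margin, 3.0))  # ~0.91

    # Sparsely polled state: same 52/48 split, much more uncertainty.
    print(win_prob(margin, 8.0))  # ~0.69
    ```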

  43. What does the unwieldiness of the spreadsheet have to do with the methods used? I’m not sure whether you don’t understand the difference between the logic and the implementation, or whether you are just looking for something to complain about. I was with you when I thought you were saying that his model predicted crazy things (Wieters), but then you finally come around to saying it was just a bug (the coefficients). I don’t know how Nate Silver works, but I’d guess by now he’s hired a savvy intern to implement the software while he focuses on the model.

  44. 2000 words to argue that Nate Silver’s code is bloated?
    Irony, much?
    The point of the ‘Reintroducing PECOTA’ piece, which I read, was that the code was unwieldy, not that it didn’t produce meaningful results. In terms of value-added, the elegance of the code is a non-issue. Who cares?
    The other point – that Silver’s models are too complicated, without much additional accuracy relative to other aggregators – is a more reasonable critique. You don’t make a particularly strong case for it, but it’s more salient than how klugey the code is. Your argument isn’t compelling because your approach is the standard one: point out a few instances where Silver was wrong and other aggregators were right, and call it a day.
    One of the most valuable things 538.com has done is undermine the basic template for political journalism, which has a lot in common with your approach – cherry-pick a few pieces of data, and speculate about any number of poorly informed interpretations as a result. (Recent example: Romney’s continuing “momentum” in the polls. These stories tend to get echoed and amplified, but they’re harder to support when quantitative data shows that Romney’s gains in the polls dried up some considerable time ago.)
    If you actually want to compare Silver’s model to others, then make your own meta-aggregator and quantitatively compare how far off the various models are – build your own spreadsheet! write your own code! – in the many predictions they’ve made in the past (a toy version of such a harness is sketched after this comment). There’s a reasonable amount of data out there – the Princeton site run by Sam Wang has been making predictions for 8+ years now – and even a single presidential election offers 50 state predictions to analyze. If Silver’s model is consistently further off the mark, then your argument has some legs.
    But spare us your next 2000 words until after you’ve built the spreadsheet.
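
    Something like this, say – a toy scoring harness in which the state margins, predictions, and model names are all invented for illustration, not real forecasts:

    ```python
    from math import sqrt

    # Actual two-party margins by state (made-up numbers).
    actual = {"OH": 3.0, "FL": 0.9, "VA": 3.9, "CO": 5.4}

    # Each aggregator's predicted margins (also hypothetical).
    forecasts = {
        "aggregator_A": {"OH": 2.0, "FL": -0.5, "VA": 3.0, "CO": 4.0},
        "aggregator_B": {"OH": 3.5, "FL": 1.5, "VA": 4.5, "CO": 6.0},
    }

    def rmse(pred, truth):
        """Root-mean-square error of predicted vs. actual margins."""
        errs = [(pred[s] - truth[s]) ** 2 for s in truth]
        return sqrt(sum(errs) / len(errs))

    for name, pred in sorted(forecasts.items()):
        print(f"{name}: RMSE = {rmse(pred, actual):.2f} points")
    ```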

  45. It really is bizarre to attack Silver by comparing him to Tango or to election predictors like Wang. As you say, in baseball or in politics, they reached similar conclusions. The complexity of either one’s methods seems mostly irrelevant.

    The really important comparison is to traditional reporters and media personalities (like those of Maclean’s). Both of those groups are obsessed with creating narratives and horseraces where none exist, and they rely exclusively on their gut, never on data. Just look at all the hacks in the States still claiming the election is a “toss-up.” Compared to morons like these, Silver is a breath of fresh air and adds a lot to public discourse.

    Complaining about probabilistic predictions is stupid, as well. That’s how statistics work.

  46. Colby’s going to look a little foolish if Nate Silver’s algorithm is correct…

  47. Well, as of tonight, Nate is all in at a 92% chance of an Obama victory. 9:1 http://jaycurrie.wordpress.com/2012/11/05/datapoint-3/

    I would be delighted to take Romney at those odds.

    I don’t for a minute think Romney is a long shot, but if someone’s model spits up odds like that… well.

    Silver is all in here: Obama wins, he is a hero; Obama loses, he is a 92% dick.

  48. Regarding the British 2010 general election: when the result of the exit poll was announced, none of the 3 main channels – BBC, ITV or Sky – took it seriously; no one thought that the Lib Dems were going to lose seats like that. Given its now-known accuracy, it is hilarious looking back to see how fast all the pundits on the coverage tried to distance themselves from it.

  49. To repeat what others have said, the question is not whether Silver’s model perfectly forecasts the election results or how elegant his mathematics are. The question is whether his model (a) forecasts election results better than traditional political journalists talking about the number of lawn signs they see, cherry-picking single polls (“Gravis is the gold standard of Michigan polls, Bob.”), or repeating inane campaign talking points about “momentum,” and (b) forecasts election results better than other polling aggregators in the market (e.g., Wang, RCP, Votamatic, etc.).

  50. I would also wonder about cherry-picking with Matt Wieters. How does PECOTA work for the majority of players? No system is going to be 100% perfect. The question is how much better PECOTA is than other systems. Remember, in the end these are all just probabilities.

  51. Interesting criticism, but the perspective you fail to bring is that Silver is the first widely read commentator to combine polls in a way that attempts to be comprehensive and objective. No doubt his model could be improved on, like PECOTA. But before the omnipresence of Silver, you had papers reporting the Gallup poll one day and a PPP poll of Ohio the next, with each day bringing wide variance. This, of course, sells papers – the idea that the election is swinging wildly each day. In fact, it most likely is not, and that’s the point Silver makes almost every day. This point alone is novel and valuable in US political analysis.

  52. I think it was David Baddiel who claimed 88.9% of statistics were made up on the spot. Defend Silver how you will (and I recognise Cosh is critical) but his claim that an Obama victory is 86.2% likely seems to me to prove Baddiel’s point.

    • And now?

  53. Oh yes, Silver bet Joe Scarborough $1,000 that Obama would win? Sorry, not impressed. For guys like Silver and Scarborough, $1,000 is chump change. When Silver starts offering $100,000, then I’ll be impressed.

    • You do realize it was for charity, right?

      Also, with Silver’s reputation on the line, he had millions in book deals, speaking fees, etc. at stake.

  54. Hey Cosh, Silver just nailed it. How about leaving town and never posting anything ever again. Or, better yet, admit that you are a broken down villager hack with no real skills or redeeming features.

  55. Like all know-it-alls, you got it totally wrong. It looks like Silver picked 51 of 51. Now tell us why you are better at what Silver does than Silver. Most know-it-alls are like you. They don’t know shit.

  56. Whoops… Seems like Nate’s system worked…

  57. Would love to see an update in the light of Silver’s prediction and last night’s US election results.

    • Utter nonsense. You attempted to slander the man’s name for purely partisan reasons. Did you dislike him this much when he predicted the GOP House takeover in 2010? At least have the decency to admit that he’s more than just plain lucky.

  58. C’mon Colby, aren’t you going to follow up on this?

    Was Silver just lucky this time as well…?

  59. I expect a retraction of this slanderous tosh.

  60. As a programmer reading the quotes, I think you have totally misinterpreted those PECOTA blogs – what they seem to be saying is that Silver is a terrible **computer programmer**. But they maintain respect for him as a statistician – thus they want to be “wringing the intellectual property” out of the spreadsheet. The smartest researchers can make absolutely terrible software engineers, and vice versa.

    It seems small-minded to focus on tearing down Silver instead of trying to spread what went right. He might be no Einstein, but even Einstein was allowed his blunders (quantum mechanics comes to mind).

  61. I enjoyed this article. I wish I had come across it before the election. Those demanding a retraction are missing the point. The point, of course, is the question “is it more accurate than the simplest of methods?” Any monkey could simply average all of the poll results and would have nailed the election with a margin of error of Florida. So the short answer is probably no, it’s not better than Marcel. But that kind of misses the point too.

    The value in Silver’s model is not its prediction right before the election; it is its prediction 30 or 60+ days before the election. But because it’s a future prediction, it’s almost impossible to verify its accuracy. At one point, on October 13th, the model gave Romney a 40% chance of winning. How can you possibly verify the accuracy of that? You would have to go through all of his predictions that showed a 60%–40% split with 3 weeks left to go and see if the underdog won roughly 4 out of 10 times (a sketch of that kind of calibration check follows below). I’m guessing there aren’t enough data points to do that. So we may never know.

    Also be aware that Nate Silver is not always right. In 2010 he predicted Sharron Angle would win our Senate seat by 4%, and Harry Reid won it by 5%. He was off by 9 points! But just about every poll was biased toward the Republican by about 9 points, so Marcel would have flubbed it too.

    One more thing: 100+ MB Excel sheets are pretty ridiculous.
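
    For what it’s worth, the calibration check described above might look something like this in miniature – the prediction records here are invented for illustration, not Silver’s actual forecasts:

    ```python
    from collections import defaultdict

    # (stated probability for the favourite, did the favourite win?)
    history = [
        (0.6, True), (0.6, False), (0.6, True), (0.6, False), (0.6, True),
        (0.7, True), (0.7, True), (0.7, False),
        (0.9, True), (0.9, True),
    ]

    # Bucket forecasts by stated probability, then compare each bucket's
    # stated probability to the observed win frequency.
    buckets = defaultdict(list)
    for p, won in history:
        buckets[p].append(won)

    for p in sorted(buckets):
        wins = buckets[p]
        rate = sum(wins) / len(wins)
        print(f"forecast {p:.0%}: favourite won {sum(wins)}/{len(wins)} ({rate:.0%})")
    ```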

  62. What a jealous pig you are. Sad to be a footnote, I know, trying to claw your way to relevancy by criticizing others. Too bad Silver hit it out of the park again. Enjoy your crow.
