by Sherman Dorn © 2012
The takeaway from the Nate Silver-punditry smackdown this fall is not that any quantification and set of algorithms beat the pants off your nearest broadcast yodeler, but that quantification done well will, and that your nearest broadcast yodeler is the equivalent of the Washington Generals in battles with professionally competent quantification.
For the best argument about this, go no further than Nate Silver’s new book, The Signal and the Noise, which is largely about modesty in quantification and the difficulty of constructing accurate prediction systems. If you want a panegyric to algorithms, you instead need Christopher Steiner’s Automate This. Steiner is entertaining and should be the publicist for Sal Khan, the School of One, and so on, but Silver is more realistic.
In particular, Silver addresses uncertainty in an explicit and transparent manner, both in his analysis of polling and in his discussion of predictions more broadly. The largest gap between Silver’s approach and the public discussion of quantification in education is the almost complete failure of both reporters and policymakers to address uncertainties in an open manner.[1] If any advocate, policymaker, or pundit uses Nate Silver (as an object lesson) to argue in favor of current practices in test-based accountability or the use of point estimates in any proposed policy, they are demonstrating that they haven’t read his book. To wit, the title of this entry.
Wonkish regret/adjustment: Silver also makes a wonderful argument in favor of probabilistic reasoning, specifically a Bayesian approach to statistics, though I suspect he will be less successful in that argument. Frequentists won the professional debate 100 years ago, and the standard introduction to statistics is rooted firmly in frequentism (quick: do you use the term “confidence interval” or “credibility interval”?).[2] But in addition to the dominance of frequentist approaches in professional training, it is also very difficult to think about probability in the abstract, and more specifically to reason through quantification in the frame of conditional probability (the main engine of Bayes’ theorem). If you want to test your ability to reason abstractly about conditional probability, see how much you resist the basic solution to the Monty Hall problem. Trust me: humans are pretty awful at this, even with quite a bit of education.
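If the abstract argument leaves you unconvinced, simulation settles it quickly. Here is a minimal sketch in Python (my own illustration, not anything from Silver’s book) that plays the game many times under both strategies; switching wins about two-thirds of the time:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that hides a goat and is not the player's pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Exactly one door is neither the original pick nor the opened one.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}   switch: {swap:.3f}")  # roughly 0.333 vs. 0.667
```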
Fortunately, you don’t always have to think abstractly about conditional probability to use it. At least in relatively simple cases, there are two ways to get around our brains’ general incompetence at probabilistic reasoning: using real numbers in hypotheticals (for basic questions of conditional probability) or using a moderately sized set of simulations to understand the dynamics of a simple system (what I used in September to look at the Chingos/Peterson research on whether the privately-funded voucher program they studied had consequences for college attendance). But we tend to have this blind spot, and we need some way to get around it.
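The Monty Hall sketch above is the simulation route; the “real numbers in hypotheticals” route is even simpler. Here is a minimal sketch of the classic screening-test question, with rates that are entirely hypothetical (mine, not drawn from any study mentioned here), where counting concrete people replaces abstract formulas:

```python
# A condition affects 1% of a population; a screening test flags 90% of
# true cases and falsely flags 9% of non-cases. Of those flagged, how
# many actually have the condition? Count people instead of manipulating
# P(A|B) symbols.
population = 10_000
true_cases = int(population * 0.01)        # 100 people with the condition
non_cases = population - true_cases        # 9,900 people without it

true_positives = int(true_cases * 0.90)    # 90 flagged correctly
false_positives = int(non_cases * 0.09)    # 891 flagged in error

share = true_positives / (true_positives + false_positives)
print(f"P(condition | flagged) = {share:.1%}")  # about 9.2%, not 90%
```

The answer surprises most people precisely because the base rate (1%) does the heavy lifting, and weighing base rates is exactly the conditional-probability reasoning our brains resist.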
Where Silver is inconsistent: Silver is less transparent in his own practice about building models and using human judgment (his exact models are proprietary), but both appear in the book. I’d pay more attention to his book than his practice here, at least in terms of using quantification in practice. Silver correctly sees landmines everywhere for those wanting to predict anything from the performance of ballplayers to earthquakes, and he argues that the remarkable success in weather forecasting has depended both on increasingly detailed information about the atmosphere and on the human judgment that forecasters use in making predictions about tomorrow’s weather and the next three days of a tropical storm’s track. For that to make sense for public policy, the models should be public.
Notes
1. Researchers also tend to believe that their research findings are more accurate and trustworthy than they are, something Silver discusses in his book. Yes, research psychologists have studied the extent to which research psychologists are numerate.
2. Silver’s academic/training background is an undergraduate degree in economics from the University of Chicago, and then four years of work at KPMG.
Source: Dorn, S. (2012, November 7). “I read Nate Silver. I’m a fan of Nate Silver. State senator, you’re no Nate Silver.” Sherman Dorn.
Republished with kind permission of Sherman Dorn © 2012