The signal and the noise : why most predictions fail but some don't (edition 2012)

by Nate Silver

Members: 1,359 | Reviews: 47 | Popularity: 5,652 | Average rating: 3.93 | Mentions: 22
Title:The signal and the noise : why most predictions fail but some don't
Authors:Nate Silver
Info:New York : Penguin Press, 2012.
Collections:Read but unowned
Tags:nonfiction, statistics, library

Work details

The signal and the noise: Why so many predictions fail—but some don't by Nate Silver

Member recommendation: Thinking, Fast and Slow by Daniel Kahneman (BenTreat)
    BenTreat: Integrates some of the analytical techniques Silver describes with common irrational patterns of decision-making; Kahneman's book explains how to use some of Silver's techniques (and other tools) to avoid making decisions which are not in one's own best interest.


Showing 1-5 of 46 reviews
The first half, dissecting why so many forecasts and predictions are bad, was great, but the second half, which attempts to explain how to make predictions better, provides very little insight. It falls apart when he gets into overlong explanations of Bayesian theory and starts making up probabilities out of thin air to prove his point in examples (the odds of a terrorist flying a plane into a building in Manhattan on Sept. 11, 2001 were 1 in 20,000?). It was quite disappointing to see this turn after the rest of the book was so heavily footnoted and documented, and it was also completely unnecessary and somewhat gratuitous. Why? I quickly gave up on it at that point.
  jaydro | Jan 24, 2016 |
Definitely worthwhile reading.
  joeydag | Jul 23, 2015 |
Tim Geithner cited this book in his memoir, so I know Silver's writing has been influential on at least one policy maker. I read this book partially out of curiosity over how smart Silver really is when it comes to economics and statistics. This book came after Silver's FiveThirtyEight.com blog successfully predicted Presidential and Senate races and publishers wanted to capitalize on nerdy books like Michael Lewis' Moneyball. While Silver's election forecasting has been lauded, I never found it much more than novel-- he explains in the book that he simply took an average of others' forecasts, weighted by their past accuracy. The digging through data was more impressive to me than the results. I strongly recommend this book to anyone interested in forecasting, especially as applied to economics and policy making in areas such as climate change and financial regulation. It's also a good read for those starting a business or for CEOs looking to push back on their internal forecasters. Prerequisites before reading it are Michael Lewis' Moneyball and The Big Short; Bob Shiller's Irrational Exuberance; and Daniel Kahneman's Thinking, Fast and Slow. I would also recommend Benoit Mandelbrot's The (Mis)Behavior of Markets and the climate change chapters in Dubner & Levitt's SuperFreakonomics.

Silver intends the book to be an investigation of various data-driven predictions. He is also proselytizing in the name of Bayesian analysis, with the goal of leading the reader to think more probabilistically. Silver writes that we can all improve our predictions by adjusting them when new information arises. This may seem like common sense, but I forecast for a budget office that has to project quarterly tax revenue two years in advance and doesn't have the luxury of regularly updating the published forecast when new information comes in (a real problem when the average retail price of gasoline comes in over $1/gallon below what anyone was forecasting even a year ago). It takes both courage and humility to be Bayesian when our media culture often hammers people for "flip-flopping" on issues. Bayesian thinking uses prior estimates as a starting point and changes them as you encounter new information.
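The updating rule described above is just Bayes' theorem. A minimal sketch in Python, with made-up numbers (the downturn prior and indicator rates are hypothetical illustrations, not figures from the book):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E): revise a prior belief in hypothesis H after evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Hypothetical: a 20% prior that a downturn is coming; an indicator that
# fires before 70% of downturns but also 10% of the time otherwise.
posterior = bayes_update(0.20, 0.70, 0.10)
print(round(posterior, 3))  # 0.636
```

Each new data point feeds the posterior back in as the next prior, which is exactly the "adjust when new information arises" discipline Silver advocates.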

Perhaps what I like most about this book are the interviews Silver conducts with people ranging from NASA scientists to economists to Donald Rumsfeld. He converses with Justin Wolfers over his critiques of Silver's predictions at FiveThirtyEight.com. He talks with forecasters about their forecasts, theories, problems, etc. even though he already knows a lot about the field. I work as an economic forecaster for state government, and I see the best practices, the most common mistakes, and the heuristic biases that Silver describes in detail.

Silver begins with a seemingly odd-fitting hypothesis: as Gutenberg's printing press made books and knowledge more widespread, conflict increased as people felt they had more control over their own destinies. As we have more information/data, we know less of what to do with it. We pick and choose which data we prefer and become more tribal, more hostile to other tribes who focus on a different set of data.

The terms forecasting and prediction are currently used interchangeably but had subtly different meanings, with theological implications, in the Middle Ages. Even today, seismologists say earthquakes cannot be predicted, as "predict" means a set time and date. But they can be forecasted: a forecast is a probability of an event, usually over a range of time. Forecasts are made in uncertainty. The U.S. has a "prediction addiction" and a prediction problem. Predictions for the most frequently forecast economic series--GDP, inflation, and unemployment--have been wildly and consistently inaccurate. Silver recounts the housing bubble and 2007 financial crisis, in which ratings agencies that should have known better rated CDOs AAA. Some economists acknowledged the housing bubble but did not accurately predict the consequences of its bursting. In this lengthy section, Silver cites Shiller, Rogoff & Reinhart, Larry Summers, Dean Baker, Paul Krugman, and others. Silver chalks up the ratings agencies' errors to the common forecasting error of not having a large enough sample size, making later observations appear much more improbable than they should be. Many of Wall Street's forecasters' models only went back to the 1980s, and missed the simple fact that real housing prices did not appreciate very much over the long haul, not to mention several recessions in American history.

Silver performed an amusing survey of The McLaughlin Group's weekly forecasts and found them to be no better than flipping a coin. He looks at how experts in various fields tend to be inaccurate in their forecasts. There are "foxes," who know something about a lot of things, and "hedgehogs," who know one big thing. Hedgehogs make good TV guests but, studies have shown, are not as good at predicting as foxes. (The most recent example of this I've seen was a finding that various Ivy League experts' predictions of Russian aggression toward Crimea were less accurate than those of less-credentialed experts or experts in other fields.) Silver remarks that good forecasts are not purely data-driven; more and better data help, but sometimes not all that much. In politics, an incumbent running in a district solid for his party might suddenly be trounced at the news of infidelity or corruption, something a purely statistical model wouldn't predict.

Silver cut his forecasting teeth on Major League Baseball, designing a system (PECOTA) to forecast draft picks and minor leaguers' potential output. PECOTA did okay against scouts but not fabulous, and Silver sold the system to Baseball Prospectus while he went on to publish books and start FiveThirtyEight.com. It's easy to conclude that a little bit of computer know-how can give you a huge advantage, but Silver states that's not what he's intending to say. Better models may help you at the margin, but like any business, forecasting is competitive and people will adjust and take away your advantages.

What is needed is a good harmony between man and machine (Tyler Cowen picked up this theme in his recent book Average is Over). Algorithms cannot replace humans at forecasting completely, at least not anytime soon. Silver gives the example of weather forecasting, stating that humans add about 25% accuracy to computer models simply by using their eyes to identify outliers on the weather map, faster than computers or t-tests can. He evaluates the forecasts of NOAA and The Weather Channel, noting that since the U.S. government provides weather data for free, for-profit forecasters compete in terms of accuracy. But the perception of accuracy is what matters most--the incentive is ratings, not accuracy, after all.

The government also publishes free economic data, but it is messy, noisy, and constantly subject to revision. Economic forecasters don't publish confidence intervals for their forecasts because they are "embarrassed." As an economic forecaster, I've always wondered why we don't publish such intervals, and Silver explains the history here. Silver also explains the problem of "overfitting" in forecasts--putting in too many independent variables to fit the curve, or too often being "fooled by randomness" (an oddly sly allusion to Nassim Taleb, whom Silver leaves out of the book... there must be some history between them).
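Overfitting is easy to reproduce. A small sketch with synthetic data (my own illustration, not one of Silver's examples): a 9th-degree polynomial fitted to a noisy linear process always beats a straight line in-sample, precisely because it bends to chase the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 12)
y = 2.0 * x + rng.normal(0.0, 0.2, size=x.size)  # true signal is linear, plus noise

# Fit a straight line and a 9th-degree polynomial to the same points.
linear = np.polynomial.Polynomial.fit(x, y, deg=1)
wiggly = np.polynomial.Polynomial.fit(x, y, deg=9)

def rss(model, xs, ys):
    """Residual sum of squares of a fitted model on (xs, ys)."""
    return float(np.sum((model(xs) - ys) ** 2))

# In-sample, the high-degree fit always wins (nested least squares).
print(rss(wiggly, x, y) < rss(linear, x, y))  # True

# On fresh draws from the same process, it typically generalizes worse.
x_new = np.linspace(0.02, 0.98, 50)
y_new = 2.0 * x_new + rng.normal(0.0, 0.2, size=x_new.size)
print(rss(linear, x_new, y_new), rss(wiggly, x_new, y_new))
```

The in-sample "win" is exactly the trap: the extra independent variables fit the randomness, not the signal.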

Silver writes of a successful gambler on NBA games who has a statistical model but also watches most of the games and makes personal judgments about how the players are communicating with one another, the effort they're visibly putting out, etc. This theme leads to a long exposition of poker and how gamblers have to quickly calculate the odds of opponents' hands given what they hold and what their opponents have done. A computer would be good at this, but not at the aspects of bluffing, as anyone who has watched Star Trek: TNG knows.
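The odds calculation a poker player does at the table is straightforward combinatorics. A sketch of one standard spot (a textbook flush draw, my example rather than one from the book):

```python
from math import comb

# After the flop you hold four cards to a flush: 9 of the 47 unseen
# cards complete it, and two more cards are still to come.
outs, unseen, to_come = 9, 47, 2

# P(hit) = 1 - P(every remaining card misses)
p_hit = 1 - comb(unseen - outs, to_come) / comb(unseen, to_come)
print(round(p_hit, 3))  # 0.35
```

That roughly 35% chance is the kind of number a player must weigh against pot odds in seconds; the bluffing judgment layered on top is what the computer lacks.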

There is a detailed look at Kasparov vs. IBM's Deep Blue. Chances are that your chess game will result in a position that has never before been played or recorded. Machines programmed with millions of pre-played game data run out of history after a few moves, and have to formulate a strategic analysis of the game. Silver learns in an interview that an undiscovered bug in Deep Blue's program threw off Kasparov's estimation of the computer's ability and strategy when he was analyzing the match results afterward. This resulted in Deep Blue ultimately shaking Kasparov. Kasparov thinks more like a poker player at times, trying to determine if Deep Blue has a "tell" or is bluffing.

Silver writes that the average forecaster is still probably good relative to the average guy on the street, and he makes this point looking at poker-player data. He played online poker as a slightly above-average player, making money. When later looking at the data, he realized that the bottom 10% of players were so bad that they were subsidizing the average players. When the bottom 10% dwindled, previously average players like Silver became the bottom 10% and lost. Apparently, 52% of online players have a bachelor's degree, and they are smarter than the average citizen who just buys a lottery ticket. This leads to overconfidence and a sense of entitlement, which Silver admitted to while playing. Poker takes more skill than roulette, but is still heavily dependent on luck. This segues into a comparison with stock traders, who also suffer from hubris and a belief that they are "above average."

Silver gives a good summary of Eugene Fama's efficient markets hypothesis and Richard Thaler's critique--the "no free lunch" aspect versus the "price is always right" aspect. Silver channels Kahneman to describe how heuristics and biases affect buyers' and sellers' forecasts. We should all be aware of our biases and work against them (Silver recommends Robin Hanson's blog for help); purporting to have none shows we have many. Never trust a forecaster or scientist who states he has no biases.

From here, Silver looks at the enormously controversial yet important forecasting of climate change. While there is wide agreement among climatologists about the underlying theory, the warming trend, and its causes, there is wide disagreement about the models used to forecast. This is important because the forecasts are often 30-100 years out and the margin for error is quite high. There is contentious disagreement about the use of computer models: scientists are dismissive of forecasters and models, while climate-skeptic forecasters are dismissive of the science. Silver cautions that one should never trust a forecaster who is dismissive or ignorant of the underlying science behind the data he is forecasting, and never trust a scientist who is dismissive or ignorant of statistics and forecasting.

One problem with evaluating climate models over time, and with betting on them, is that you can easily cherry-pick your start/end dates to get a different result (are temperatures trending higher or lower?). Silver examines some of the forecasts and finds the IPCC's model (problematic for reasons he describes) to be fairly accurate since the 1990s. Nonetheless, Bayesian analysis would suggest that people are correct to increase their skepticism about the warming trend in recent years, since the earth's temperatures did not warm from 2004 to 2011. Each new data point should cause an adjustment of your forecast. Silver laments that we could have been having a debate about the uncertainties of the forecasts all these years, rather than a debate about whether the problem really exists.
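The cherry-picking problem is mechanical: the sign of a fitted trend can flip depending on the window chosen. A toy sketch with invented anomaly numbers (illustrative only, not real climate data):

```python
# Invented annual temperature anomalies: a noisy upward long-run trend.
anomalies = [0.10, 0.22, 0.18, 0.35, 0.30, 0.48, 0.41, 0.39, 0.37, 0.36]

def ols_slope(ys):
    """Ordinary least-squares slope of ys regressed on the index 0..n-1."""
    n = len(ys)
    x_mean = (n - 1) / 2
    y_mean = sum(ys) / n
    num = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(ys))
    den = sum((i - x_mean) ** 2 for i in range(n))
    return num / den

print(ols_slope(anomalies) > 0)       # True: the full record trends upward
print(ols_slope(anomalies[-5:]) > 0)  # False: the last five years "cool"
```

Same data, opposite headlines, which is why Silver's point about debating the uncertainty of the forecast, rather than the existence of the trend, matters.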

Silver's faith in Bayesian analysis, and his failure to think through its logical conclusions, is perhaps a weakness of the book. Bayesians like Silver say that our technological progress suggests further advancement is inevitable, and that we're converging on a point where we will seemingly be correct about everything; that we're evolving and will eventually achieve a progressive utopia. I'm reminded of Chris Hedges's arguments against such thinking: quantum mechanics demonstrates some things will always be unknowable, and world history shows no progress toward a utopia. There will always be randomness; there will always be noise mistaken for signal. Silver admits that the political polarization in America suggests our technological advancement is not inevitable. (He also touches on chaos theory throughout the book.)

An example of climatology frustration is that some simple ideas--like putting sulfur into the atmosphere--would seem to be something we can at least experiment with. Volcanoes give evidence that putting a small amount of sulfur into the atmosphere would likely reduce the greenhouse effect, but environmentalists clash with climatologists on the issue. Again, while Bayesians may believe our technological progress makes further advancement inevitable, the political disputes and cognitive biases suggest otherwise.

Silver closes the book with a look at hindsight bias (although I don't think he uses that term). In hindsight, people wonder how the Pearl Harbor attack could have been a surprise. The silence in radio transmissions from Japan's carrier fleet should have been the signal in the noise. One definition of "noise" is not randomness but multiple--too many--signals, which is the problem with SIGINT. The FBI and NSA are constantly following up on leads they find to be false signals.

The conclusion of the book: Think probabilistically. Move from simplifications and approximations to more precise forecasts and statements when more data is collected.
Go from "investors cannot beat the market" to "most investors cannot beat the stock market relative to their risk and transaction costs. It is hard to tell if any can due to noise in the data." Work to reduce your biases: to say that you have none shows that you have many. "Try and err:" Make a lot of forecasts and evaluate them. "Distinguishing the signal from the noise requires both scientific knowledge and self-knowledge: the serenity to accept the things we cannot predict, the courage to predict the things we can, and the wisdom to know the difference."

I give the book 4.5 stars out of 5.
  justindtapp | Jun 3, 2015 |
Want to know what I do and/or how I think (generally)? Read this.
  trilliams | May 30, 2015 |
Have you ever wondered how to make good predictions about difficult-to-predict phenomena? Can you handle occasionally dry descriptions of statistical models and predictive strategies? Read this book.

That said, I agreed with this book more often than I enjoyed it. The topic of predictions and the information about them is great, but I found many of the specific examples uninteresting. Baseball, for instance, is meaningless and dull to me. Climate change, weather patterns, and earthquake predictions are not topics that I'd seek out detailed discussions about, but in this context they were alright. The sections dealing with predictions about politics, terrorism, and disease pandemics were much more interesting to me.

  wishanem | Jan 27, 2015 |
The first thing to note about The Signal and the Noise is that it is modest – not lacking in confidence or pointlessly self-effacing, but calm and honest about the limits to what the author or anyone else can know about what is going to happen next. Across a wide range of subjects about which people make professional predictions – the housing market, the stock market, elections, baseball, the weather, earthquakes, terrorist attacks – Silver argues for a sharper recognition of "the difference between what we know and what we think we know" and recommends a strategy for closing the gap.
added by eereed | Guardian, Ruth Scurr (Nov 9, 2012)
What Silver is doing here is playing the role of public statistician — bringing simple but powerful empirical methods to bear on a controversial policy question, and making the results accessible to anyone with a high-school level of numeracy. The exercise is not so different in spirit from the way public intellectuals like John Kenneth Galbraith once shaped discussions of economic policy and public figures like Walter Cronkite helped sway opinion on the Vietnam War. Except that their authority was based to varying degrees on their establishment credentials, whereas Silver’s derives from his data savvy in the age of the stats nerd.
added by eereed | New York Times, Noam Scheiber (Nov 2, 2012)
A friend who was a pioneer in the computer games business used to marvel at how her company handled its projections of costs and revenue. "We performed exhaustive calculations, analyses and revisions," she would tell me. "And we somehow always ended with numbers that justified our hiring the people and producing the games we had wanted to all along." Those forecasts rarely proved accurate, but as long as the games were reasonably profitable, she said, you'd keep your job and get to create more unfounded projections for the next endeavor...
added by marq | New York Times, Leonard Mlodinow (Oct 23, 2012)
In the course of this entertaining popularization of a subject that scares many people off, the signal of Silver’s own thesis tends to get a bit lost in the noise of storytelling. The asides and digressions are sometimes delightful, as in a chapter about the author’s brief adventures as a professional poker player, and sometimes annoying, as in some half-baked musings on the politics of climate change. But they distract from Silver’s core point: For all that modern technology has enhanced our computational abilities, there are still an awful lot of ways for predictions to go wrong thanks to bad incentives and bad methods.
added by eereed | Slate, Matthew Yglesias (Oct 5, 2012)
Mr. Silver reminds us that we live in an era of "Big Data," with "2.5 quintillion bytes" generated each day. But he strongly disagrees with the view that the sheer volume of data will make predicting easier. "Numbers don't speak for themselves," he notes. In fact, we imbue numbers with meaning, depending on our approach. We often find patterns that are simply random noise, and many of our predictions fail: "Unless we become aware of the biases we introduce, the returns to additional information may be minimal—or diminishing." The trick is to extract the correct signal from the noisy data. "The signal is the truth," Mr. Silver writes. "The noise is the distraction."

Author name | Role | Type of author | Work? | Status
Nate Silver | | primary author | all editions | confirmed
Dewey, Amanda | Designer | secondary author | some editions | confirmed
Dedication: To Mom and Dad
First words

This is a book about information, technology, and scientific progress.

It was October 23, 2008.


Amazon.com Review (ISBN 159420411X, Hardcover)

Amazon Best Books of the Month, September 2012: People love statistics. Statistics, however, do not always love them back. The Signal and the Noise, Nate Silver's brilliant and elegant tour of the modern science-slash-art of forecasting, shows what happens when Big Data meets human nature. Baseball, weather forecasting, earthquake prediction, economics, and polling: In all of these areas, Silver finds predictions gone bad thanks to biases, vested interests, and overconfidence. But he also shows where sophisticated forecasters have gotten it right (and occasionally been ignored to boot). In today's metrics-saturated world, Silver's book is a timely and readable reminder that statistics are only as good as the people who wield them. --Darryl Campbell

(retrieved from Amazon Thu, 12 Mar 2015 18:14:18 -0400)

Silver built an innovative system for predicting baseball performance, predicted the 2008 election within a hair's breadth, and became a national sensation as a blogger. Drawing on his own groundbreaking work, Silver examines the world of prediction.

(summary from another edition)


Average rating: 3.93
1 star: 1
1.5 stars: 1
2 stars: 13
2.5 stars: 1
3 stars: 56
3.5 stars: 24
4 stars: 129
4.5 stars: 23
5 stars: 67

Penguin Australia

2 editions of this book were published by Penguin Australia.

Editions: 0141975652, 1846147735
