
The Signal and the Noise: Why So Many Predictions Fail — but Some… (2012)

by Nate Silver

Other authors: See the other authors section.

Members: 2,920 · Reviews: 80 · Popularity: 4,150 · Average rating: 3.85 · Mentions: 33
Silver built an innovative system for predicting baseball performance, predicted the 2008 election within a hair's breadth, and became a national sensation as a blogger. Drawing on his own groundbreaking work, Silver examines the world of prediction.
    Thinking, Fast and Slow by Daniel Kahneman (BenTreat)
    BenTreat: Integrates some of the analytical techniques Silver describes with common irrational patterns of decision-making; Kahneman's book explains how to use some of Silver's techniques (and other tools) to avoid making decisions which are not in one's own best interest.…

» See also 33 mentions

English (78)  German (1)  Danish (1)  All languages (80)
Showing 1-5 of 78
The Signal and the Noise is all about prediction. It starts with the subprime mortgage financial crisis and discusses the combination of perverse incentives and overconfidence that caused the rating agencies to fail to accurately portray the risks of those securities (primarily the assumption that, even with housing prices astronomically high, the risk of default of each individual mortgage was completely independent rather than affected by the economy). Next he looks at television pundits and the fact that more television appearances are negatively correlated with forecast accuracy. Here he gives a solid introduction to Philip Tetlock’s work on forecasting, which can be found in more depth in his book Superforecasting. He touches on baseball, an information-rich environment, before moving on to irreducibly complex problems like the weather, seismic activity, and the economy, where you fundamentally can’t get anywhere near enough raw data or information on interactions between data points to paint a complete picture.
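The independence assumption the reviewer describes is easy to make concrete. Here is a toy calculation (my numbers, not Silver's or the reviewer's) showing how badly a model can misprice risk when correlated defaults are treated as independent:

```python
# Hypothetical pool of 5 mortgages, each with a 5% chance of default.
p = 0.05

# If defaults are assumed independent, the chance that ALL five
# default together looks vanishingly small:
p_all_independent = p ** 5  # about 3e-07

# If a housing crash makes defaults move together (perfect correlation),
# one default implies them all, so the joint probability is just p:
p_all_correlated = p  # 0.05

# The "safe" tranche is 160,000 times riskier than the model claims:
ratio = p_all_correlated / p_all_independent
print(round(ratio))  # 160000
```

The actual figures in the book differ; the point is only that the joint-risk error grows multiplicatively with the size of the pool.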

The second half moves toward giving you an idea of how to approach problems probabilistically and how to improve and refine your process over time. He starts with simpler problems like sports and poker before moving on to more complex ones like terrorism and global warming.


I wouldn’t consider this book a complete guide to rational, evidence-based decision-making (ignoring that it doesn’t give you the math), but it’s a pretty accessible introduction to the topic and is largely technically sound. It’s a solid place to start.
  jdm9970 | Jan 26, 2023 |
Liked it a lot, more than I was expecting to. I knew Silver was an expert at election and sports forecasting, but he clearly has a wide breadth of knowledge about stats and prediction in general. Good storyteller, although the poker chapter went a little longer than I needed. One of those books where it's fun to look at the footnotes afterwards.
  steve02476 | Jan 3, 2023 |
So much more to offer than just strategies for big data and prediction models; it also shows how to interpret and react to those predictions as the ignorant receiver. Would highly recommend.
  martialalex92 | Dec 10, 2022 |
Good overall. Like so many books of this sort, it could probably have been shorter, but Silver is a good writer and interesting. There are a lot of highlightable takeaways.
  oranje | Oct 13, 2022 |
Although this book is structured as a series of case studies, at its heart it is a philosophy of how we know the world and how we can know it better. All of our knowledge of the world comes through forecasting, sometimes formal but more often not. We make predictions, see how they turn out, and then make more predictions. Ideally, we get better at this process over time. However, as humans, we are systematically bad at this. The chapters go into many reasons why. The key takeaway is that if we want to be better at making predictions, we need to make predictions in a structured way and analyze if and why they succeed or fail.

Our models have optimistic assumptions. In dynamic systems, these assumptions are often such that if (when) they are violated, the model is dramatically off, not just a little off. This error of modelling is illustrated using the subprime mortgage crisis that led to the 2008 recession.

Our predictions are worse when we see the world through the lens of one big, overarching idea. We generally are better at making predictions and have a more accurate picture of the world when we adjust our models and predictions frequently in response to new data. We do better when we think probabilistically instead of in binaries. This is illustrated by the inaccuracy of most political predictions. Political pundits tend to see the world through ruling narratives, which blind them to the nuance of reality.

Formal models can be powerfully predictive, but the best predictions will be those that combine the output of models with human judgment -- when that human judgment can be separated from the biases that we have when interpreting data. This is illustrated by baseball, where modelling led to huge successes in finding players... until everyone was using models, in which case, thoughtful combination of models and judgment started winning out.

A key element of getting better at forecasting is determining how to evaluate predictions. Since predictions are inherently probabilistic, we cannot just judge them based on their accuracy. A prediction that is right for the wrong reasons is worse than one that is wrong for the right reasons. When evaluating forecasts, we should consider both their accuracy (how often are they right?) and their honesty (was the forecast the best that the forecaster was capable of at the time?). In the long run, evaluating honesty is likely to yield more improvement than evaluating accuracy. Silver illustrates this in the domain of weather forecasting.
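One standard way to grade probabilistic forecasts is the Brier score, the mean squared gap between the forecast probability and the 0/1 outcome (my example; the reviewer does not name a specific scoring rule). Lower is better, and it rewards honest probabilities over confident ones:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented numbers: a forecaster who honestly says "70% chance of rain"
# on days when it rains about that often beats one who always says 100%
# and is sometimes wrong.
honest = brier_score([0.7, 0.7, 0.7], [1, 1, 0])          # ~0.223
overconfident = brier_score([1.0, 1.0, 1.0], [1, 1, 0])   # ~0.333
print(honest < overconfident)  # True
```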

Our ability to model well depends on our ability to determine what data matters. When the data is noisy, we may overfit it, leading to models which look good on paper but which do not have predictive power. They do not separate the signal from the noise. This is illustrated in the domain of earthquake predictions.
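The overfitting failure mode described here can be sketched in a few lines (an illustration assuming NumPy is available, not an example from the book): a model with as many parameters as data points fits the training noise almost perfectly, then predicts unseen points worse than a simple line.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(scale=0.2, size=10)  # true signal y = 2x, plus noise

line = np.polyfit(x, y, deg=1)    # simple model: 2 parameters
wiggle = np.polyfit(x, y, deg=9)  # 10 parameters for 10 points: fits the noise

x_new = np.linspace(0, 1, 101)    # unseen points
y_true = 2 * x_new                # the noiseless signal

err_line = np.mean((np.polyval(line, x_new) - y_true) ** 2)
err_wiggle = np.mean((np.polyval(wiggle, x_new) - y_true) ** 2)
print(err_line < err_wiggle)      # the simpler model generalizes better
```

The wiggly fit "looks good on paper" because its error at the training points is near zero; its error at points it never saw is what exposes it.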

Forecasts are often presented as certain, without confidence intervals to show the range of likely outcomes. Even when confidence intervals are given, they tend to be too narrow when compared to actual outcomes. This is especially true when the data itself is complex, noisy, and evolving. This is illustrated in the realm of economic predictions.

Models build upon purely statistical predictions by trying to draw some causal connections between the input and the output. A model can provide deeper insights into the predictions that it makes, such as generating ideas for how to prevent the spread of disease (the example domain in this chapter). However, it is important to understand the limitations of a model and the context it was designed for. A model which predicts how a disease will spread through a whole population on average does not give insight into how it will spread through communities within that population.

We can get better at predictions over time, and we can come to have beliefs that are more true over time. As long as we do not start from a position of absolute certainty that something is true or false, even false beliefs can eventually be corrected, as long as we are honest with ourselves about updating the weight we give our beliefs as we get new data. This is formalized via Bayes' formula, a fairly simple formula in which beliefs are updated based on the weighted probability that the observed data would occur if the belief were true and if it were false. This is explored via gambling.
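The updating rule the reviewer describes can be written out directly. A minimal sketch with invented numbers (a possibly-biased coin, not an example taken from the book):

```python
def bayes_update(prior, p_data_if_true, p_data_if_false):
    """Posterior probability that the belief is true, given one new observation."""
    numerator = prior * p_data_if_true
    denominator = numerator + (1 - prior) * p_data_if_false
    return numerator / denominator

# Prior belief: 10% chance the coin is biased to land heads 75% of the time.
belief = 0.10

# Observe one head: P(heads | biased) = 0.75, P(heads | fair) = 0.50.
belief = bayes_update(belief, 0.75, 0.50)
print(round(belief, 3))  # 0.143

# Each additional head nudges the belief upward; after ten heads in a row,
# the bias hypothesis is more likely than not.
for _ in range(9):
    belief = bayes_update(belief, 0.75, 0.50)
print(belief > 0.5)  # True
```

No single observation settles the question; the weight just shifts, which is exactly why starting from absolute certainty (prior of 0 or 1) leaves no room to be corrected.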

Computers have vast computational power which allows them to evaluate possibilities more quickly and more accurately than humans (and with less emotional attachment to certain possibilities). Humans are better at looking at problems holistically and picking out patterns. These complementary skills imply that it is unlikely that computers or humans alone will make the best judgment. Instead, they can supplement each other, with computer models informing human judgment and teaching humans to be better. This is illustrated in the domain of chess programs.

Some domains are highly probabilistic. These domains teach us that having the right process is more important than getting the right answer. In fact, in probabilistic domains, a forecaster is guaranteed to be wrong often, so being too results-oriented can prevent them from getting better. Instead, a good decision-making process will be what helps a forecaster do better in the long run. Another attribute of highly probabilistic domains is that often a few key skills can have a large impact. Further work gives a much smaller edge, but an edge which is critical in domains where better prediction skill determines success. This is illustrated with poker.

Prediction markets, including the stock market, are a way to try to predict the true probability of an outcome. Aggregate predictions are generally better than individual predictions. Even the best forecasters tend to only be sometimes better than the aggregate. This is captured in the efficient-market hypothesis: the idea that you cannot beat the market. However, a stronger variant of the efficient-market hypothesis is likely false. It is hard to beat markets, but that doesn't mean that they are always right. Aggregated predictions are still based on individual predictions, and when those are flawed, the aggregate will still be wrong (but less wrong). Common sources of inaccuracy in markets are overconfidence, herding (linking predictions that shouldn't be linked), and short-term incentives. One thing to note, however, is that some amount of noise is necessary for a market to work. If everyone made exactly the same predictions, there would not be a market -- there would be no one willing to take your bets.
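A tiny illustration (numbers invented) of why aggregates tend to beat individuals: when forecasters err in different directions, averaging partially cancels their independent errors.

```python
true_value = 0.60  # the "true" probability of some outcome

# Five forecasters, each off in a different direction:
forecasts = [0.45, 0.70, 0.52, 0.68, 0.58]

aggregate = sum(forecasts) / len(forecasts)          # 0.586
errors = [abs(f - true_value) for f in forecasts]    # 0.15, 0.10, 0.08, 0.08, 0.02
agg_error = abs(aggregate - true_value)

print(round(agg_error, 3))        # 0.014
print(agg_error < min(errors))    # True: here the aggregate beats everyone
```

This also shows the flip side noted above: if every forecaster shared the same bias (say, all guessing high), the cancellation would vanish and the aggregate would inherit the shared error, just somewhat less wrong than the worst individual.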

Sometimes, strong models back predictions. If these models are based in well understood principles (especially if they are based in the physical sciences), then they can provide ways to make predictions that are outside the realm of observed data. However, we need to be aware of the various sources of uncertainty: there is the uncertainty of the model itself, the uncertainty about initial conditions (the data fed into the model), and temporal uncertainty (predictions are generally more uncertain the further they are in the future, especially since we do not know what the future will hold). The strength of models and the importance of understanding the uncertainties they introduce is illustrated in the realm of climate change.

Often, the problem isn't not having a signal; it's not being able to find the signal among the many available signals. When this happens, it is easy to ignore important signals which do not fit into our preconceived notions of how the world can behave. This is especially true since we tend to assume that that which is unfamiliar is also improbable. We also tend to think that something is impossible if we've never seen it before. However, if something is a larger version of something we see regularly at common scales, we should never assume it is impossible. This is illustrated in the realm of terror attacks which, like earthquakes, tend to follow a power-law distribution.
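The power-law point can be sketched with a Gutenberg-Richter-style frequency law (the parameters below are illustrative, not fitted to real data): each unit of magnitude makes an event ten times rarer, but never impossible.

```python
def annual_rate(magnitude, a=8.0, b=1.0):
    """Expected events per year at or above this magnitude: log10(N) = a - b*M.
    The parameters a and b are illustrative, not fitted to real data."""
    return 10 ** (a - b * magnitude)

# Each step up in magnitude is 10x rarer under these parameters:
for m in (5, 6, 7, 8):
    print(m, annual_rate(m))  # 1000.0, 100.0, 10.0, 1.0 events/year

# A magnitude-9 event is rare (one per decade here) but not impossible,
# which is why "we've never seen it" is a bad reason to rule it out.
print(annual_rate(9))  # 0.1
```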

The key takeaways across all of these examples:
- Think probabilistically
- Clearly set your prior probabilities
- Make predictions and improve
- Realize that we tend to believe that we are better at predictions than we really are
  eri_kars | Jul 10, 2022 |
The first thing to note about The Signal and the Noise is that it is modest – not lacking in confidence or pointlessly self-effacing, but calm and honest about the limits to what the author or anyone else can know about what is going to happen next. Across a wide range of subjects about which people make professional predictions – the housing market, the stock market, elections, baseball, the weather, earthquakes, terrorist attacks – Silver argues for a sharper recognition of "the difference between what we know and what we think we know" and recommends a strategy for closing the gap.
Guardian, Ruth Scurr (Nov 9, 2012)
 
What Silver is doing here is playing the role of public statistician — bringing simple but powerful empirical methods to bear on a controversial policy question, and making the results accessible to anyone with a high-school level of numeracy. The exercise is not so different in spirit from the way public intellectuals like John Kenneth Galbraith once shaped discussions of economic policy and public figures like Walter Cronkite helped sway opinion on the Vietnam War. Except that their authority was based to varying degrees on their establishment credentials, whereas Silver’s derives from his data savvy in the age of the stats nerd.
New York Times, Noam Scheiber (Nov 2, 2012)
 
A friend who was a pioneer in the computer games business used to marvel at how her company handled its projections of costs and revenue. “We performed exhaustive calculations, analyses and revisions,” she would tell me. “And we somehow always ended with numbers that justified our hiring the people and producing the games we had wanted to all along.” Those forecasts rarely proved accurate, but as long as the games were reasonably profitable, she said, you’d keep your job and get to create more unfounded projections for the next endeavor.......
New York Times, Leonard Mlodinow (Oct 23, 2012)
 
In the course of this entertaining popularization of a subject that scares many people off, the signal of Silver’s own thesis tends to get a bit lost in the noise of storytelling. The asides and digressions are sometimes delightful, as in a chapter about the author’s brief adventures as a professional poker player, and sometimes annoying, as in some half-baked musings on the politics of climate change. But they distract from Silver’s core point: For all that modern technology has enhanced our computational abilities, there are still an awful lot of ways for predictions to go wrong thanks to bad incentives and bad methods.
Slate, Matthew Yglesias (Oct 5, 2012)
 
Mr. Silver reminds us that we live in an era of "Big Data," with "2.5 quintillion bytes" generated each day. But he strongly disagrees with the view that the sheer volume of data will make predicting easier. "Numbers don't speak for themselves," he notes. In fact, we imbue numbers with meaning, depending on our approach. We often find patterns that are simply random noise, and many of our predictions fail: "Unless we become aware of the biases we introduce, the returns to additional information may be minimal—or diminishing." The trick is to extract the correct signal from the noisy data. "The signal is the truth," Mr. Silver writes. "The noise is the distraction."
 

Other authors

Author name | Role | Type of author | Work? | Status
Nate Silver | | primary author | all editions | calculated
Chamberlain, Mike | Narrator | secondary author | some editions | confirmed
Dewey, Amanda | Designer | secondary author | some editions | confirmed
Dedication
To Mom and Dad

First words
Introduction: This is a book about information, technology, and scientific progress.
Chapter 1, A Catastrophic Failure of Prediction: It was October 23, 2008.

Book description
"Nate Silver's The Signal and the Noise is The Soul of a New Machine for the 21st century." —Rachel Maddow, author of Drift

Nate Silver built an innovative system for predicting baseball performance, predicted the 2008 election within a hair’s breadth, and became a national sensation as a blogger—all by the time he was thirty. He solidified his standing as the nation's foremost political forecaster with his near perfect prediction of the 2012 election. Silver is the founder and editor in chief of FiveThirtyEight.com.

Drawing on his own groundbreaking work, Silver examines the world of prediction, investigating how we can distinguish a true signal from a universe of noisy data. Most predictions fail, often at great cost to society, because most of us have a poor understanding of probability and uncertainty. Both experts and laypeople mistake more confident predictions for more accurate ones. But overconfidence is often the reason for failure. If our appreciation of uncertainty improves, our predictions can get better too. This is the “prediction paradox”: The more humility we have about our ability to make predictions, the more successful we can be in planning for the future.

In keeping with his own aim to seek truth from data, Silver visits the most successful forecasters in a range of areas, from hurricanes to baseball, from the poker table to the stock market, from Capitol Hill to the NBA. He explains and evaluates how these forecasters think and what bonds they share. What lies behind their success? Are they good—or just lucky? What patterns have they unraveled? And are their forecasts really right? He explores unanticipated commonalities and exposes unexpected juxtapositions. And sometimes, it is not so much how good a prediction is in an absolute sense that matters but how good it is relative to the competition. In other cases, prediction is still a very rudimentary—and dangerous—science.

Silver observes that the most accurate forecasters tend to have a superior command of probability, and they tend to be both humble and hardworking. They distinguish the predictable from the unpredictable, and they notice a thousand little details that lead them closer to the truth. Because of their appreciation of probability, they can distinguish the signal from the noise.

With everything from the health of the global economy to our ability to fight terrorism dependent on the quality of our predictions, Nate Silver’s insights are an essential read.
Rating

Average: 3.85
1: 6 · 1.5: 1 · 2: 33 · 2.5: 2 · 3: 124 · 3.5: 31 · 4: 243 · 4.5: 27 · 5: 126


Penguin Australia

2 editions of this book were published by Penguin Australia.

Editions: 0141975652, 1846147735

 
