Think Like a Bayesian

Common Misconceptions About Probability

Part of the problem with prediction and probability is that many people misunderstand the true nature of probability. The most common misconception is the gambler’s fallacy: when a coin is flipped and lands on heads several times in a row, it seems that the next flip is more likely to come up tails. In fact, each flip is completely independent of all previous flips, with the exact same 50/50 probability every time. The odds that a coin comes up heads 10, 20, 50 or 100 times in a row shrink to a vanishingly small probability, but the probability of each individual flip remains 50/50. This misconception manifests in many, more significant ways. Consider the way meteorologists, governments and homeowners assess the risk of “100-year” or “500-year” catastrophic events such as flooding, hurricanes or earthquakes. It is extremely common to see victims of these events telling journalists that the latest storm was “not supposed” to happen because it was a 100-year storm and a similar storm had passed through just five years prior. This is a probabilistic statement misinterpreted in linear terms, with devastating consequences. The correct way to interpret a “100-year” event is that each year carries roughly a 1-in-100 chance of such a storm, largely independent of what happens in any other year.
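To see how those independent 1-in-100 chances stack up, here is a minimal sketch in Python. It assumes, as above, that years are fully independent trials, which real storms only approximate:

```python
# Chance of at least one "100-year" event over a span of years,
# modeling each year as an independent 1-in-100 trial.
p_per_year = 1 / 100

for years in (5, 30, 100):
    p_at_least_one = 1 - (1 - p_per_year) ** years
    print(f"at least one event in {years} years: {p_at_least_one:.1%}")
# ~4.9% over 5 years, ~26.0% over 30 years, ~63.4% over 100 years
```

Even over the nominal 100-year horizon the event is more likely than not, but far from guaranteed, and two such storms five years apart is surprising yet entirely consistent with the math.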

Probability is perhaps the most common calculation performed by the human mind. Made thousands of times a day, consciously or unconsciously, probability estimates are the mechanism of decision making. Baseball players, executives, parents and Uber drivers are constantly making probabilistic decisions, whether they are playing in or out, juggling their kids’ commitments or trying to maximize earnings relative to passenger pickups. The problem with probability is that humans are prone to biases of all kinds.

Unconscious estimates of probability often rely on unrepresentative samples, assign outsized risk to extraordinarily low-probability events (think shark attacks, terrorism, volcanic eruptions) and fall prey to myriad other distortions. People often complain about inaccuracies in weather reporting, when in fact meteorology is a triumph of probabilistic accuracy. Modern weather forecasting can predict daily temperatures with extraordinary accuracy, and successful projection of the trajectories of tropical storms and hurricanes has saved tens of thousands of lives. Compare these successes to earthquake prediction or economic prediction, which don’t even come close. Probability theorists and statisticians spend their days building scientific and mathematical tools to measure probabilities accurately and to make accurate predictions. One of the most useful of these tools was developed by an English minister named Thomas Bayes more than 250 years ago and published shortly after his death.

Thomas Bayes was born in the early 1700s and came to be associated with a branch of statistics called “Bayesian probability” or “Bayesian statistics.” What Bayes did that was so revolutionary was to create a simple algebraic theorem that can be used to estimate the probability of nearly any future outcome from just three quantities: the prior probability of the hypothesis, the probability of observing the new evidence if the hypothesis is true, and the probability of observing it if the hypothesis is false. The power of Bayesian probability is that this calculation can be made again and again, each time folding in new information to revise the prediction. Early in a Bayesian analysis, the three quantities are essentially guesses. But because the theorem builds each estimate on the prior probability, those guesses are continuously revised, each pass starting from more informed, more accurate values. The output of the formula is a simple probability, also called a Bayesian inference. Over time, the accuracy of any Bayesian inference will approach, but never quite reach, 100 percent.
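Here is a minimal sketch of that updating loop in Python. The starting prior of 0.01 and the 0.9/0.1 likelihoods are assumptions chosen for illustration, not values from any particular problem:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """One application of Bayes' theorem: returns P(hypothesis | evidence)."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence

# Start from a rough guess and revise it as each new observation arrives.
posterior = 0.01  # initial prior: essentially a guess
for i in range(5):  # five consecutive supporting observations
    posterior = bayes_update(posterior, p_e_given_h=0.9, p_e_given_not_h=0.1)
    print(f"after observation {i + 1}: {posterior:.4f}")
# The posterior climbs toward 1.0 with each pass but never reaches it.
```

Each posterior becomes the prior for the next pass, which is exactly the again-and-again revision described above, and the printed values climb toward 1.0 without ever reaching it.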

“Folk numeracy is our natural tendency to misperceive and miscalculate probabilities, to think anecdotally instead of statistically, and to focus on and remember short-term trends and small-number runs. We notice a short stretch of cool days and ignore the long-term global-warming trend. We note with consternation the recent downturn in the housing and stock markets, forgetting the half-century upward-pointing trend line. Sawtooth data trend lines, in fact, are exemplary of folk numeracy: our senses are geared to focus on each tooth’s up or down angle, whereas the overall direction of the blade is nearly invisible to us.” (Michael Shermer, Scientific American)

Why is all of this so important? Nate Silver wrote in “The Signal and the Noise” that “[w]e face danger whenever information growth outpaces our understanding of how to process it.” There is such a thing as too much data, too much information, too much “noise.” We see security and fraud prevention teams faced with this challenge every day. Everything is alerting; everything is a potential threat. How do you make a “reasonable” judgment about what to look at first, about which threat is the most likely and the most credible?
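One way to frame that triage question in Bayesian terms is to rank alerts by posterior probability rather than by arrival order. The sketch below is purely illustrative: the base rate, signal names and likelihood ratios are invented for the example, and it assumes the signals are conditionally independent:

```python
# Hypothetical sketch: rank alerts by posterior probability of being a real
# incident. All numbers below are invented for illustration.
BASE_RATE = 0.02  # prior: fraction of past alerts that proved malicious

# Likelihood ratios: how much more often each signal appears on true
# incidents than on benign noise (assumed values).
SIGNAL_LR = {"new_device": 3.0, "geo_mismatch": 5.0, "velocity_spike": 8.0}

def posterior(signals):
    odds = BASE_RATE / (1 - BASE_RATE)  # convert the prior to odds
    for s in signals:
        odds *= SIGNAL_LR.get(s, 1.0)   # Bayes' rule in odds form
    return odds / (1 + odds)            # convert back to a probability

alerts = [
    ("alert-1", ["new_device"]),
    ("alert-2", ["geo_mismatch", "velocity_spike"]),
    ("alert-3", ["new_device", "geo_mismatch", "velocity_spike"]),
]
# Work the queue from most to least credible, not first-in-first-out.
for name, sigs in sorted(alerts, key=lambda a: posterior(a[1]), reverse=True):
    print(name, f"{posterior(sigs):.1%}")
```

The point is not these particular numbers but the framing: a prior (the historical base rate) revised by evidence (the signals on each alert) yields an ordering of credibility instead of an undifferentiated wall of alerts.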

We have many of the tools we need to make sense of the massive amounts of data available to us and to make objective decisions. But in many cases we attempt to leverage probabilistic tools within deterministic frameworks, and teams end up forced to make decisions based on human biases rather than true probability. As it applies to fraud prevention, we are hard at work designing and integrating new probabilistic tools into our technology. We feel strongly that these techniques will blunt the “danger” Mr. Silver writes about: too many events, too much data, too much “noise.” We plan to revisit this topic in a series of future posts.

 
