Tom Breur

21 October 2018

It has long been known that the general public is sometimes remarkably out of tune with math and numbers. In 1988 mathematician John Allen Paulos wrote the classic “Innumeracy”, which is chock-full of striking examples of misinterpreted numeric evidence. Paulos refers to innumeracy as “… inability to deal comfortably with the fundamental notions of number and chance …” Personally, I consider it the mathematical equivalent of illiteracy. Another classic from Paulos is “A Mathematician Reads the Newspaper” (1995), which contains a lot of satire debunking ridiculous claims in the press. It highlights even more spectacular examples of innumeracy.

Paulos illustrates innumeracy with lighthearted anecdotes and many common, everyday scenarios. These examples highlight how readers might be fooled by misleading quantitative evidence. His examples span diverse topics like probability and coincidence, misguessing extremely small or very large numbers, pseudoscience and superstition, and how all of these interact with psychological bias. Taken from real life, Paulos’ brain teasers illustrate the guesses people commonly make, and why so many people’s estimates can be way off. The mathematical explanations of these biases are particularly enlightening, and help you avoid making similar mistakes yourself. Anything but tedious and dry; instead, inspiring and witty. I highly recommend it to anyone, regardless of whether you’re interested in statistics or not.

As Paulos shows, coincidences turn out to be much more common than intuition would lead you to believe. One day I was walking down the Rue du Faubourg Saint-Honoré in Paris when I bumped into an old friend of my daughter (from the Netherlands). I mean: what are the odds?!? How remarkable, and what a pleasure. But what if I told you that, while we were catching up, she mentioned she had chosen a career as a fashion model? Does that make our meeting less unlikely? Equally unlikely? And more importantly, why? These are the kinds of questions Paulos goes into, and that he explains with delightful clarity. [The answer: if you accept Paris as a fashion hub, this additional piece of information makes our chance encounter far *less* unlikely.]

For many students, Statistics is a notorious hurdle on their way to graduation. As a friend of mine jokingly remarked: “I took Stats three times in college, and *not* because I liked it so much!” It’s not often that people flaunt their lack of skills, but innumeracy seems the exception to that rule. As if the two were contradictory, people may refer to themselves as a “people person instead of a numbers person”, seemingly justifying their inability to grasp numbers. Yet anybody can learn to master “numbers” to a surprisingly high degree: it is a skill like any other, and you can conquer it if you choose to practice.

Many supposed “math” problems are really a matter of “framing” the problem correctly. Problem analysis, surfacing hypotheses, and determining how best to confront your data with “micro theories” is part art, part science. The “art” relies heavily on soft skills, which really should have been labelled the *hard* skills, because that is what they are, as my esteemed colleague Michael Mahlberg has pointed out. The “science” requires a thorough and broad grounding in quantitative methods. When a remarkable claim makes you wonder “Hmmm, that’s odd!”, it is very convenient to have the knowledge at hand to quickly confront that claim with the appropriate statistical tests. If you hear “sales are slow this quarter”, wouldn’t you want to know whether that is something to really worry about, or whether it might be random fluctuation that could occur with, say, 30% likelihood? That is where a firm grounding in statistics and methodology pays off in spades.
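To make that “is this dip worth worrying about?” question concrete, here is a minimal back-of-the-envelope sketch. It assumes quarterly sales are roughly normally distributed around their historical mean; all the numbers (mean, standard deviation, this quarter’s figure) are hypothetical:

```python
import math

def prob_at_least_this_low(observed, hist_mean, hist_sd):
    """How often would a quarter at least this far below the historical
    mean occur by chance, under a normal model of quarterly sales?"""
    z = (observed - hist_mean) / hist_sd
    # standard normal CDF expressed via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical numbers: sales average 100 units/quarter with a standard
# deviation of 12, and this quarter came in at 94.
p = prob_at_least_this_low(94, 100, 12)
print(f"Chance of a dip this large by luck alone: {p:.0%}")
```

With these made-up numbers a dip to 94 would occur by sheer luck about 31% of the time, i.e. roughly the kind of “30% likelihood” fluctuation that is no cause for alarm.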

One of the reasons statistics may be so challenging for many people is that the concept of “probability” is, historically speaking, fairly new. In his seminal book “Against the Gods” (1996), Peter L. Bernstein (1919-2009) points out how this plays into lay people’s inability to reason about odds and risk. In his bestseller, Bernstein (an American financial historian) gives a remarkable account of the historical developments that gave rise to our contemporary notions of probability and risk. It was only after our modern civilization managed to understand and control risk that business endeavors could grow to the scale we have seen in the 20th century.

For instance, it is commonly understood that the Netherlands grew to international dominance (their “Golden Age”, in the 17th century) because they “invented” a precursor to what we would nowadays call the Limited Liability Corporation (LLC): the Vereenigde Oostindische Compagnie (VOC). This enabled Amsterdam to gain dominance as an international trading powerhouse. The practice of “sharing” risk enabled trading ventures at an unprecedented scale, while at the same time managing the downside risk in case of an unfortunate loss of cargo and lives. It was an early example of spreading calculated risk across a business portfolio. From the same era stems the Tulip mania, one of the first documented market crashes. This episode brought home that risk can be controlled (to some extent), and that insight into financial dynamics and risk is of eminent business value for survival.

It has always puzzled me how Blaise Pascal (1623-1662), an eminent mathematician, could have struggled so badly with odds and probabilities for problems we would nowadays consider “simple.” He was far ahead of his time, yet seemingly straightforward probability calculations eluded him. One of the problems Pascal worked on, brought to him by his friend the Chevalier de Méré, was determining how the odds change in a game that was played at the time: toss heads or tails up to seven times, and whoever first gets to four wins. Now suppose one player has already won the first toss: how does this affect his probability of getting to four first? Together with Pierre de Fermat, Pascal’s musings laid the groundwork for contemporary probability theory.
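The question that stumped Pascal (the “problem of points”) is easy to check by computer today. A minimal recursive sketch, assuming a fair coin:

```python
def win_prob(needed_a, needed_b, p=0.5):
    """Probability that player A collects `needed_a` more wins before
    player B collects `needed_b` more, with per-toss win chance p."""
    if needed_a == 0:   # A has already won
        return 1.0
    if needed_b == 0:   # B has already won
        return 0.0
    # condition on the next toss: A wins it with probability p
    return p * win_prob(needed_a - 1, needed_b, p) \
        + (1 - p) * win_prob(needed_a, needed_b - 1, p)

# First to four wins; A took the opening toss, so A needs 3 more, B needs 4.
print(win_prob(3, 4))  # 0.65625, i.e. 21/32
```

So winning the opening toss lifts the leader’s chance of getting to four first from 1/2 to 21/32, about 66%.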

Another great title is “Simple Heuristics That Make Us Smart” (1999) by Gerd Gigerenzer and Peter Todd. Because the authors recognize that a large part of the challenge with numbers and statistics lies in working with *odds*, they provide some great ‘tools’ to help us think through statistical problems. A spectacular and super helpful example is calculating Bayesian odds, which I will try to explain in the following paragraphs. Many people struggle with these calculations, at least at first. The trick I picked up from Gigerenzer & Todd is that when you ‘convert’ such problems from odds to natural frequencies (“raw counts”), seemingly vexing problems become eminently tractable. A keeper!

Let’s try an example of Bayesian inference in Gigerenzer & Todd’s style. Suppose you aren’t feeling very well, you decide to go see your doctor, and he administers a test that is known to be 95% accurate. Unfortunately for you, the test comes out positive (medics are cryptic sometimes…). How worried should you be? What are the odds that you suffer from this condition? Many people would say “95%!”, but the real answer is: “it depends.” It depends on the prevalence of the disease in the population, and on whether there are any other aspects that associate you with the condition. Forget the second part for now; let’s focus on disease prevalence. Why does that matter? And can you determine by how much?

This is an applied problem of Bayesian inference. If this disease occurs once in every thousand citizens, you should be more worried than if it occurs only once in a million. Why? Let’s do the math. We’ll use Gigerenzer & Todd’s approach of calculating with natural frequencies, but the interested reader can do the exact same calculation with odds using Bayes’ Theorem. If in a city of 20,000 people there are 20 people who carry your disease (1:1,000 prevalence), then of those 20 you would expect 19 positive tests and 1 negative. After all, the test is 95% accurate. However, for the remaining 19,980 citizens you would expect the same test accuracy. Therefore 999 (5%) would test positive, and 18,981 would test negative. So in this city, given these population prevalence odds (!), there will be 19+999=1,018 positive test outcomes, and the other 18,982 people would test negative. But *if* you were a random draw (hence the importance of “other aspects associated with this condition”), a positive test still “only” gives you 19/1,018 ≈ 1.9% odds of being ill. Considerably lower than the 95% many people would have guessed!
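The natural-frequency bookkeeping above translates almost line for line into code. A small sketch of the same calculation (the function name is mine, not Gigerenzer & Todd’s):

```python
def positive_predictive_value(population, prevalence, accuracy):
    """Of everyone who tests positive, what fraction actually has the
    disease? Computed with natural frequencies (raw counts)."""
    sick = population * prevalence
    healthy = population - sick
    true_pos = sick * accuracy             # sick people correctly flagged
    false_pos = healthy * (1 - accuracy)   # healthy people wrongly flagged
    return true_pos / (true_pos + false_pos)

# The example from the text: 20,000 citizens, 1-in-1,000 prevalence,
# and a 95% accurate test.
ppv = positive_predictive_value(20_000, 1 / 1000, 0.95)
print(f"{ppv:.1%}")  # 1.9%
```

Try rerunning it with a 1-in-a-million prevalence: the fraction of true positives among all positives drops even further, which is exactly why the base rate matters so much.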

In his seminal book “Statistics as Principled Argument” (1995), Abelson argues that the challenge of statistics lies not so much in the computational mechanics as in the deeper understanding of which methods to use and why, and in knowing how to interpret computer output. He cites a famous study (D.H. Atlas, 1978) that reported 73.4 years as the life expectancy of famous orchestral conductors. In isolation this means nothing; the meaning of that number arises from *context*, in this case the appropriate group to compare it to. Would that be the “average” for males, for non-famous conductors, or for orchestral *players*? Without the “right” reference group, the number 73.4 is in itself meaningless. Abelson also argues that people are often intimidated by statistics out of fear that propagandists will use any number they need to make their (invalid) point. That concern is similar in many ways to Huff’s classic “How to Lie with Statistics”, although you can hardly blame “statistics” for that, any more than you can blame *the English language* when someone chooses to lie…
