This is one of the best books I've read in a long time and I'm sure I'll be referring to it for many years. It appeals to my natural skepticism, my sense that experts are often wrong and that you shouldn't believe so-called authorities just because they're in charge. The author, Nassim Nicholas Taleb, is a Wharton grad (though a few years before my time).
The basic idea is that our observations fall into one of two different worlds. The first he calls Mediocristan: the terrain of the ordinary, the part of the world that conforms to the bell curve. It answers to statistics and knowable probabilities.
The other, Extremistan, with its power-law statistics, is quite different. Power laws are scale-free: the pattern looks the same no matter how far you zoom in or out. The super-wealthy, for example, are orders of magnitude richer than the rest of us, and even among themselves there are orders-of-magnitude differences in wealth. This is far different from the Mediocristan world of, say, height or weight.
The trouble is that the modern world is Extremistan, not the Mediocristan that we humans evolved to understand. In Extremistan, the long tail of observations is thick with outliers, and the seemingly wildly unlikely event is far more common than our experience with Mediocristan would suggest. Using Gaussian techniques in a non-Gaussian world, or equilibrium techniques in an (unknowingly) non-equilibrium world, will lead you to make errors.
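To make the contrast concrete, here's a small simulation of my own (an illustration, not something from the book): draw 100,000 "heights" from a bell curve and 100,000 "fortunes" from a power law, then ask how much of the total the single largest observation accounts for. The specific parameters (mean 170 cm, Pareto shape 1.1) are arbitrary choices for the sketch.

```python
import random

random.seed(42)

def top_share(samples):
    """Fraction of the total contributed by the single largest observation."""
    return max(samples) / sum(samples)

N = 100_000

# Mediocristan: adult heights in cm, roughly Gaussian.
heights = [random.gauss(170, 10) for _ in range(N)]

# Extremistan: wealth, drawn from a fat-tailed Pareto (power-law) distribution.
wealth = [random.paretovariate(1.1) for _ in range(N)]

print("tallest person's share of total height:", top_share(heights))
print("richest person's share of total wealth:", top_share(wealth))
```

In the Gaussian sample the biggest observation is a rounding error on the total; in the power-law sample a single individual can account for a meaningful slice of everything. That asymmetry is exactly why averages and standard deviations mislead in Extremistan.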
In Extremistan ... systems are chaotic, having many variables and/or high degrees of interdependence. Participants' success is determined by cumulative advantage, and variables change in geometric and/or exponential progression. Uncertainty in these domains often entails "unknown unknowns." Anyone called an "expert" here is largely a good bluffer or rhetorician, and little better at prediction than a computer model extrapolating from single-point, just-prior performance. Examples include stockbrokers, clinical psychologists, psychiatrists, college admissions officers, court judges, and personnel selectors.
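The "cumulative advantage" mechanism is easy to simulate. Below is a hypothetical rich-get-richer sketch of my own (a Pólya-urn setup, not from the book): fifty players start with identical holdings, and each new unit of reward goes to a player with probability proportional to what that player already has.

```python
import random

random.seed(7)

# Rich-get-richer: each unit of reward lands on a player with
# probability proportional to that player's current holdings.
players = [1] * 50                       # everyone starts equal
for _ in range(10_000):
    winner = random.choices(range(50), weights=players)[0]
    players[winner] += 1

players.sort(reverse=True)
total = sum(players)
print("top player's share:", round(players[0] / total, 3))
print("bottom player's share:", round(players[-1] / total, 5))
```

Identical starting "talent," wildly unequal outcomes: early luck compounds. That is Extremistan's signature, and it is why ranking the winners after the fact tells you little about their skill.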
Humans are good at working with Mediocristan systems. Extremistan, on the other hand, confounds us. The difficulty arises from several psychological factors.
- Confirmation bias: People seek largely to confirm what they already believe, i.e., to confirm their model rather than refute it.
- Silent evidence: Even when looking at the facts, we must also account for the facts that never were but might have been, the failures that left no record.
- Narrative fallacy: People prefer stories over data, even if the story version is misleading or wrong. This is because stories are easier to store and recall.
- Attraction to platonic simplicity: People prefer reduced, simple explanations, though reality is rarely so tidy.
- Ludic fallacy: People mistake the (predictable, constrained) model for the real thing, and very often make plans in the world as if it were that simple model.
The most serious effect of our ineptitude with Extremistan is our inability to make predictions in these systems. In such cases, we are subject to being completely caught unawares by factors outside our expectations and models. He calls such surprises "Black Swans."
Much of the above summary is my paraphrase of an excellent online review, which I post here to help me remember the main points later. But what do I really think?
Well, I apply the same skepticism to his own arguments, and there my disappointment is that he cannot accept that there are circumstances where statistical reasoning is useful. For example, he explains Microsoft's success over Apple as a lucky accident, but a careful reader of the history will understand that ease of use is not the only way to win in computer operating systems. In other words, from the outside it may appear that the market picked the "wrong" one thanks to bad luck, but in reality there was much less "luck" than it appears: Microsoft "won" by being best at the overall set of things that matter in operating systems.
I wonder if he's read much of the literature on path dependence: how some ideas, once they catch on, are hard to dislodge and sometimes lock in inferior long-term outcomes. People cite QWERTY as the best-known example, or VHS vs. Beta -- how supposedly "better" technologies can lose because once something gets started the cost of switching is too great. But the path-dependence idea has been well refuted, I think, by Liebowitz and Margolis, who showed that QWERTY in fact stays with us because it's actually pretty good, and that Beta's quality wasn't the kind of advantage Sony's PR machine wants you to believe. In other words, more often than not, competition among many ideas does bring the best one to the top.
Still, the bottom line is that Taleb is more right than he is wrong, in spite of a few incorrect digressions (like his silly made-up example of Yevgenia Nikolayevna Krasnova). Get ready for me to start quoting this idea regularly.