Sep/Oct 2008

Interview: Nassim Nicholas Taleb

Nassim Nicholas Taleb has had a runaway success with The Black Swan, a book about surprise runaway successes. Constantine Sandis talks with him about knowledge and scepticism.

CS: “The fox knows many things, but the hedgehog knows one big thing,” the Greek poet Archilochus once wrote. Isaiah Berlin famously used this saying to introduce a distinction between two different kinds of thinkers: those who “pursue many ends, often unrelated and even contradictory,” and those who relate everything to a “single central vision… a single, universal, organizing principle in terms of which alone all that they are and say has significance.”

Nassim, I think you would see yourself as an intellectual hedgehog. I have heard you describe your big idea, which you have had from childhood, as the view that the more improbable an outcome is, the higher its impact (and vice versa). Many controversial corollaries follow from this one claim, and you have recorded some of the most important in numerous essays, as well as in your books Fooled by Randomness and The Black Swan. These thoughts range from remarks on the perils of socioeconomic forecasting, to views on epistemology, agnosticism, explanation and understanding, the dynamics of historical events, and, perhaps most importantly, advice on where to live and which parties to attend. As I find myself in the awkward position of wishing to endorse almost all of the corollaries while fiercely disagreeing with the big idea from which they stem, I would like to ask you whether you really think that all improbable events have a high impact, and that all high-impact events are improbable, or whether this is simply a misleading way of describing your insight? Are you happy to allow for counterexamples?

NNT: My core idea is about the effect of non-observables in real life.
My focus is on the errors which result: how the way we should act is affected by things we canʼt observe, and how we can make decisions when we donʼt have all the relevant information. My idea concerns decision-making under conditions of uncertainty, dealing with incomplete information, and living in a world that has a more complicated ecology than we tend to think. This question about justifiable responses to the unknown goes way beyond the conventionally-phrased ʻproblem of inductionʼ and what people call ʻscepticismʼ. These classifications are a little too tidy for real life. Alas, the texture of real life is more sophisticated and more demanding than analytical philosophy can apparently handle.

The point is to avoid ʻbeing the turkeyʼ. [There is a philosophical parable about a turkey who, on the basis of daily observations, concludes that heʼs always fed at 9am. On Christmas Eve he discovers this was an overhasty generalisation – Ed.] To do so you have to stand some concepts on their head – like the concept of the use of beliefs in decision-making. Let me explain this.

First, when you are making a decision, you donʼt act on your ʻraw beliefsʼ about the situation in question, or on what probabilists call ʻdegrees of beliefʼ. Instead you base your decision on something that involves possible consequences as you see them. For instance, you can believe with a high degree of certainty, say 99%, that a plant is not poisonous, yet refuse to eat it; or that the plane wonʼt crash – yet refuse to ride in it. Yet you may accept a bet with a mere 0.1% degree of belief that you will win. Perception of impact, that is, of consequences, is everything in decision-making – not likelihood of truth. I insist, likelihood of truth is too crude a notion, rarely usable. And potential impact is vastly easier to figure out than the probability of an outcome, for mathematical reasons related to sample insufficiency.
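The plant-and-bet example can be put into toy numbers. A minimal sketch, with all payoffs and probabilities invented purely for illustration, of how weighing consequences can reverse what raw degrees of belief suggest:

```python
def expected_outcome(p_good, payoff_good, payoff_bad):
    """Probability-weighted outcome of taking an action."""
    return p_good * payoff_good + (1 - p_good) * payoff_bad

# 99% degree of belief that the plant is safe, but the 1% case is
# catastrophic (an arbitrary large negative payoff standing in for death).
eat_plant = expected_outcome(0.99, payoff_good=1, payoff_bad=-1_000_000)

# Only a 0.1% degree of belief in winning the bet, but the stake is
# tiny and the prize large (again, invented numbers).
take_bet = expected_outcome(0.001, payoff_good=10_000, payoff_bad=-1)

print(eat_plant)  # negative: decline, despite 99% confidence in safety
print(take_bet)   # positive: accept, despite 0.1% confidence in winning
```

The point is not that real decisions reduce to expected values (Talebʼs argument is broader than that), only that even this crude arithmetic shows consequence dominating likelihood.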
So I focus only on the high-impact domains I call ʻExtremistanʼ, in which a single event can be massively consequential. Beware beliefs. Although you may not believe with any conviction that you are in Extremistan, you might still act as if you were there, if the consequences of an event could be too large. I will repeat until I die: beliefs have little to do with what we do in real life.

The problem of induction also deals with non-observables. It asks: how can you generalise from particular observations of the world to theories that you can use to make predictions? But having spent my life taking decisions in a highly random environment – I was a Wall Street trader – I felt alienated from modern philosophical treatments of induction, which I dub ʻMickey Mouseʼ owing to their highly naïve attributes.

The typical formulation of the problem of induction is attributed to Hume, although incidentally, it is much, much older. Hume got it from Huet, Foucher, Bayle, and a collection of sophisticated and forgotten French thinkers, who got it wholesale from earlier sources, including Algazel. Hume said that induction presupposes that nature behaves in a uniform fashion, but that this belief has no defence in reason – it just reflects our mental habits resulting from our experiences so far. But having reached this sceptical position, Hume started thinking of the problem as ridiculous. He left it in the philosophical cabinet, as having nothing to do with real life, making it something ʻacademicʼ, in the bad sense of the word.

Modern philosophers call such ivory-tower theorising ʻthe problem of insulationʼ – in The Black Swan I present this as the problem of ʻdomain dependenceʼ. The problem is that what academics do in front of a blackboard has little bearing on what they do in real life. And, I insist, it is real life that matters to me: Iʼm interested in the ecology of uncertainty, not induction and deduction.
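The turkeyʼs inductive trap can be made concrete. As one textbook formalisation of inductive confidence (a standard device, not Talebʼs own), Laplaceʼs rule of succession estimates the probability of being fed tomorrow, after n consecutive feedings, as (n + 1) / (n + 2):

```python
def turkey_confidence(days_fed):
    """Laplace's rule of succession: P(fed tomorrow | fed n days running)."""
    return (days_fed + 1) / (days_fed + 2)

for day in (1, 10, 100, 1000):
    print(day, round(turkey_confidence(day), 4))

# Confidence climbs towards 1 with every uneventful day, so the turkey's
# statistical certainty peaks precisely on Christmas Eve: the model has
# no term for a regime it has never observed.
```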
However, a certain class of sceptics took the problem of uncertainty into vastly more operational territory. They were the medical doctors of the Empirical sect, to which Sextus Empiricus was supposed to belong. For them scepticism was a very practical problem, unrelated to sterile Humean scepticism, and also unrelated to the Pyrrhonians, who took their scepticism to absurd extremes.

Finally, let me answer your question. I have two points of disagreement with you. Firstly, I do not consider myself a hedgehog, but a fox: I warn against focusing (ʻanchoringʼ) on a single possible rare event. Rather, be prepared for the fact that the next large surprise, technological or historical, will not resemble what you have in mind (big surprises are what some people call ʻunknown unknownsʼ). In other words, learn to be abstract, and think in second-order effects rather than being anecdotal – which I show to be against human nature.

Secondly, and crucially, rare events in Extremistan are more consequential by their very nature: the once-every-hundred-years flood is more damaging, and less frequent, than the ten-year one. You have fewer billionaires than millionaires. You get far fewer wars that kill more than 20 million people than wars that kill a few thousand. There are far fewer bestselling authors than authors. So, empirically, the rate of occurrence of events tends to decline with their impact.
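The closing empirical claim, that frequency declines with impact, is the signature of heavy-tailed distributions. A minimal sketch using a Pareto distribution, where the exponent and thresholds are illustrative choices rather than values fitted to any real data:

```python
import random

random.seed(0)

# Draw a million wealth-like values from a Pareto distribution with
# tail exponent alpha = 1.5 (an illustrative choice, not fitted data).
alpha, n = 1.5, 1_000_000
samples = [random.paretovariate(alpha) for _ in range(n)]

over_10 = sum(x > 10 for x in samples)       # the 'millionaires'
over_1000 = sum(x > 1000 for x in samples)   # the 'billionaires'

print(over_10, over_1000)
# The higher threshold is crossed orders of magnitude less often, yet a
# single crossing of it dwarfs any crossing of the lower one.
```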