Luck vs. Skill Mental Model (Incl Path-Dependency, Process vs. Outcome, Sample Size, Averages, and Why Nihilism Is Dumb, Not Intellectual)

If this is your first time reading, please check out the overview for Poor Ash’s Almanack, a free, vertically-integrated resource including a latticework of mental models, reviews/notes/analysis on books, guided learning journeys, and more.

Luck vs. Skill Mental Model: Executive Summary

If you only have three minutes, this introductory section will get you up to speed on the luck mental model, as well as the sub-models of path-dependency, sample size / averages, process vs. outcome, and the stupidity of nihilism/determinism masquerading as intellectualism.

DEADPOOL: "Luck's certainly not very cinematic!" DOMINO: "Yes it is."

The concept in one sentence: the impact of luck – also known as “randomness” in some contexts – can be difficult to separate from skill.

Key takeaways/applications: many people have trouble taking a probabilistic approach to luck, either underattributing its role and importance in the world (as with many businesspeople), or overattributing its role and importance in the world (as with many anti-businesspeople, and some prominent nihilists).  Understanding how luck works – and doesn’t – allows us to better interpret data and make more rational decisions.

Three brief examples of luck:

Sanford who?  You know who the late Aubrey McClendon was, right?  So why don’t you know the name of Sanford Dvorin? Because McClendon got lucky and Dvorin didn’t.  Thanks to path-dependency, luck can add up to have massive impacts over time: it’s why Evan Spiegel is a billionaire and many talented entrepreneurs with ideas far superior to Snapchat are broke.  

Dvorin had the crazy – but right – idea that there might be oil and gas reserves under suburban Coppell, just north of the Dallas-Ft. Worth International Airport, a stone’s throw from where I live.  

He turned out to be right, but he was just a few years too early, and couldn’t scrape together enough funding to keep going – so he was forced out of the business just before the going got good.  In fact, as Gregory Zuckerman does a wonderful job of expounding on in The Frackers (Frk review + notes), poor Sanford Dvorin at one point:

“Had leased five thousand acres in the Barnett at an average of $50/acre.  Less than a decade later, the same acreage would sell for $22,000 an acre, or $110 million.”  

That’s a good chunk of, as Buffett might call it, “walking-around money.”  Sadly, Dvorin never saw a dime.  There’s plenty more exploration of luck in “The Frackers.”

What do you get when you average Bill Gates and a homeless camp?  When separating skill and luck, it’s important to understand the importance of sample size and the peril of averages: as we’ll explore, in heterogeneous populations where there are different “clusters” of subgroups, taking a group-wide average can hinder rather than accelerate learning.  This is also one of the reasons that some bestselling, supposedly “intellectual” authors are totally and completely wrong and not worth listening to – as we’ll explore.

I freeclimbed a probably-dangerous cliff and didn’t die, but it made for a great photo op so I’ll do it again!  This is a dumb thing I actually did years ago (pictured at right).  It’s also completely the wrong way to think about things. Although this concept is touched on in the probabilistic thinking mental model as well, it’s critically important for doctors, businesspeople, and parents to avoid the faulty “all’s well that ends well” mentality, and evaluate our decisions based on their likely outcome over many iterations, rather than over one single iteration, which can be disproportionately influenced by luck.

If this sounds interesting/applicable in your life, keep reading for unexpected applications and a deeper understanding of how this interacts with other mental models in the latticework.

However, if this doesn’t sound like something you need to learn right now, no worries!  There’s plenty of other content on Poor Ash’s Almanack that might suit your needs.  Instead, consider checking out our learning journeys, our discussion of the inversion, schema, or margin of safety mental models, or our reviews of great books like “Deadly Choices” (VAX review + notes), “Pour Your Heart Into It” (PYH review + notes), or “The Pleasure of Finding Things Out” (PFTO review + notes).

Luck Mental Model: A Deeper Look

“All I ever wanted was:
for us to beat the odds.  

I thought we were lucky ones,
but all your luck is gone.”

– Yellowcard, “A Vicious Kind”

(from “Southern Air”)

Luck is a particularly challenging topic for many people to understand because it’s very personal: we’ve all been in situations where we got an outcome we didn’t feel we deserved – where we did everything right and yet ended up with a result that can only be described as “shitty” – in which case, knowing that it was just bad luck can be comforting.

On the other hand, most of us have things we’re proud of in our lives: our careers, our families, our athletic accomplishments – and it can feel pretty demeaning, emasculating even, to be told that some of that may be due to good fortune rather than our smarts, effort, and dashing good looks.

Of course, taking a fundamental attribution error approach – “good outcome, it’s skill; bad outcome, it’s luck” – is a profoundly nonsensical and irrational way to approach the world.  What we’ll aim to do here is summarize some of the important take-homes on luck from a variety of disparate sources.

Michael Mauboussin’s Two-Jar Model Of Luck and Skill

The best conceptualization of skill and luck that I’ve ever seen comes from Michael Mauboussin’s The Success Equation (TSE review + notes), which aims to elucidate the role of luck and skill in areas as diverse as sports and business.

One of the key takeaways from Mauboussin’s wonderful book is the “two-jar” model of luck and skill.  I’m not going to do a great job of summarizing it, and you’re welcome to skip this section if you’ve read his book already (and if you haven’t, stop here and order it now before reading on.)

For mental simplicity, assume that your “outcome” is the sum of numbers on two balls drawn from jars labeled “skill” and “luck.”

In our example, let’s say our “skill” is constant – perhaps a 5.  Let’s further say that it’s a low-luck activity – like playing a piece on the piano.  There aren’t a lot of exogenous factors that cause variation: the keys are always in the same place; the temperature of the room and comfort of the piano bench are almost always the same, etc.

Ask Mauboussin about how a trash can got him his job.

So Mauboussin’s “luck” jar probably contains balls with pretty small numbers: say, -1 (negative one) to +1 (positive one).  Our total outcomes are thus either 4, 5, or 6 for an individual drawing.  Given a large enough sample size (which we’ll get to), our results should average out to our skill over time.

On the other hand, imagine another activity where luck plays a much bigger role – for example, a card game where we could start with a great hand or a terrible hand.  In this case, maybe the luck draws exceed the skill draws: from -10 (negative 10) to +10 (positive 10).  So our actual results could be anywhere from -5 (negative 5) to +15 (positive 15).  Again, given a large enough sample size, results will average out to our skill level over time, but there’s a lot more variance in individual outcomes.
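The two-jar model is simple enough to simulate.  Here’s a minimal Python sketch – my own illustration using the hypothetical jar ranges above, not anything from Mauboussin’s book:

```python
import random

def draw_outcome(skill, luck_range, rng):
    """One 'draw': a fixed skill ball plus a random luck ball."""
    return skill + rng.randint(-luck_range, luck_range)

rng = random.Random(42)
skill = 5

# Low-luck activity (piano): luck balls run from -1 to +1.
piano = [draw_outcome(skill, 1, rng) for _ in range(10_000)]

# High-luck activity (card game): luck balls run from -10 to +10.
cards = [draw_outcome(skill, 10, rng) for _ in range(10_000)]

print(min(piano), max(piano))   # individual outcomes stay between 4 and 6
print(min(cards), max(cards))   # individual outcomes spread from -5 to +15
print(sum(piano) / len(piano))  # average converges toward skill (~5)
print(sum(cards) / len(cards))  # so does this one, despite far wider swings
```

Both averages land near 5 over thousands of draws – but any single card-game outcome tells you almost nothing about skill, which is the whole point.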

Luck does indeed even out over time if all events are independent: that is to say, if they are like coin flips, where the chance of your next heads does not depend whatsoever on previous flips.  There’s a concept called “reversion to the mean” which you may have heard of and, if not, will encounter in a number of books discussed on this site.  

It’s critically important to understand that reversion to the mean does not apply to causal relationships: for example, the population of the United States will not “revert to the mean” and go back to what it was decades ago, nor will our income “revert to the mean” over the course of our lifetime if we keep gaining skills and becoming more experienced and in-demand than we used to be.  Nor will stock market valuations or corporate profit margins revert to historical levels if there are structural changes (such as interest rates or capital efficiency) that have caused a change.

As a final point, it’s worth noting that “luck” and “randomness” are basically the same thing: variation around an average result.  As discussed in Meredith Wadman’s The Vaccine Race – TVR review + notes – this sort of variation is even observed on a cellular level:

“Cells, like people, vary in their vigorousness.  Some cells divide more sluggishly, while some are eager, rapid replicators.  So over a given period of time, some cells will replicate fewer times than others […]

which means that the only conclusion that can be drawn when the floors of the two bottles are eventually covered with cells is that the initial population in the mother bottle has doubled in size.”

[SP: as opposed to meaning that every cell has divided once.]

Independent vs. Dependent Events: Path-Dependency, and Why Luck Matters (x Social Proof x Feedback x N-Order Impacts x Complexity x Margin of Safety)

“I tell my father’s story of the gambler.  One day he heard about a race with only one horse in it, so he bet the rent money.  Halfway around the track, the horse jumped over the fence and ran away.” – Howard Marks

(from “The Most Important Thing Illuminated” (MIT review + notes))

“Okay,” I hear some of y’all asking.  “So if luck averages out over the long run, why does it matter?”

Well, it matters because the real world obviously contains elements not discussed in our little two-jar model.

As Michael Mauboussin explains on page 16 of The Success Equation (TSE review + notes), despite luck evening out over the long term,

“the observation doesn’t hold for every individual, and the timing of luck can have a large cumulative effect.”  

Mauboussin is referencing the reality that the world usually consists of dependent rather than independent events: a lot of things depend on previous events.

For example, paying your rent depended on that rent money you just bet.  You have no more money.  You are in trouble.  You do not get an infinite number of equally-sized bets for luck to “even out” and get you your money back.  (Pro tip: you don’t have to, and in fact shouldn’t, try to make it back where you lost it.)

Similarly: if you’re an executive who makes a good decision and yet experiences seriously bad luck that results in the company losing money, you’re often fired, as Richard Thaler explores on pages 188 – 190 of Misbehaving (M review + notes).  You don’t get another chance to have good luck to even out your bad luck – at least not at that company.

As another example, this time from biology: in both birds and humans, as explored respectively by Jennifer Ackerman’s The Genius of Birds (Bird review + notes) and Ian Leslie’s “Curious” (C review + notes), a lack of resources during critical periods of brain development in childhood can cause permanent and irreversible changes (a bottleneck of sorts).

That is to say: you could (if you were heartless) malnourish a baby bird and give it all the food it wants later, but it still won’t be able to sing properly.

That phenomenon also applies to human sleep: research reviewed by Dr. Matthew Walker in Why We Sleep (Sleep review + notes) makes it absolutely clear that we can partially, but never fully, “catch up” on sleep, particularly for applications like  memory and learning – once the opportunity’s gone, it’s gone.

Similarly, various mechanisms combine to perpetuate luck in non-biological systems over time – such as social proof, feedback, and n-order impacts.  Systems interact with themselves.  For example, Megan McArdle’s The Up Side of Down (UpD review + notes) – which we’ll return to again later in this model – overviews how graduating into a recession can permanently reduce your career earnings – and how being laid off (through no fault of your own) can have a similar impact on your career prospects.

Similarly, there’s a fancy term called “preferential attachment,” which basically amounts to “the rich get richer.”  McArdle, as well as Mauboussin, cite some phenomenal examples of how a lot of popular movies, books, and songs are popular because they’re popular.

The pages of Mauboussin’s The Success Equation (TSE review + notes) describing the MusicLab experiment are some of the most important in any book I’ve read: basically, researchers, in a controlled lab experiment, found that participants’ ratings of songs were heavily influenced by social proof, such that it was impossible to predict ex-ante which songs would be popular.  The ones that “got lucky” early often ended up rocketing up the charts.
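The underlying “rich get richer” dynamic is easy to sketch in code.  The toy market below is my own illustration, not a replication of the actual MusicLab design – the song count, crowd-following rate, and listener count are all invented:

```python
import random

def charts(seed, n_songs=20, n_listeners=5_000):
    """Toy 'rich get richer' market: equally good songs, popularity-driven picks."""
    rng = random.Random(seed)
    downloads = [1] * n_songs  # every song starts with one download
    for _ in range(n_listeners):
        if rng.random() < 0.9:
            # Most listeners follow the crowd: pick in proportion to popularity.
            pick = rng.choices(range(n_songs), weights=downloads)[0]
        else:
            # A few pick at random -- this is where early "luck" enters.
            pick = rng.randrange(n_songs)
        downloads[pick] += 1
    return downloads

# Identical songs, different random histories, different #1 hits.
for seed in (1, 2, 3):
    d = charts(seed)
    print("seed", seed, "-> winner: song", d.index(max(d)), "with", max(d), "downloads")
```

Rerun it with different seeds and the chart-topper changes – even though every song is, by construction, identical in “quality.”  That’s path-dependency in miniature.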

This is what’s known as path-dependency: in many cases, all roads don’t lead to Rome.  Some roads, Apple-Maps style, lead to you driving your car off a pier into the Pacific Ocean.

Imagine two equally talented entrepreneurs with equally good ideas.  One starts his business in 1996, with multiple years to take advantage of easy VC funding.  Another starts his business in 1999, right as the internet boom is about to go bust, and there’ll be very limited funding for many years to come… which one do you think has a better chance of surviving?

That’s exactly what happened to poor Sanford Dvorin, in a sense.  It’s a common theme in many businesses – as Phil Rosenzweig explains in his wonderful (and underread) The Halo Effect (Halo review + notes):

By extension, to recognize that good decisions don’t always lead to favorable outcomes, that unfavorable outcomes are not always the result of mistakes, and therefore to resist the natural tendency to make attributions based solely on outcomes.  

And finally, to acknowledge that luck often plays a role in company success. Successful companies aren’t ‘just lucky’ – high performance is not purely random – but good fortune does play a role, and sometimes a pivotal one.

I also discuss The Halo Effect in the  storytelling model, where I touch on counterfactuals.

I’m not going to get too deep into complexity theory here – first, because I mostly don’t know what I’m talking about (yet), and second, because it’s a whole separate topic – but you may have heard the phrase “a butterfly flapping its wings in Brazil can cause a hurricane in Miami.”

This concept is explored in a number of books, ranging from Nate Silver’s The Signal and the Noise (SigN review + notes) to Geoffrey West’s Scale (Scale review + notes) to Laurence Gonzales’s Deep Survival (DpSv review + notes), the last of which we’ll touch on later.

The point, however, is that in complex systems like the weather and the business landscape, small changes in initial conditions – such as those that can be created by luck – can lead to meaningfully different outcomes.

Take this passage from Charlie Munger’s Poor Charlie’s Almanack (PCA review + notes), where he’s explaining how he and Warren Buffett never figured out why some industries (like cereal manufacturers) consolidate into rational oligopolies, while others (like airlines) don’t:

Many markets get down to two or three big competitors – or five or six – and in some of those markets, nobody makes any money to speak of.  But in others, everybody does very well.

Over the years, we’ve tried to figure out why the competition in some markets gets sort of rational from the investor’s point of view so that the shareholders do well, while in other markets there’s destructive competition that destroys shareholder wealth [like in the airline industry].

… maybe the cereal makers, by and large, have learned to be less crazy about fighting for market share – because if you get even one person who’s hell-bent on gaining market share… for example, if I were Kellogg and I decided that I had to have sixty percent of the market, I think I could take most of the profit out of cereals.  I’d ruin Kellogg in the process. But I think I could do it.

In some businesses, the participants behave like a demented Kellogg.  In other businesses, they don’t. Unfortunately, I do not have a perfect model for predicting how that’s going to happen.

Munger cites brands as one potential contributing factor, but I’m not sure that’s all there is to it.

Achor, in "The Happiness Advantage" - THA review + notes - cites research finding that 'lucky' people are more likely to see things because they’re looking out for them. Laurence Gonzales finds similar results among survivors in "Deep Survival" (DpSv review + notes).

I think path-dependency also plays a large role: for whatever reason, some industries end up on a path that leads to more competition and lower margins, while others don’t.

What’s the takeaway?  “Be lucky” isn’t, obviously, helpful, since (by definition) there’s not much we can do to improve our luck other than remaining open to experience (see sidebar to right.)

One lesson is simply to develop a sense of equanimity toward the role that luck plays in our lives.  One great example comes from Brad Stone’s The Upstarts (TUS review + notes). Naval Ravikant, a really smart guy (see this awesome interview here) let Travis Kalanick of Uber use AngelList to email 165 prospective investors in June 2010.  Ravikant himself “begged” and eventually got to put in $25K (now worth… a lot). On the topic of luck, the always-wise Ravikant:

“I’ve made peace with the fact that Silicon Valley is so random.  You have to make peace with it or otherwise you’ll never get a good night’s sleep in this town.”

The Upstarts (TUS review + notes) contains, as you might imagine, other great examples of luck: the almost-Ubers, the not-quite-AirBnBs.

Beyond just taking a deep breath, there is, however, an  inversion and margin of safety angle here: we might not be able to get lucky, but we can certainly try to make sure that bad luck doesn’t kill us.

Munger’s aforementioned “Poor Charlie’s Almanack” (PCA review + notes) contains a reference to Johnny Carson’s famous graduation speech about how to guarantee a life of misery.

Munger touches on some path-dependency elements here: if you become addicted to alcohol, drugs, or anything else, it’s very hard to get away from that path.  The same goes for lesser bad habits – see Charles Duhigg’s “The Power of Habit” (PoH review + notes).

The same applies in businesses with things like debt / leverage (Long-Term Capital Management being a great example.)  If things go too badly, you don’t get a second chance.

Clayton Christensen’s classic The Innovator’s Dilemma (InD review + notes) even comes to some conclusions that touch on the intersection of luck, path-dependency, and margin of safety:

“guessing the right strategy at the outset isn’t nearly as important to success as conserving enough resources… so that new business initiatives get a second or third stab at getting it right.  

Those that run out of resources or credibility before they can iterate toward a viable strategy are the ones that fail.”  

See the  survivorship bias section in the  inversion model for more thoughts on this topic, including why a lot of entrepreneur stories teach the wrong lessons.

Application/impact:  besides developing a sense of equanimity toward the role of luck, we should make sure that we keep our eyes open for opportunities, and avoid putting ourselves in situations where bad luck would kill us.

Luck x Probabilistic Thinking x Feedback x Scientific Thinking x Causality x Base Rates x Sample Size x Process vs. Outcome, And Why Improbable Things Are Probable

Here’s where it starts to get fun.  (Well, even more fun. I was having fun already, but I’m weird like that.)

Did you know that you can run statistical tests that will tell you that if a group of people you think are humans contains an albino, that group of people isn’t really comprised of humans after all?

It’s an observationally absurd but, in fact, completely statistically valid result that Jordan Ellenberg discusses thoughtfully (and wittily) in How Not To Be Wrong (HNW review + notes).  It is extremely improbable that anyone will be born an albino, but that doesn’t mean that albinos don’t exist – Ellenberg thus points out that “reductio ad unlikely” isn’t always a great line of argumentation.

Ellenberg touches on many of the other concepts we’ll cover here, including sample size. (Some of his other applications, such as survivorship bias, are touched on in the “inversion” mental model.)  

One important, mathematically-derived conclusion is that the larger the sample size, the more likely results are driven by skill rather than luck.  For example, small counties are likely to have both the highest and lowest cancer rates; small schools are likely to have both the highest and lowest GPA performance.  It’s easy to see why: think back to the quip about Bill Gates and the homeless camp earlier.  If two kids in a ten-person school happen to be geniuses, the average result will be skewed pretty meaningfully.  We’ll come back to averages later.
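A quick simulation makes the small-sample effect concrete.  Below, every “student” is drawn from the same pool – the school sizes and score distribution are invented purely for illustration:

```python
import random

rng = random.Random(0)

def school_average(size):
    """Average 'GPA' of one school whose students all come from the same pool."""
    return sum(rng.gauss(3.0, 0.5) for _ in range(size)) / size

small = [school_average(10) for _ in range(500)]     # 500 tiny schools
large = [school_average(1_000) for _ in range(500)]  # 500 big schools

# Same underlying students, but the tiny schools occupy both tails.
print("small schools range:", round(min(small), 2), "to", round(max(small), 2))
print("large schools range:", round(min(large), 2), "to", round(max(large), 2))
```

The best and the worst “schools” are reliably the small ones – pure sample-size noise, with no difference whatsoever in teaching quality.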

Many professional athletes have breakout games – or seasons – that are never replicated, yet thanks to the overconfidence of GMs, they often reap a big payday from it.  (GMs are clearly overconfident about draft picks, as Richard Thaler discusses beautifully in “Misbehaving” – M review + notes.)

On the other hand, many great athletes go through occasional slumps that shouldn’t be overinterpreted, either.

The problem is that given a big enough sample size (the world), plenty of improbable things will happen.  On page 182 of How Not To Be Wrong (HNW review + notes), Ellenberg drives this home with an anecdote from Richard Feynman, who comments (tongue-in-cheek) how amazingly improbable it was that he observed a particular, completely random license plate in the parking lot. 

Feynman’s right: of the millions of cars that could’ve been there that day, he happened to see that specific one.  How very improbable.  (More of Feynman’s witty – and insightful – thoughts are available in The Pleasure of Finding Things Out – PFTO review + notes.)

But, obviously, plenty of people (in aggregate) win the lottery, and plenty of people (in aggregate) contract rare diseases.  That doesn’t mean we should play the lottery, or encase ourselves in a spacesuit every time we leave the house.

The extreme example of “improbable things are probable”: one man, Park Ranger Roy Sullivan, was struck by lightning seven times.  That is so improbable that the odds of it happening, per this fascinating story, are:

4.15 in 100,000,000,000,000,000,000,000,000,000,000.

That is a lot of zeroes.  That is 100 nonillion.  What’s a nonillion?  Great question.  It’s the one that comes after octillion.  Which comes after septillion, sextillion, quintillion, quadrillion, and trillion.  It is a number that is so big that it boggles the mind.  It is – according to Wolfram Alpha – the number you get when you take 10 quadrillion, and you square it.  10 quadrillion, by the way, is 10 million billions.

That was an extremely improbable event.  And yet it happened.  Given the number of people in the world, highly improbable things happen to lots of people, every day.
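The arithmetic behind “improbable things are probable” is just multiplication.  Here’s a Littlewood-style back-of-the-envelope, where the one-in-a-million figure is an arbitrary stand-in and independence across people is assumed:

```python
# A "one in a million per person per day" event, applied to the whole world.
p_per_person = 1 / 1_000_000
population = 8_000_000_000  # roughly the world's population

expected_hits = p_per_person * population
print(expected_hits)  # ~8,000 people per day, on average

# Probability that *nobody* on Earth experiences it today: effectively zero.
p_nobody = (1 - p_per_person) ** population
print(p_nobody)
```

An event vanishingly unlikely for you personally is a near-certainty for somebody, somewhere, every single day.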

Perhaps even more horrifying: Sam Kean’s The Violinist’s Thumb – TVT review + notes – tells the story of Tsutomu Yamaguchi, a Japanese man who had the misfortune of being present for both the Hiroshima and Nagasaki atomic bomb detonations during World War II.

It’s difficult to take away a lesson from these outcomes: don’t live in a city?  Don’t ever leave your house?  Obviously those aren’t the right lessons to learn.

How, then, do we translate a given result into appropriate feedback for modifying our decision?  Most smart professionals across a variety of disciplines focus on process vs. outcome.  Given the large sample size of everyone else who does what we do, there’s usually a base rate – even if not a quantitative, precise one – about what leads to success, and what doesn’t.

For example, the aforementioned Megan McArdle makes a brilliant point about this with regards to handwashing in The Up Side of Down (UpD review + notes):

“As I discovered when I myself had to spend ten days administering IV antibiotics at home, the reason that handwashing is so hard to do consistently is that it’s not actually that risky to forgo it.  The odds that any one slip will cause an infection are extremely low, well under 1 percent.

And since it’s tedious and often must be done multiple times while touching a single patient, it’s very tempting to skip it sometimes.  Over thousands of repetitions, this kills people.

But most of us don’t judge our actions over thousands of repetitions.”

But we should judge our actions, in cases like these, over thousands of repetitions.  Otherwise, we fall into the trap mentioned by Laurence Gonzales in Deep Survival (DpSv review + notes):

“The word ‘experienced’ often refers to someone who’s gotten away with doing the wrong thing more frequently than you have.”

That is to say, I may be an “experienced” (novice) cliff climber, because I’ve done it 2-3 times in my life on various hikes.  I survived each time – but it’s likely that, given enough iterations, I’d severely injure myself.  So I’m learning the wrong lesson from a good outcome; I should instead focus on the process.
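McArdle’s point about repetitions is just compounding in reverse.  A sketch, assuming – hypothetically, since she only says “well under 1 percent” – a 0.5% independent infection risk per skipped handwash:

```python
def cumulative_risk(per_event_risk, repetitions):
    """Chance of at least one bad outcome across n independent repetitions."""
    return 1 - (1 - per_event_risk) ** repetitions

# A single slip is nearly harmless; thousands of slips are nearly certain to harm.
for n in (1, 100, 1_000, 10_000):
    print(n, "skipped handwashes ->", round(cumulative_risk(0.005, n), 3))
```

At one repetition the risk rounds to half a percent; by a thousand repetitions it exceeds 99% – exactly the gap between judging a single outcome and judging the process.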

McArdle goes on, in The Up Side of Down (UpD review + notes), to explore how following a process (rather than focusing on outcomes) makes salespeople more successful.  She also does an amazing job of discussing the Hawaiian parole system (which I touch on in the feedback mental model).  It’s one of my favorite sections of any book anywhere, and it’s worth the entire book just for that.  The takeaway?

McArdle notes how hard it is to learn from a sequence that looks like:

“nothing… nothing… nothing… nothing… nothing… bam!  Five year prison term.”

What’s the solution?  It’s the idea of focusing on process vs. outcome.  Doctors know well that the process of washing hands saves lives, even if it doesn’t look that way from any individual outcome.  Similarly, investors and business managers know that sticking to circle of competence and avoiding overconfidence is a process that results in a higher base rate of successful investing, regardless of the lessons taught by any one outcome.

Going back to the aforementioned The Halo Effect by Phil Rosenzweig – Halo review + notes – here’s a great quote on the topic from former Goldman risk arb guy Robert Rubin:

“If even a large and painful loss doesn’t necessarily mean a bad decision, then what does?  To answer that question, we have to get beyond the halo effect. We have to take a close look at the decision process itself, setting aside the eventual outcome.  

Had the right information been gathered, or had some important data been overlooked? Were the assumptions reasonable, or had they been flawed? Were calculations accurate, or had there been errors?  Had the full set of eventualities been identified and their impact estimated? Had Goldman Sachs’ overall risk portfolio been properly considered?

This sort of rigorous analysis, with outcomes separated from inputs, isn’t natural to many people.  It requires an extra mental step, judging actions on their merits rather than simply making ex post facto attributions, be they favorable or unfavorable. It may not be an easy task, but it’s essential.”

I can’t recommend The Halo Effect (Halo review + notes) highly enough.  A fuller understanding requires you to think about the probabilistic thinking / storytelling mental models as well – go check them out, and think about those lessons while reading Rosenzweig’s book.

How can you apply that in a practical way?  See also the  inversion model – specifically the portion about survivorship bias and entrepreneurs maxing out their credit cards.  As I discuss there, the only people who get to write books are the ones who made it big – if you maxed out your credit cards and your business idea failed (which is the base rate), you never show up in the spotlight.

Rosenzweig’s wonderful book changed the way I think about rags-to-riches business stories: you have to look for the “missing planes” (as Ellenberg explains in How Not To Be Wrong – HNW review + notes).  Otherwise, like the WWII army officers Ellenberg mentions, you’ll come to the wrong conclusions about the right strategy.

Application / impact: relying on a larger sample size enables us to reduce some of the “noise” of luck and find the “signal” of skill.  So does focusing on validated processes that lead to good results in aggregate, and not worrying too much about individual outcomes.

Luck x Utility x Scientific Thinking: A Brief Detour on Averages

“There is no such thing as the average person… average a left-hander with a right-hander and what do you get?” – Don Norman

Don Norman, in The Design of Everyday Things (DOET review + notes) – my second-favorite book of all time – echoes points made by Jordan Ellenberg in the aforementioned “How Not To Be Wrong” (HNW review + notes).

When we start to frame “luck” more broadly as “variation around an average,” we realize that there’s no reason that the variation has to conform around one average.  Ellenberg notes, for example, that there’s no such thing as “public opinion” – whether in elections or with regard to specific policies, the averaged preference of everyone actually makes nobody happy.

Similar sad outcomes can be found elsewhere.  Discussing bad predictions, Nate Silver’s “The Signal and the Noise” (SigN review + notes) observes that the commonality behind the failed predictions is that the situation was out-of-sample: the predictions were being made on data that had no validity for the situation at hand.

Silver’s example is that if you’re a good driver but have never driven drunk, it doesn’t matter what your previous driving record was if you get behind the wheel after twelve vodka tonics.  

The relevant data is clustered – and the “average” base rate of your driving skill while sober is completely irrelevant when you’re drunk (or drowsy, as Dr. Matthew Walker notes in chilling detail in Why We Sleep – Sleep review + notes).

To a similar point: I’m not a big fan of Malcolm Gladwell in general, but credit where credit’s due.  I have always loved Gladwell’s story about there not being a perfect “tomato sauce.” He discusses that in this TED Talk; some of the summarized conclusions in case you don’t have time to watch now:

“Howard confronted the notion of the platonic dish… for the longest time in the food industry, there was a sense that there was one way – a perfect way – to make a dish… they had a platonic notion of what tomato sauce was…

we thought that if we gave them the culturally authentic tomato sauce, then they would embrace it, and that’s what would please the maximum number of people… people were looking for cooking universals: one way to treat all of us.

… what is the great revolution of science over the last 10 – 15 years?  It’s the movement from the search for universals to the understanding of variability.  Now, in medical science, we don’t want to know just how cancer works – we want to know how your cancer is different from my cancer.

… if I were to ask you to come up with a type of coffee that would make all of you happy… the average score would be about 60 / 100.  If I broke you into 3-4 clusters, your scores would go from 60 – 75… it’s the difference between coffee that makes you wince and coffee that makes you deliriously happy.”
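Gladwell’s coffee arithmetic can be reproduced with toy data.  The preferences and the satisfaction formula below are my own invention, purely to show the mechanism, not his actual numbers:

```python
# Two 'clusters' of taste: half the room likes weak coffee, half likes it strong.
weak_lovers = [2, 3, 2, 3, 2]     # preferred strength on a 1-10 scale
strong_lovers = [8, 9, 8, 9, 8]
everyone = weak_lovers + strong_lovers

def satisfaction(preferred, served):
    """Toy score: 10 minus how far the brew is from what you wanted."""
    return 10 - abs(preferred - served)

# One-size-fits-all: serve everyone the group-wide average strength.
group_avg = sum(everyone) / len(everyone)
one_size = sum(satisfaction(p, group_avg) for p in everyone) / len(everyone)

# Per-cluster: serve each cluster its own average strength.
weak_avg = sum(weak_lovers) / len(weak_lovers)
strong_avg = sum(strong_lovers) / len(strong_lovers)
per_cluster = (
    sum(satisfaction(p, weak_avg) for p in weak_lovers)
    + sum(satisfaction(p, strong_avg) for p in strong_lovers)
) / len(everyone)

print(one_size, per_cluster)  # the clustered brews please everyone more
```

The group-wide average produces a middling brew that nobody actually asked for; splitting the room into clusters raises everyone’s score – Gladwell’s 60-to-75 jump in miniature.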

And, finally, to segue into our next section, here’s a quote from a PAA favorite author, Shawn Achor.  In his own famous TED Talk, Achor notes, after presenting the chart below:

This chart doesn’t even mean anything. It’s fake data.

The fact that there’s one weird red dot up above the curve… means there’s one weirdo in the room.  You know who you are. I saw you earlier.

… I can just delete that dot because it’s a measurement error.  It’s messing up my data. One of the very first things we teach people is… how, in a statistically valid way, to eliminate the weirdos.

… but if I’m interested in potential… we’re creating the cult of the average with science.  If I ask how fast can a child learn to read in a classroom, scientists change the answer to how fast does the average child learn to read in the classroom.

… what I posit, and what positive psychology posits, is that if we study what is merely average, we will remain merely average.  Instead of deleting those positive outliers, what I intentionally do is come into a population like this one and say why.  Why are so many of you so far above the curve? 

… instead of deleting you, what I want to do is study you.

Application / impact: don’t get trapped in the cult of the average.  Recognize that there can be more than one average, and sometimes the most interesting data point is the one that doesn’t fit with the rest.
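As a concrete sketch of what Achor is describing (data below is hypothetical): a standard z-score screen is one common way to “eliminate the weirdos in a statistically valid way” – and the exact same calculation can just as easily surface the outlier for study rather than deletion.

```python
import statistics

# Hypothetical scores with one positive outlier (Achor's "weird red dot").
scores = [52, 55, 58, 60, 61, 63, 65, 97]

mu = statistics.mean(scores)
sd = statistics.stdev(scores)

# Conventional cleanup: drop anything more than 2 standard deviations out...
kept = [s for s in scores if abs(s - mu) / sd <= 2]
# ...or flip the filter, and study the outliers instead of deleting them.
outliers = [s for s in scores if abs(s - mu) / sd > 2]

print(f"mean {mu:.1f}, sd {sd:.1f}, outliers {outliers}")
```

Whether the line of code keeps or drops that 97 is a one-character choice – the statistics don’t decide for you whether the outlier is noise or the most interesting data point in the room.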

Luck x Utility x Dose-Dependency

“If you expect an indefinite future ruled by randomness, you’ll give up on trying to master it.” – Peter Thiel

Thinking about luck is important.  It’s critically important.

But taking it too far – using it to explain everything, and diminishing rather than elevating the level and importance of our skill – is the wrong way to use it.

While I do somewhat disagree with Peter Thiel on his presentation of luck in Zero to One (Z21 review + notes), I nonetheless agree with his real core takeaway – which is to respond to randomness with a sense of agency.

Want proof?  Gonzales’s aforementioned Deep Survival (DpSv review + notes) explores how humans have managed to survive crazy situations like being stranded at sea on a tattered life raft with no food or water.

Nando Parrado’s Miracle in the Andes relays how some plane crash survivors made climbs in the Andes with makeshift gear – climbs that would have been deemed crazy and impossible even with proper gear.  

Achor’s aforementioned The Happiness Advantage (THA review + notes), as well as Before Happiness (BH review + notes), contain techniques that have been scientifically proven to help even those suffering from advanced, incurable diseases like multiple sclerosis (MS).

Thiel’s argument in a nutshell:

“[Previous generations didn’t] pretend that misfortune didn’t exist, but [believed] in making their own luck by working hard.  

If you believe your life is mainly a matter of chance, why read this book?”

This is exactly right (with modest caveats in the willpower mental model).  And yet the sort of nihilism implied by the latter statement is, unfortunately, often found among so-called “intellectuals” – which is why Thiel’s taking aim at it.

Without naming names, one such lauded intellectual – who, as best I can tell, has nothing much of value to add to any conversation, anywhere – gets a lot of airtime and praise in the financial world for reasons that remain inexplicable.  His espoused worldview, once you cut through hundreds of pages of self-congratulatory aren’t-I-so-smart-and-clever narcissism (an amusing example of overconfidence, given the subject matter and the nature of complexity), essentially boils down to the nihilistic, paradoxically deterministic “the future is random and there’s nothing we can do about it.”  (If that’s not his philosophy, then he’s not a good enough writer to make it clear to me, and plenty of others: “that way lies madness.”)

And that couldn’t be further from the truth.  If you somehow, unfortunately, find yourself in possession of such a book, you should stick it in the nearest fireplace or industrial-strength paper shredder, and instead buy a book by someone who is actually capable of coming to useful conclusions about how to respond to an uncertain world.

Someone like Nate Silver, whose exploration of data analysis in The Signal and the Noise (SigN review + notes) is excellent, or Megan McArdle, the author of the insightful The Up Side of Down (UpD review + notes), one of the more thoughtful and broad-ranging analyses I’ve ever read.

The best response I’ve seen to this dumb strain of nihilistic nonsense masquerading as intellectualism comes from Philip Tetlock’s phenomenal Superforecasting (SF review + notes).  Tetlock, although giving the author far too much credit, still makes the fantastic point:

“History is not just about black swans […] slow, incremental change can be profoundly important […]

a point often overlooked is that […] antifragility is costly […] why not prepare for an alien invasion? […] the answers hinge on probabilities […]

judgments like these are unavoidable […] to be sure, in the big scheme of things, human foresight is puny, but it is nothing to sniff at when you live on that puny human scale.”

Similarly, in reviewing thousands of years of major historical turning points, Pulitzer Prize-winning historian John Lewis Gaddis – in his 300-page masterpiece On Grand Strategy (OGS review + notes) – frequently touches on complexity / emergence and how it’s impossible to anticipate all the things that could go wrong.  That doesn’t mean you should give up: Gaddis explains,

“sensing possibilities… is better than having no sense at all of what to expect.” 

Indeed, those who didn’t try to anticipate the future were the leaders who lost.  For example, in ancient Rome, Octavian “retained a purpose and acted accordingly.”  

This allowed him to best Marc Antony, who, per Gaddis, “when he acted at all, reacted.  It was no longer much of a contest.”

I’ll leave you with one quote that inspired Jeff Bezos, as told by Brad Stone in the wonderful The Everything Store (TES review + notes):

“It’s easier to invent the future than to predict it.” – Alan Kay

Application / impact: don’t accept the odds – use them to help you make appropriate decisions whenever you have the choice, but if you have no other choice?  Then ignore ‘em, and beat ‘em.

Further Reading on Luck vs. Skill, Path-Dependency, Process vs. Outcome

Probabilistic thinking is probably your next best step as far as mental models go; base rates, feedback, and salience are worth a look too.

As far as books go:

– How Not To Be Wrong by Jordan Ellenberg – HNW review + notes – spends a lot of time on the ideas of luck and randomness, including some great exploration of lotteries that you could actually win.

– The Success Equation by Michael Mauboussin – TSE review + notes – provides a great “two-jar” metaphor, and a really interesting  social proof interaction leading to path dependency.

– The Halo Effect by Phil Rosenzweig – Halo review + notes – does a phenomenal job of exploring process vs. outcome,  storytelling, and other models.

– The aforementioned The Up Side of Down by Megan McArdle – UpD review + notes – goes into this too, in various contexts, as does Laurence Gonzales’s Deep Survival (DpSv review + notes).

– The Frackers by Gregory Zuckerman – Frk review + notes – is the best exploration of luck in a practical business context that I’ve found.  Set over multiple decades, it explores how some people hit paydirt (oil) during the fracking boom – and how others came away with little to nothing.

– Last but not least, Zero to One by Peter Thiel – Z21 review + notes – doesn’t totally get it right, but is directionally much more useful and rational  on the topic of randomness and luck than the nonsense spouted by some prominent nihilists masquerading as intellectuals.