Inversion Mental Model (Including Selection Bias, Survivorship Bias)

If this is your first time reading, please check out the overview for Poor Ash’s Almanack, a free, vertically-integrated resource including a latticework of mental models, reviews/notes/analysis on books, guided learning journeys, and more.

Inversion Mental Model: Executive Summary

If you only have three minutes, this summary will get you up to speed on the inversion mental model.

Inversion in one sentence: thinking backward as well as forward can be a powerful tool that allows us to add vantage points and think more rationally, scientifically, and probabilistically.

Key applications / takeaways: in addition to helping us solve problems more effectively, inversion helps us see counterfactuals and n-order impacts in areas like survivorship and selection bias (functions of sample size), thus avoiding overconfidence in our decisions.

Two brief examples of inversion:

If there’s only one other W. W. in town, then my brother-in-law’s a meth kingpin.  Sometimes the absence of evidence can be as powerful an indicator as the presence of evidence.  

How did the U.S. nuclear program – the Manhattan Project – become known to Mother Russia? Richard Rhodes explains in “ The Making of the Atomic Bomb” ( TMAB review + notes):

“Soviet physicists realized in 1940 that the U.S. must also be pursuing a [nuclear fission] program when the names of prominent physicists, chemists, metallurgists[,] and mathematicians disappeared from international journals: secrecy itself gave the secret away.”

As a book about some hardcore science and engineering, TMAB is full of other examples of inversion and scientific thinking.

If X is bigger than Y, then Y is smaller than X.  This sounds like a stupid mathematical tautology, but since we’re humans, not econs, framing things a different way can be powerful.  Geoffrey West discusses “ scaling laws” in “ Scale” ( SCALE review + notes), and most of them work just as well if you flip the X and Y axes.

West highlights, for example, how engineer Isambard Brunel helped pave the way for modern ships in the early 1800s: Brunel realized that a cargo ship’s capacity scales with its volume (the cube of its linear dimensions), whereas the drag force it must overcome scales with its cross-sectional area (the square of those dimensions) – so capacity grows as the 3/2 power of drag.

So the required engine power scales sublinearly (as roughly the 2/3 power) relative to cargo capacity… or, via inversion, bigger ships are more efficient.
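To make the scaling concrete, here’s a minimal sketch in Python (with made-up relative ship sizes – purely illustrative, not a calculation from West’s book): double a ship’s linear dimension and drag quadruples, but capacity octuples, so the power needed per unit of cargo halves.

```python
# Toy illustration of the ship-scaling argument (relative dimensions invented
# for illustration; not figures from West's "Scale").
for L in [1, 2, 4, 8]:             # relative linear dimension of the ship
    drag = L ** 2                  # drag (and required engine power) ~ cross-section ~ L^2
    capacity = L ** 3              # cargo capacity ~ volume ~ L^3
    print(f"L={L}: capacity={capacity:4d}, drag={drag:3d}, "
          f"power per unit of cargo={drag / capacity:.3f}")
```

Power per unit of cargo falls from 1.000 to 0.125 as the relative dimension goes from 1 to 8 – the “bigger ships are more efficient” conclusion, read off a loop.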

If this sounds interesting/applicable in your life, keep reading for unexpected applications and a deeper understanding of how this interacts with other mental models in the latticework.

However, if this doesn’t sound like something you need to learn right now, no worries!  There’s plenty of other content on Poor Ash’s Almanack that might suit your needs. Instead, consider checking out our learning journeys, our discussion of the schema, memory, or Bayesian reasoning mental models, or our reviews of great books like “ Before Happiness” ( BH review + notes), “ Mistakes were Made (but not by me)” ( MwM review + notes), or “ Rust: The Longest War” ( Rust review + notes).

Inversion Mental Model: A Deeper Look

In “ To Engineer is Human” ( TEIH review + notes), civil engineer Henry Petroski discusses how inversion is used subconsciously by many families in their decision-making process.

Petroski explains that if you’re going on a vacation, narrowing down an infinite list of choices might be impossible – but if Sally is afraid of flying and John absolutely hates hiking, maybe a road trip to the nearest big city is in order.  You’ve already eliminated some of the major categories of vacations – those that require flying, and those that are outdoorsy – leaving far fewer options from which to choose.

Charlie Munger similarly cites Johnny Carson’s famous graduation speech as a great example of inversion.  As I discuss in the  luck mental model, it’s hard to get lucky – but it’s pretty easy to make sure bad luck doesn’t kill you.  If you avoid alcohol and hard drugs, wear your seatbelt, and live within your means, you’ve eliminated many of the potential  path-dependency problems that can turn bad luck terminal, pronto.

Laurence Gonzales makes similar points in “ Deep Survival” ( DpSv review + notes) – a phenomenal book about life-and-death situations that explores who lives, who dies, and why.

Gonzales uses the idea of inversion too, noting that a big part of surviving is simply – wait for it – not dying.  That is to say, avoid stupid decisions – both in the moment and ahead of time, by studying  base rates of the mistakes that got other people killed while climbing, boating, or whatever your chosen recreational activity is – and you’ll be ahead of the curve.

This process of thinking backwards can be used in other ways, too.  One of the best ways to illustrate inversion is with a math problem.  No, please, stop screaming – come back here – I’ll let you go home early if you just sit through this one, okay?

Oftentimes, it’s easier to find an answer by subtraction than it is by addition.  For example, what would you do if I asked you to add, without the use of a calculator, 329 and 973?

That’s tough to do in our heads.  Lots of carrying. But what if we rearranged it into a mathematically equivalent version?

329 + 973 = 329 + (1000 – 27)

All of a sudden, a much easier answer pops out: since 973 = 1000 – 27, if we subtract 27 from 329 (easy – 302), then add 1000, we get the answer (1302).

Some of you may, of course, prefer to do the problem another way:

973 + (300 + 30 – 1) = 972 + 300 + 30 = 1272 + 30 = 1302.
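If you’d rather let a computer keep us honest, here’s a trivial Python sketch of the same rearrangement (purely illustrative):

```python
# 973 is 27 short of 1000, so 329 + 973 = (329 - 27) + 1000.
a, b = 329, 973
shortfall = 1000 - b               # 27
rearranged = (a - shortfall) + 1000
assert rearranged == a + b         # both give 1302
print(rearranged)
```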

That’s inversion: oftentimes it’s easier to solve something backward than it is to solve it forward.

Here’s another example, in the context of geometry.  How would you find the area of the shaded-green shape below?

For those of us who didn’t do MathCounts in middle school, it’s probably not intuitive.

What if I gave you a hint and drew a few lines?

Some of us may see a potential solution pop out.

If you don’t, try thinking about it for 20 – 30 seconds.

If you still don’t, that’s okay.  It’s probably because I suck donkey balls at drawing.  And even if you understood the sketch, this problem is one of those things that seems obvious in hindsight but can be challenging if we haven’t learned the trick yet.

Here’s the solution: first of all, we know that the area of a circle is πr-squared.  (I can’t do superscripts in WordPress – sorry!)

So we know that the area of the whole circle – which has a radius of 6 – is 36π, and the area of a quarter of the circle is 9π.

And with those lines drawn, it becomes visually evident that the quarter-circle is composed of a 6×6 right isosceles triangle plus our desired shaded area.  Or, in other words:

[triangle] + [shaded area] = [quarter-circle]

(The brackets are simply mathematical notation for “area of.”)

The area of a triangle, as we may remember, is its base times its height divided by 2 (bh/2), so the area of a triangle with base 6 and height 6 is 36 / 2 = 18.  The shaded green portion is thus the area of the quarter-circle minus the area of the triangle: 9π – 18.
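If you’d rather double-check the geometry numerically, here’s a quick Python sketch (assuming, per the drawing, a circle of radius 6):

```python
import math

r = 6
quarter_circle = math.pi * r ** 2 / 4   # 9*pi: area of the quarter-circle
triangle = r * r / 2                    # 18: area of the 6x6 right triangle
shaded = quarter_circle - triangle      # the green region, found by subtraction
print(shaded)                           # 9*pi - 18, roughly 10.27
```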

Although these sorts of math problems are relatively trivial and unimportant, it’s a bit like the how-many-pairs-of-briefs-are-sold-each-year-in-America question in the disaggregation model.  The answer is stupid and pointless and has no real utility.  But the thought process is important.

Inversion underlies, for example, much of scientific thinking.  While most people engage in storytelling – i.e., non- probabilistic thinking, too quickly seeing causal mechanisms when there might not in fact be any – scientists are cautious and work backwards.

Several phenomenal real-life examples of this come from Rhodes Scholar Meredith Wadman’s “ The Vaccine Race” ( TVR review + notes).  Scientist Leonard Hayflick was trying to develop an alternative to monkey-kidney cells for growing vaccines, because monkeys were finicky and expensive and their kidneys might harbor a virus called “SV40” that could hitch a ride on the vaccines and potentially cause cancer in humans.

How did Hayflick determine what kind of cells would be most appropriate?  He reasoned backward, as Wadman – a doctor herself – explains perfectly:

“Hayflick had reservations about using leftover surgical samples, or even skin samples from volunteers […]

cells from any human being who has been on the planet for any length of time are potentially contaminated with disease-causing viruses […]

if the ostensibly normal cells became cancerous, he wouldn’t know if this was due to a virus from the fluid or to some hidden virus already residing in the cells.

However, there was one obvious source of tissue that […] was far more likely to be clean.”  

That is, fetal tissue.  As Wadman explores, Hayflick used a similar process to overturn conventional thinking about the life of cell cultures.  

Previously, scientists mistakenly believed that all healthy, well-maintained cell cultures would live forever – when, in fact, this only applied to cancerous cell lines; non-cancerous ones had a lifespan, just like their former owners.

Of course, Hayflick didn’t just jump to this conclusion.  Upon observing an unexpected result (that one of his cell cultures was starting to show signs of struggling), Hayflick methodically ruled out potential causes such as dirty glassware, bacterial or viral contamination, etc.

Wadman’s “ The Vaccine Race” ( TVR review + notes) is great science writing and includes numerous other examples of inversion, scientific thinking, and not-so- scientific thinking.

Application / impact: thinking backward can be a powerful tool to enhance our ability to “add vantage points” ( rationality), think scientifically (avoid  overconfidence), and think probabilistically ( counterfactuals).

Inversion x Sample Size: Survivorship Bias

Examples of inversion are scattered all over the site.  Two specific applications I don’t cover elsewhere are the ideas of “selection bias” and “survivorship bias.”  Both have to do with sample size.

We’ll start with the latter.  In the engaging “ How Not To Be Wrong” ( HNW review + notes), mathematician (and novelist) Jordan Ellenberg wastes no time in getting to a useful example.  He describes the problem of armoring a plane during a war, which involves a tradeoff – too much armor and the planes will be heavier and less nimble; too little armor and, well, bad things will happen.  Where do you put the armor?

Mathematician Abraham Wald, using inversion, came to the opposite answer from WWII army officers.  Reviewing data on the density of bullet holes in various areas of the plane, the officers wanted to put more armor where there were the most bullet holes – which makes sense, right?  Put reinforcement where you’re getting hit. That’s like, war 101.

And it’s totally and completely wrong.  Why? Wald recognized that the data covered only the planes that came back.  If you think about the causal mechanism of how planes sustain damage – during dogfights, and from flak cannons on the ground – there was no reason to expect that certain parts of the plane would be hit disproportionately often relative to other parts.

If you start with the presumption that bullet holes should be evenly distributed, then the areas where you find the fewest bullet holes need the most armor… because the planes that took hits there didn’t come back.  The planes that came back despite lots of holes in non-critical areas don’t need more armor.
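A toy simulation makes the point concrete.  (This is my own illustrative sketch with invented lethality numbers – not Wald’s actual analysis or data.)  Hits land uniformly at random, but planes hit in the engine rarely make it home, so the surviving sample shows suspiciously few engine holes:

```python
import random

random.seed(0)

SECTIONS = ["engine", "cockpit", "fuselage", "wings"]
# Hypothetical probability that a single hit in each section downs the plane.
LETHALITY = {"engine": 0.8, "cockpit": 0.6, "fuselage": 0.1, "wings": 0.1}

holes_on_survivors = {s: 0 for s in SECTIONS}
returned = 0

for _ in range(10_000):
    hits = [random.choice(SECTIONS) for _ in range(5)]   # 5 uniformly random hits
    if all(random.random() > LETHALITY[s] for s in hits):
        returned += 1
        for s in hits:
            holes_on_survivors[s] += 1

print(f"planes returned: {returned}")
for s in SECTIONS:
    print(f"{s:8s} holes observed on survivors: {holes_on_survivors[s]}")
# The engine shows the fewest holes among survivors -- not because it's rarely
# hit, but because planes hit there rarely come home.  Armor the engine.
```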

Ellenberg comes back to this “sample too small” problem repeatedly throughout “ How Not To Be Wrong” ( HNW review + notes); in this context it’s survivorship bias, but there are a number of other, broader applications.  Ellenberg provides a memorable example – the “Baltimore Stockbroker” problem – which he riffs on throughout the book.

This shows up all over the place if you start looking for it.  For example, many entrepreneur books – including some quoted on this site, like the later-mentioned “ Uncontainable” ( UCT review + notes) by Kip Tindell – provide the standard entrepreneurship advice of “drop out of school, don’t worry about money, follow your dream,” etc.

Of course, the people giving that advice end up being millionaires.  So it’s easy, as Phil Rosenzweig might say in “ The Halo Effect” ( Halo review + notes), to “connect the winning dots” – if you read enough entrepreneur stories, it seems like many of them left college (or decent jobs) to pursue what seemed like a crazy idea, and they triple-mortgaged their house and begged, borrowed, and stole from all their friends and maxed out credit cards under their names and their dogs’ names… and now they’re famous, book-writing millionaires.

Don’t you wanna be like them?

Admittedly there’s a bit of hyperbole in my description, but it’s a true enough trope that you’ll see it pop up.  The lesson that you’ll take away from a book like Phil Knight’s “Shoe Dog” or Richard Branson’s “Losing My Virginity” is that if you spend your twenties going on spiritual journeys and having a grand old time, everything will work out swimmingly in the end.

And it’s completely and totally wrong.  That is terrible advice.  Nobody should ever do that, statistically speaking, and not just because of the power of  compounding if you get a little money in the bank early.  On top of that, the base rate on the success of entrepreneurs is terrible.  The base rate on the personal finances of people who drop out of college and max out their credit cards to start businesses is also terrible.

But the guy who put all his insurance money into prototyping a “Jump to Conclusions” mat doesn’t get to write a book.  You don’t get to hear his story.  He’s not going to be world-famous.  He’s not going to get a book deal with translation and movie rights.  He’s going to be broke as sh*t, and you’re never going to hear about him.

You hear about Evan Spiegel becoming a billionaire off a completely dumb and pointless app (Snapchat) that has no purpose or differentiation whatsoever.  You don’t hear about the undoubtedly thousands of other people – probably with infinitely more talent and creativity – who tried to start messaging or social media apps and, unlike Spiegel, have only one “zero” in their bank account balance (with no other digits in front of it).  Do you remember Formspring?  No, probably not.  What about Keek?

We could go on all day, but we won’t.  The point is that we usually don’t get to see the planes that don’t come home.  They’re not just non-salient – they might as well not exist, for all we think about them most of the time.

So it’s a treat when we do get to see them.  That’s why one of my favorite parts of Gregory Zuckerman’s “ The Frackers” ( Frk review + notes) is how he avoids this survivorship bias problem by mentioning Sanford Dvorin as a case study of how many people didn’t net anything from the fracking revolution, despite having the right idea.

Aubrey McClendon became a billionaire; Dvorin became nobody, because he ran out of funding – which, as Zuckerman makes clear, could’ve happened to McClendon / Chesapeake at any given point.  When it came to acreage deals, McClendon was like Buffett/Munger with a box of peanut brittle: he never saw one he didn’t like.  He just couldn’t help himself, whether or not he had the money to pay for it.  Sign first, find the capital later.  In fact, Chesapeake co-founder Tom Ward described Chesapeake’s financial situation as:

“near-death on a daily basis”

It’s the exact same story as before: just substitute “maxed out revolving line of credit” for “maxed out credit cards and HELOCs.”  Chesapeake survived near-death for long enough to mint billionaires like McClendon and Ward; Dvorin, unfortunately, didn’t.  But here’s what Dvorin was sitting on before he had to walk away, per Zuckerman:

“five thousand acres in the Barnett at an average of $50/acre.  Less than a decade later, the same acreage would sell for $22,000 an acre, or $110 million.”  

It’s not hard to imagine Dvorin leveraging that sort of equity into a billion dollar company.  Instead, Dvorin’s somewhere, doing whatever non-billionaire former wildcatters do.  Dvorin’s the plane that didn’t come home.  More of our metaphorical planes, of course, look like Sanford Dvorin than Aubrey McClendon. 

This isn’t to say that skill doesn’t exist – certainly, it does.  There’s a lot to learn from entrepreneurs; I cover a lot of entrepreneurship books on this site.

But as I discuss in the  luck vs. skill mental model, the cumulative impacts of luck can be large thanks to  path-dependency, so it’s important to look at the whole sample – and not just the lucky MusicLab winners.  (To understand what I mean by that, go check out the model, or Michael Mauboussin’s “ The Success Equation” – TSE review + notes).

Similarly, Brad Stone’s “ The Upstarts” ( TUS review + notes) does a good job of examining the almost-Ubers and the not-quite-Airbnbs that, in an alternate universe, could’ve been billion-dollar companies.

But in this one, they aren’t.  This, of course, gets into the counterfactuals and probabilistic thinking mental models, but it’s also an example of inversion: you’re looking out into the world to see what bigger sample your success story is part of, and deciding whether success is the base rate… or just luck.

Application / impact: think about survivorship bias whenever you encounter stories of people seemingly making crazy decisions that work out fantastically – or stories of people making all the right decisions but perishing tragically.

Inversion x Sample Size x Incentives x N-Order Impacts: Selection Bias

Let’s say you’re a doctor, and your performance is measured – whether for some monetary incentive, or just for your own ego – by the percentage of patients whom you cure, rather than simply the number whom you treat, or the number of procedures you perform.

A lot of people would advocate for this type of system.  Indeed, it’s generally acknowledged that the incentive structure of per-procedure reimbursement doesn’t necessarily align with patient care; as David Oshinsky puts it in his medical history “ Bellevue” ( BV review + notes) – discussing the origin of Medicare –

“Medicare… provided reimbursement for ‘reasonable costs,’ which, as one student put it, ‘were whatever hospitals and physicians said they were.’”

So isn’t it very clearly better to pay doctors based on whether or not their patients actually get better?

Maybe.  But maybe not.  As Charlie Munger puts it on n-order impacts:

“I’m all for fixing social problems…  And I’m all for doing things where, based on a slight preponderance of the evidence, you guess that it’s likely to do more good than harm…

What I’m against is being very confident and feeling that you know for sure that your particular intervention will do more good than harm given that you’re dealing with highly complex systems wherein everything is interacting with everything else.”

(via Peter Bevelin’s “ Seeking Wisdom” – SW review + notes)

A very real problem with pay-for-performance systems is that they can, in many cases, discourage people from solving hard problems.  So as not to pick on medicine too much, I’ll point to investing – one of the major practices of smart investors is to not solve hard problems when there are easy ones around that pay just as well.  But many of society’s biggest problems are hard problems, and if we pay people only for success and punish them for trying their best but failing, then an unintended consequence – an n-order impact – is that nobody will try to solve hard problems anymore.

This is a very real phenomenon in medicine.  The aforementioned “ Bellevue” ( BV review + notes) discusses this: one Bellevue doctor “fumed” that 40% of deaths at Bellevue were attributable to patients arriving in a dying state from other hospitals; a report from a city health official in 1900 found that other hospitals were:

“sending the poor, dying patient to Bellevue in order to lessen their [own] death rates.”  

Indeed, later in the book when Oshinsky covers the AIDS epidemic, it seems like there was meaningful selection bias as well.  Some doctors openly refused to treat AIDS patients because they were afraid of being infected (reasonably, as the transmission mechanism was not fully understood at the time), or simply because they were prejudiced against gay men.  

But even other doctors who weren’t prejudiced actively selected against treating AIDS patients simply because it was emotionally traumatic, given their rapid rate of decline, generally young age, and (at the time) lack of a cure.  Oshinsky cites Dr. Roger Wetherbee, NYU’s director of infection control, circa 1985:

“I’ll tell you very frankly that I’ve managed, either accidentally or somehow intentionally, to not care for more than one or two [AIDS] patients at any one point in time.”

Another doctor:

“Witnessing your own generation dying off is not for the faint of heart.”

Dr. Jerome Groopman, in “ How Doctors Think” ( HDT review + notes), does a great job of pointing out that doctors are humans, not econs – so even if they’re well-intentioned, they may, for completely non-monetary reasons, select against treating the sickest or hardest-to-diagnose (and thus perhaps most needy) patients.  Atul Gawande briefly references the idea of selection bias as well on page 91 of “ The Checklist Manifesto” ( TCM review + notes), reviewing why incentive programs for lower complication rates might not be such a good idea.

I hear some of you asking: where’s the inversion angle?  Well, again, this is one of the areas where it’s as important to think backwards as it is to think forwards.  It’s easy to start with a totally reasonable premise that no sane person would disagree with – like, “patients should receive better care” – and then take the seemingly logical step of saying “the best measure of good care is whether or not the patient’s condition improves.”  Thinking purely forward, you would then come up with an incentive system that rewards doctors for showing measurable improvement in their patients’ conditions.

But thinking backwards – inversion – you realize that one easy way to improve your stats or performance is to ship your problem children somewhere else, or to refuse to take on situations that have any risk of going south (even if their expected value is meaningfully positive).  Cities have apparently tried to do this with the homeless – I thought it was just a South Park joke, but Oshinsky’s “ Bellevue” ( BV review + notes) suggests that it’s not.
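A back-of-the-envelope sketch shows how the metric and the patients’ interests diverge (all numbers here are invented for illustration, not drawn from any of the books cited):

```python
# A doctor judged on "cure rate" looks worse for accepting hard cases,
# even though accepting them is clearly better for patients.
easy_cases, easy_cure_prob = 90, 0.95
hard_cases, hard_cure_prob = 10, 0.40   # risky, but ~4 extra lives saved on average

def cure_rate(caseload):
    """Expected cure rate across (count, cure probability) pairs."""
    total = sum(n for n, _ in caseload)
    cured = sum(n * p for n, p in caseload)
    return cured / total

cherry_picker = cure_rate([(easy_cases, easy_cure_prob)])
takes_all_comers = cure_rate([(easy_cases, easy_cure_prob), (hard_cases, hard_cure_prob)])

print(f"cherry-picker's cure rate:   {cherry_picker:.1%}")    # 95.0%
print(f"takes-all-comers' cure rate: {takes_all_comers:.1%}") # 89.5%
# The doctor who accepts the hard cases cures ~4 more patients, yet scores
# about 5 points worse on the metric -- the selection bias described above.
```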

Even smart people can get this wrong.  Peter Bevelin’s “ Seeking Wisdom” ( SW review + notes), for example, notes on page 49 that people should be paid for performance, not effort – which is perhaps generally true, but has exceptions.

I like citing Richard Thaler’s “dumb principal” discussion in “ Misbehaving” ( M review + notes); I won’t go into it here, since I cover it in more depth elsewhere, like in the local vs. global optimization mental model, but the summary is that if executives are fired for poor performance, they often won’t take risks – which could be framed as a form of selection bias.

Indeed, Kip Tindell does a great job of addressing this in “ Uncontainable” ( UCT review + notes) – while The Container Store has struggled some recently in a tough retail environment, and with a too-high debt load from an overpriced buyout a decade ago, the company is still doing well relative to its brick-and-mortar peers, selling products that are so hard to market that thieves won’t even steal ‘em.

What’s one of The Container Store’s secrets? Inversion: thinking backwards from the desired result (an environment where employees are willing to take responsibility and risk, within reason) and figuring out how to get there:

“we understand that people make mistakes.  That’s why we create a warm, safe, nurturing workplace that allows employees to take chances without fear of reprisal when they fail.”  

This, with a healthy dose of  empathy, avoids the selection bias problem and encourages people to tackle hard problems that need to be solved, even if the results don’t necessarily look great.

Application / impact: whenever you consider incentivizing people to achieve X metric, think very carefully about other, unwanted ways people could achieve X metric.