Margin of Safety Mental Model (Incl. Redundancy / Resiliency)

If this is your first time reading, please check out the overview for Poor Ash’s Almanack, a free, vertically-integrated resource including a latticework of mental models, reviews/notes/analysis on books, guided learning journeys, and more.

Margin of Safety Mental Model: Executive Summary

If you only have three minutes, this introductory section will get you up to speed on the margin of safety mental model.

The concept, briefly: the world is uncertain.  Sh*t happens.  Not pushing things to the edge – but, rather, leaving what engineers call a "safety factor" to absorb unexpected stresses – can prevent bad luck from being fatal.

Three brief examples of margin of safety:

DNA and brain construction.  As explored in Sam Kean's engaging exploration of genetics, The Violinist's Thumb (TVT review + notes), margin of safety is woven into our genetic code.

While each triplet encodes only one amino acid, in many cases multiple triplets encode the same amino acid: for example, GCT, GCC, GCA, and GCG all encode alanine.  So many mutations are "silent" and don't cause problems.

Many other parts of our body are astonishingly adaptable, as demonstrated by the accomplishments of Paralympic athletes or survivors of damaging diseases. 

As discussed by David Oshinsky in the awesome Polio: An American Story (PaaS review + notes), when polio damages nerve cells, for example, the nerve cells that survive often enlarge themselves, although (as discussed later in the book) this also means they wear out faster and cause more long-term disability than may be initially apparent.  We’ll touch on how margin of safety interacts with tradeoffs.

ppl wh cn stl rd ths sntnc wtht th vwls r xtrmly smrt!  Like our genetic language, all human languages include a margin of safety in the form of redundancy: we actually don’t need all the letters we use to communicate.  As you can see, it might take a little effort to read that vowel-less sentence I just made up, but you can do it, and no information is lost.

But what if you were distracted?  Or if it was noisy?  Or if the sentence consisted of a more ambiguous string of letters?  For example, how would you interpret the following?

Th chrl mngr mdl

Here are two potential (completely valid) reconstructions:

The Charlie Munger Model

The Churl Manager Medal

You’re probably overjoyed about the first and annoyed about the second.  Having redundancy in our language helps us avoid misunderstandings or mistranslations.

Any string of multiplications that includes a "0" comes out to zero.  Life is path-dependent, and bad luck – in the wrong places – can have huge cumulative impacts.  Charlie Munger constantly stresses the importance of not risking what you have, and want/need, for what you don't have, and don't want/need.

Gregory Zuckerman’s The Frackers (Frk review + notes) provides great examples of both sides of the coin: in the notoriously volatile, boom-and-bust oil industry, some entrepreneurs and executives, like Harold Hamm, understood the value of margin of safety and were never at risk of being taken out of the game.  

Others, like Aubrey McClendon, faced “near-death on a daily basis” – and while McClendon’s Chesapeake was lucky enough to survive, many weren’t.

If this sounds interesting/applicable in your life, keep reading for unexpected applications and a deeper understanding of how this interacts with other mental models in the latticework.

However, if this doesn't sound like something you need to learn right now, no worries!  There's plenty of other content on Poor Ash's Almanack that might suit your needs.  Instead, consider checking out our learning journeys, our discussion of the cognition / intuition / habit / stress, inversion, or structural problem solving mental models, or our reviews of great books like "Pour Your Heart Into It" (PYHI review + notes), "The Halo Effect" (Halo review + notes), or "How Not To Be Wrong" (HNW review + notes).

Margin of Safety Mental Model: Deeper Look

In the value investing world (where I live, some of the time), “margin of safety”  is more than a mental model.  It’s, like, a lifestyle.

In fact, if value investors ever got drunk and decided, in a passing moment of bad judgment, to collectively get an official Value Investor Wolf Pack tramp stamp, I’m pretty sure it would say “MARGIN OF SAFETY.”  Right there atop the butt cheeks, on their posterior for all posterity.  (Try not to visualize it – value investors don’t tend to be a terribly athletic bunch, on the whole.)

You're laughing.  Or groaning.  Either way, that's good.  Humor helps you learn and reduces amygdala hijacks.  (See the cognition / intuition / habit / stress model.)  Anyway, despite value investors' clear, tattoo-worthy love for margin of safety, few value investors take the time to read a book like Henry Petroski's To Engineer is Human (TEIH review + notes), which explores the concept of margin of safety in quite some depth in its original context (before we shamefully stole it).

I can't recommend the book highly enough; my discussion here will merely scratch the surface of Petroski's fascinating exploration of an overlooked topic.

In its original form, a quantitative "margin of safety" is in fact called a "safety factor."  In structural engineering, Petroski explains, the safety factor is:

“calculated by dividing the load required to cause failure by the maximum load expected to act on a structure.  

Thus if a rope with a capacity of 6,000 pounds is used in a hoist to lift no more than 1,000 pounds at a time, then the factor of safety is 6,000 / 1,000 = 6.”

Petroski goes on to explain why the factor of safety is important: the rope might be weaker than specified, a heavier load might be lifted (in a jerky manner that would increase the forces on the rope), and so on.  The factor of safety is a catch-all factor that mitigates the risk of both the “known unknowns” (situations that are reasonably likely to come up) and “unknown unknowns” (situations that cannot be predicted ahead of time).
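Here's that arithmetic as a few lines of Python – a minimal sketch (the rope numbers are Petroski's; the function itself is mine, purely for illustration):

```python
def safety_factor(failure_load: float, max_expected_load: float) -> float:
    """Safety factor = load required to cause failure / maximum expected load."""
    if max_expected_load <= 0:
        raise ValueError("maximum expected load must be positive")
    return failure_load / max_expected_load

# Petroski's rope example: fails at 6,000 lbs, lifts no more than 1,000 lbs at a time.
print(safety_factor(6_000, 1_000))  # 6.0
```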

There is no universal factor of safety that is appropriate in all circumstances, as Petroski alluded to on pages 27 – 28.  See Jordan Ellenberg in How Not To Be Wrong (HNW review + notes) on marginal utility, and why we spend too much time sitting in airports.

The factor of safety appropriate for an airplane flying at high speed at high altitude (which must continue functioning even under extremely adverse and unlikely conditions) will necessarily be much higher than the factor of safety appropriate for a sneaker shoelace or a child's plastic toy (where the cost of making it as durable and resilient as an airplane or a bridge would likely make it cost-prohibitive for its intended use).

Margin of safety gets to be a frame of mind.  For example, when I made a decision earlier this year to start eating a lot more vegetables, I was moderately concerned about the potential for vitamin A overdose (especially after reading Kean's aforementioned The Violinist's Thumb – TVT review + notes – which contains a doubly-terrifying section about how polar bears can viciously murder you both before and after you kill them).

The awesomeness of Swiss chard is not dose-dependent.  Swiss chard needs no margin of safety.  You can eat as much of it as you like.  Also, I think the lady was probably eating kabocha squash (which are really quite tasty) rather than American-style pumpkins…

It turns out that I have little to worry about, unless I try to subsist entirely off of pumpkins (as one Japanese woman apparently did).  That said, there are plenty of things that are fine at small doses but at a certain dose can kill you or cause serious damage – the common OTC painkiller Tylenol (acetaminophen / paracetamol), for example, can destroy your liver and cause you to die in agony if you take even a little too much.  That's why, as I discuss in the nonlinearity mental model, I prefer Advil (ibuprofen) – there's a large margin of safety between the normal dose and a potentially dangerous one.

So now I make a habit of trying to put a margin of safety between me and bad outcomes: I decided not to hike in the Rockies this summer after learning that lightning is very prevalent; I’ll go at another time of year.  

And in the process of researching vitamin B6 – which, to my chagrin, was in an extended-release melatonin pill I'd picked up – I found out that despite being water-soluble, it can cause permanent brain damage at high doses (lovely!).  One report by the UK Department of Health's Committee on Toxicity contains multiple levels of margin of safety, and even a wonderful example of Petroski's aforementioned safety factor:

Neuropathological damage, including degeneration of the dorsal root ganglia, axonopathy and demyelination have been observed. The lowest reported adverse effect level in animals is 50 mg/kg bodyweight/day in the dog after approximately 16 weeks administration.

We are aware of a report of a no observed adverse effect level in the dog of 20 mg/kg bodyweight/day for 80 days, but, bearing in mind the age of the study and without further experimental detail, we have not used this figure in our consideration.

With a safety factor of 300 (10 for the use of animal data, 10 for interindividual human variation, 3 for the use of a lowest observed adverse effect level) and assuming that an individual weighs 60 kg, extrapolation from the lowest observed adverse effect level in dogs (50 mg/kg bw/day) would give a maximum daily safe dose for humans of 10 mg.
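Here's the committee's arithmetic spelled out – a sketch only, using the values from the passage above (the linear-with-bodyweight scaling is the report's working assumption):

```python
# Values from the UK Committee on Toxicity passage quoted above.
loael_mg_per_kg = 50          # lowest observed adverse effect level in dogs (mg/kg bw/day)
assumed_body_weight_kg = 60   # the report's assumed human bodyweight

# Composite safety factor: 10 (animal data) x 10 (interindividual human
# variation) x 3 (extrapolating from a LOAEL rather than a NOAEL).
safety_factor = 10 * 10 * 3   # 300

max_daily_dose_mg = loael_mg_per_kg * assumed_body_weight_kg / safety_factor
print(max_daily_dose_mg)      # 10.0 mg/day
```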

I winced a little bit because drug dosages do not scale linearly with weight – see the sad / sort-of hilarious "Tusko" story in Geoffrey West's Scale (Scale review + notes), and his ensuing explanation of surface area and scaling laws.

Nonetheless, you can see the general idea behind a safety factor: it's your "buffer" between yourself and danger.  (The melatonin tablet, by the way, contained exactly 10 mg of B6.  I cut it in half.  And most of the time, I take another one that doesn't contain B6.)

The interaction to notice here is nonlinearity: bad events tend to be multiplicative, not additive, so three bad events at once aren't merely three times as bad as one.

Laurence Gonzales touches on this in Deep Survival (DpSv review + notes) – which contains some great exploration of margin of safety, as well as models like base rates and cognition / intuition / habit / stress.
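To make the "multiply by zero" point from the executive summary concrete, here's a toy sketch – the two return streams are made up purely for illustration:

```python
from functools import reduce
from operator import mul

def compound(multipliers):
    """Terminal wealth per $1 invested: the product of the yearly multipliers."""
    return reduce(mul, multipliers, 1.0)

steady = [1.10, 1.10, 1.10, 1.10, 1.10]     # modest gains, never near the edge
boom_bust = [1.50, 1.60, 1.40, 1.20, 0.00]  # spectacular years... then a wipeout

print(compound(steady))     # ~1.61: survives and compounds
print(compound(boom_bust))  # 0.0: a single zero anywhere zeroes the whole product
```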

The rest of this model consists of a number of very brief (but hopefully thought-provoking) interactions between margin of safety and other mental models in the latticework.

You can read them all if you like, or you can scroll down to the end for one of the less talked-about applications of margin of safety… complete with a South Park reference.

Margin of Safety x Inversion x Utility x Product vs. Packaging: The Wrong Ways to Use Margin of Safety

There are three specific incorrect ways to use margin of safety that I’d like to highlight.

Flaw 1: Assuming Margin Of Safety Exists When It Doesn’t

The first wrongheaded approach is to assume a margin of safety where there isn’t one.  For example, Henry Petroski notes on pages 83 – 84 of the aforementioned To Engineer is Human (TEIH review + notes) that if something cracks that isn’t supposed to crack, something went wrong:

“small cracks in reinforced concrete do not necessarily pose any danger of structural collapse, for the steel will resist any further opening of the cracks.

But cracks do signify a failure [… and] incontrovertibly disprove the implicit hypothesis that stresses high enough to cause cracks to develop would not exist anywhere in the structure.”  

In other materials, however, the failure signified by those cracks can be critical. 

Consider, for example, the famous O-ring failures on the Challenger, which Richard Feynman demonstrated with his ice-water experiment.  On pages 155 – 157 of The Pleasure of Finding Things Out (PFTO review + notes), Feynman discusses how NASA misapplied margin of safety to parts that weren't supposed to crack at all, but had cracked a third of the way to failure.

NASA argued that because the rings only eroded one-third of the way through, there was a safety factor of three.  Feynman noted, contrarily, that there was "no safety factor at all," because the O-rings "were not designed to erode" under normal operating conditions – so any erosion was "a clue that something was wrong."  He goes on to note that empirical curve fitting that ignored outliers was part of the problem.

A crude analogy I came up with to cement the point in my mind: if you’re chopping vegetables and you cut a third of the way through your finger, you don’t have a safety factor of 2x because you didn’t chop your finger off.  Cuts to your finger indicate an unsafe vegetable-chopping process.

So that’s one failing.  The second is to ignore utility.

Flaw 2: Some Margins Matter More Than Others

In Richard Rhodes’ The Making of the Atomic Bomb (TMAB review + notes), there’s a part where the scientists aren’t yet quite sure that a nuclear chain reaction could actually happen.   Probabilistic thinking, at that point, suggested that the chance was probably low.

However, as Philip Tetlock puts it in Superforecasting (SF review + notes), there are many situations where it’s better to err to one side than the other:

“When it comes to things like terrorist attacks, people are far more concerned about misses than false alarms.”

Leo Szilard, per Rhodes, agreed.  Explaining how he saw things differently from Enrico Fermi, Szilard notes that:

“Fermi thought that the conservative thing was to play down the possibility that [a chain reaction] may happen, and I thought the conservative thing was to assume that it would happen and take all the necessary precautions.”

Clearly Szilard had it right here.  As physicist I.I. Rabi put it, per Rhodes,

“Ten per cent is not a remote possibility if it means that we may die of it.  If I have pneumonia and the doctor tells me that there is a remote possibility that I might die, and it’s ten percent, I get excited about it.”

Flaw 3: Don’t Mess With Scientific Data (Use Margin of Safety *Afterward*)

However, there are other circumstances in which you shouldn't apply a margin of safety.  Nate Silver argues in The Signal and the Noise (SigN review + notes) that forecasts should be made as accurate as possible, never caving to outside pressure.  Silver's book is wonderful – and although I disagree with him on that point most of the time, there are circumstances where I wholly agree.

To do good science, as Darwin put it in The Autobiography of Charles Darwin (ABCD review + notes), you need to remember that:

"it is a fatal fault to reason whilst observing, though so necessary beforehand and so useful afterwards."

In other words, the time to add margin of safety is at the end – not to the scientific data itself. 

As I discuss in the notes to Jordan Ellenberg’s “How Not To Be Wrong” (HNW review + notes), John Ioannidis – author of the provocative “Why Most Published Research Findings Are False” paper – observes that the more latitude or “wiggle room” that study designers have, the more likely the conclusions are to be wrong.

Here's one real world example.  As Dr. Paul Offit explains in the hugely important Deadly Choices (VAX review + notes), one study that (erroneously) found that the pertussis vaccine caused neurological conditions had been shaped by outside pressure.  The researcher, Miller,

“didn’t want to appear to have whitewashed the issue,”

which led to Miller’s instructions including:

“if there is doubt, code the worst picture.”

As a result, many of the cases that appeared to be related to the pertussis vaccine were, in fact, not – as later studies found out.  But the damage had been done, and this helped start the anti-vaccine conspiracy theory that is currently threatening public health and causing outbreaks of once-controlled diseases – as I discuss in the salience mental model.

Application / impact: margin of safety is a tremendously important concept, but it’s important to apply it in the right way – that is to say, not assuming there is a margin of safety where there isn’t one, and also not incorporating it in a way that could taint the results of scientific study – use it after the data, as the researchers reviewing vitamin B6 did.

Margin of Safety x Salience x Feedback x Structural Problem Solving

I loved this example of “leak-before-break” from Petroski’s To Engineer is Human (TEIH review + notes).

“if a certain type of ductile steel is used for the pipe wall, any crack that develops will grow faster through the wall of the pipe than in any other direction.  This ensures that a crack will cause a relatively small but detectable leak well before a dangerously long crack can develop.”

Indeed, Megan McArdle’s The Up Side of Down (UpD review + notes) discusses how in our modern society, thanks to margins of safety like wide roads, airbags, anti-lock brakes, and so on, we often don’t realize how dangerous our actions are.  McArdle notes that antibiotics cover up a lot of errors because they’re so powerful.

In that sort of situation, making mistakes more salient – often through technology, or through structural problem solving solutions like checklists that raise the activation energy of making a mistake – can help.

Application / Impact: find ways to structurally make potential errors more salient.

Margin of Safety x Critical Thresholds x Nonlinearity x Bottlenecks

On page 101 of To Engineer is Human (TEIH review + notes) – you have bought it already, right? – Petroski notes the important concept of a bottleneck:

“it will be the smallest factor that is spoken of as the factor of safety of the structure”

A structure is only as strong as its weakest critical part; it doesn’t matter if 99.999% of your pipeline – or tire – is structurally sound if that 0.001% has a hole in it.
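In code terms, a structure's factor of safety is a min(), not an average.  A minimal sketch, with hypothetical component numbers:

```python
# Hypothetical safety factors for the critical components of one structure.
component_safety_factors = {
    "deck": 4.0,
    "cables": 6.0,
    "anchor bolts": 1.1,  # the bottleneck
}

# Per Petroski, the smallest factor is THE factor of safety of the structure.
structure_safety_factor = min(component_safety_factors.values())
print(structure_safety_factor)  # 1.1 -- the average (~3.7) would be dangerously misleading
```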

Petroski drives this home with a great discussion of the De Havilland Comet airplane that disintegrated in midair thanks to metal fatigue – the vast majority of it was structurally sound, but that was little consolation to the passengers (and their families) when some rivet holes turned out to be weak enough to destroy the whole plane.

Later, Petroski also discusses "nucleation sites," which reminds me of Rust: The Longest War by Jonathan Waldman (Rust review + notes) – quite possibly the best-written nonfiction book I've ever encountered.

Waldman's a master writer who provides an educational journey through the world of rust and the engineers who fight it.  One of the issues he discusses is that corrosion doesn't distribute evenly, which is a problem –

“after a thousand years, 99.999% of the pipe would still be there, sans weak spots.  

But rust doesn't work like that.  It concentrates in relatively few places, begetting more rust.  [… the company] looks at spots where 35% or more of the pipe's wall thickness is gone, and where metal loss leaves the pipe at risk of bursting…"

It reminds me of the quip about a six-foot man drowning in a stream that was six inches on average.  I also, coincidentally, noticed this exact pattern on a shower caddy that’s been lying unused in my shower since college.  (See picture below – notice how much of it is completely fine, but parts of it are extremely rusted.)

This is one example of nonlinearity and bottlenecks.  Another example, from Petroski, is the idea of critical thresholds.

Petroski notes that there is a threshold of stress below which:

"failure is never observed no matter how many cycles of loading are applied."

In practical everyday terms, an example might be a reasonably fit person walking at a slow pace on a soft surface such as lush grass while carrying no weight: if that is the level of stress you're placing on your muscles and joints, it is extremely unlikely that you will sustain any injuries no matter how long you walk.  This would be below the critical threshold, so no "margin of safety" is needed.

On the other hand, if you're above the critical threshold, doing some moderately intense activity enough times (say, running every day for months without a break) might eventually lead to failure.  And if you're way above the critical threshold – for example, doing some intense weightlifting, or running really fast – merely a few cycles may cause damage.
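Here's a stylized sketch of the threshold idea – the specific numbers and the power-law falloff are illustrative assumptions of mine, not Petroski's data:

```python
def cycles_before_failure(stress: float, threshold: float = 1.0) -> float:
    """Stylized fatigue curve: below the threshold, loading cycles never cause
    failure; above it, the tolerable number of cycles falls off steeply."""
    if stress <= threshold:
        return float("inf")                 # below the critical threshold
    return 1e6 / (stress / threshold) ** 4  # illustrative power-law falloff

print(cycles_before_failure(0.8))  # inf -- strolling on lush grass, forever
print(cycles_before_failure(1.5))  # ~198,000 cycles -- daily running adds up
print(cycles_before_failure(3.0))  # ~12,300 cycles -- max-effort lifts fail fast
```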

Petroski notes that in most cases, it’s not reasonable (from a cost/practicality perspective) to:

"overdesign structures so that peak stresses never exceed the threshold level."

That’ll lead us right into our next model.

Application / impact: be aware that danger can be nonlinear, and a structure or system is only as strong as its weakest link.

Margin of Safety x Opportunity Costs x Dose-Dependency x Trait Adaptivity

"All bridges and buildings could be built ten times as strong, but at a tremendous increase in cost… since so few bridges and buildings collapse now, surely ten times stronger would be structural overkill." – Henry Petroski

Henry Petroski’s To Engineer is Human (TEIH review + notes) busts the stereotyped conception of an engineer who doesn’t see the rest of the world – Petroski clearly understands how things are.

He notes similar tradeoffs when it comes to aesthetics.  Reinforced-concrete bunkers with dimly lit hallways and giant, thick walls would be the most structurally sound, but we all like our glass and soaring ceilings and open areas, right?

Richard Thaler did some work on the value of a human life (currently at $7 million, by the way).  If that strikes you as unethical, consider this: without having some number to put on the value of a human life, how could we ever justify not putting an ambulance on every corner?

So there’s a dose-dependency angle here: some margin of safety is good.  Too much is too costly and has a high opportunity cost.

Laurence Gonzales, in Deep Survival (DpSv review + notes), quotes someone as saying, roughly: if you never take any risks, you'll never do anything interesting.

Similarly, Benjamin Franklin, in The Autobiography of Benjamin Franklin (ABF review + notes), observes what happens when you're too safe: it turns out preppers have a long American history.

“there are croakers in every country, always boding its ruin.  Such a one then lived in Philadelphia; a person of note, an elderly man, with a wise look.  This gentleman, a stranger to me, stopped one day at my door, and asked me if I was the young man who had lately opened a new printing house.  

Being answered in the affirmative, he said he was sorry for me, because it was an expensive undertaking, and the expense would be lost; for Philadelphia was a sinking place, the people already half bankrupt, or near being so; all appearances to the contrary, such as new buildings and the rise of rents, being to his certain knowledge fallacious; for they were, in fact, among the things that would soon ruin us.  

He gave me such a detail of misfortunes that he left me half melancholy. Had I known him before he engaged in this business, probably I never should have done it. This man continued to live in this decaying place, and to declaim in all the same strain, refusing for many years to buy a house there, because it was all going to destruction; and at last I had the pleasure of seeing him give five times as much for one as he might have bought it for when he first began his croaking.”

You laugh – but people succumb to this trait all the time.  John Hussman would seem to be an example in the investing world.  Worrying too much can make you go off the rails.

Even Charlie Munger – a clear believer in margin of safety – would acknowledge this:

"Most people are too fretful, they worry too much.  Success means being very patient, but aggressive when it's time." – Charlie Munger

Application / impact: everything in moderation, including margin of safety.  Live a little.

Margin of Safety x N-Order Impacts (x Margin of Safety) = Redundancy / Resiliency

Petroski again, from To Engineer is Human (TEIH review + notes):

"designers often try to build into their structures what are known as 'alternate load paths' to accommodate the rerouted […] stress and strain when any one load path becomes unavailable for whatever reason.

When alternate load paths cannot take the extra traffic or do not even exist, catastrophic failures can occur.”

The impact here is pretty clear.
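One quick way to see why: if load paths fail independently – a big if, since a common stress can take out every path at once – the probability of total failure shrinks multiplicatively with each alternate path.  Illustrative numbers only:

```python
# Probability that ALL load paths fail, assuming independent failures.
# (Independence is the fragile assumption here: correlated stresses break it.)
def total_failure_probability(per_path_failure_prob: float, n_paths: int) -> float:
    return per_path_failure_prob ** n_paths

for n_paths in (1, 2, 3):
    print(n_paths, total_failure_probability(0.01, n_paths))
# 1 0.01    -- no redundancy
# 2 ~0.0001 -- one alternate load path
# 3 ~1e-06  -- two alternate load paths
```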

Margin of Safety x Luck vs. Skill x Path-Dependency

Remember when I was talking about Chesapeake?  On a read-through of Clayton Christensen’s classic “The Innovator’s Dilemma” (InD review + notes), I encountered this gem:

“guessing the right strategy at the outset isn’t nearly as important to success as conserving enough resources… so that new business initiatives get a second or third stab at getting it right.  

Those that run out of resources or credibility before they can iterate toward a viable strategy are the ones that fail.”  

It makes total sense.  If you read any entrepreneur story out there, it’s rarely an overnight/instant success – there’s usually a pivot along the way.

Brad Stone's The Upstarts (TUS review + notes) provides an interesting example of this when exploring Uber and Airbnb also-rans.  So does the aforementioned The Frackers (Frk review + notes) by Zuckerman – some of the original pioneers (like Mitchell Energy) were messing around with fracking for decades.

Margin of Safety x Humans vs. Econs x N-Order Impacts

One final one before we get to psychology.

In Henry Petroski's To Engineer is Human (TEIH review + notes) – yes, I keep quoting it, and by this point you should've bought not one but two copies – give one to a friend – anyway, ignoring my parenthetical run-on sentence, Petroski displays a savvy understanding of humans vs. econs:

“books of case studies and lists of causes of failures do not easily incorporate this synergistic [attempt to deal explicitly with the human] element, yet the motives and weaknesses of individuals must ultimately be taken into account in any realistic attempt to protect society from the possibilities of major structural collapses.”

This is Don Norman-esque.  Recall Norman, in The Design of Everyday Things (DOET review + notes) – my second favorite book in the world – giving this credo:

"We have to accept human behavior the way it is, not the way we wish it to be... we must design our machines on the assumption that people will make errors." – Don Norman

How users interact with a system is important.  Norman notes examples of margin of safety like doors that don't jam when people pile up against them trying to escape a building.  Citing various situations in which errors were caused by humans being human, Norman also notes that important systems should be made for humans – not "econs" with unlimited memory, attention, and willpower.

To Petroski's point about the human element, Norman also notes n-order impacts when it comes to security: by trying to make a system more secure with well-intentioned constraints (e.g., by increasing the password-strength requirement), you can inadvertently make it less secure (because users will just resort to workarounds like writing their passwords down on a post-it).

He cites a hilarious example of engineers at Google propping a secure door open with a brick so they wouldn’t have to scan their badges.  If you make things too hard for people, they’ll find workarounds…

Application / impact: consider the human element and how the human will interact with the system to ensure margin of safety.

Margin of Safety x Loss Aversion x Path-Dependency = Defensive Pessimism / "Underpromise / Overdeliver"

CARTMAN: [Expletives.] It isn’t fair!  You just built me up to chop me down, didn’t you?!  What about my dream?!?!

STAN: Look, Kyle, Cartman is totally miserable. Even more miserable than he was before because he’s had his dream and lost it.

CARTMAN:  It’s not fair!  It’s not fair! I wanna die!  I wanna diiiiiiiiiiiiie!

It’s amazing how many more valid, insightful mental models you can find in South Park than in some highfalutin’, bestselling books by people lauded as “intellectuals” or “thought leaders,” even among value investors.

This is a good example.  In this early episode – "Cartmanland" – certified spoiled brat Eric Cartman inherits a million dollars and uses it to fulfill his lifelong dream: having his very own theme park, all to himself.

A series of unfortunate events results in Cartman losing said theme park and experiencing a twinge of deprival superreaction syndrome (more formally known as loss aversion).

An understanding of human psychology suggests that the story, while comedic and exaggerated, has a moral (as South Park usually does): that it’s worse to get what you want, then have it suddenly taken away, compared to never getting it at all.  I would be remiss if I didn’t point out the margin of safety angle here.

I'm not going to go deep into loss aversion here (as that's a separate model you can and should read), but for now, the summary is that research demonstrates we (humans) generally feel losses twice as acutely as equivalent gains.  Or, mathematically, if gaining $100 brings us X units of utility/joy, then losing $100 takes away 2X units of utility.

Throw in another psychological phenomenon – contrast bias, of which a subset is "hedonic adaptation," or the tendency of our happiness levels to gradually adjust to our circumstances over time – and the stage is set for an interesting interaction.  A frog may not notice if the water around it gets a little bit warmer, but it'll sure as heck notice being thrown into a boiling pot.*

*Incidentally, the Twain-ism about frogs and boiling pots of water is a complete myth.  Please do not actually test this.  Askeladden Capital does not endorse animal cruelty.  Frogs are cute and do not deserve to be boiled alive.  They deserve to be killed humanely first, and then fried and served with a squeeze of lemon.  The non-poisonous varieties they eat in France, anyway 😉

It doesn’t take a math genius to recognize that the net emotional impact of repeatedly gaining and losing things is no bueno (even if most of us handle the vagaries of life with more of a sense of equanimity than Eric Cartman).
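Plugging the 2X multiplier into a quick sketch shows the Cartman problem in miniature (the 2X figure is the research's rough rule of thumb; the "utility units" are arbitrary):

```python
LOSS_MULTIPLIER = 2.0  # losses are felt ~2x as acutely as equivalent gains

def net_utility(dollar_changes):
    """Gains count at face value; losses count double, per loss aversion."""
    return sum(x if x >= 0 else LOSS_MULTIPLIER * x for x in dollar_changes)

print(net_utility([+100, -100]))  # -100.0: a round trip back to zero still hurts
print(net_utility([]))            # 0: never getting it at all leaves you better off
```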

From a structural problem solving perspective, though, there's no reason to just stand idly by and "accept our lot in life."  That's not mental-models-approved.  It turns out there is something we can do about it, utilizing margin of safety.

I don't have a review because I've forgotten too much of the book, but I did enjoy this one.

If you’ve read the overconfidence and probabilistic thinking mental models, you understand why we don’t agree with the common consulting slogan “often wrong, never in doubt.”  On the contrary, we do very much agree with another bit of business jargon that most consultants worth their salt probably have tattooed somewhere unmentionable:

Underpromise and overdeliver!

While I'll leave the nuances of this in a business context to other, more capable people, let's delve into the personal, psychological aspects of it, because it turns out there's a strongly backed neuroscientific basis.

For some context, I’m generally a very cheerful person.  I’m not a fatalist. I have a strong belief in my own agency.  I’m very satisfied with how my life is going and don’t regret very many of the decisions I’ve made.  I’m looking forward to the future.

I’m also what’s called a “defensive pessimist” and I rarely expect factors outside of my control to go my way.  In fact, on many occasions, I expect them to go the opposite way.

Why do expectations matter?  As Shawn Achor discusses on pages 69 – 70 of The Happiness Advantage (THA review + notes):

“expectations create brain patterns that can be just as real as those created by events in the real world.”

Other authors citing the neuroscience literature discuss similar phenomena: in The Up Side of Down (UpD review + notes), Megan McArdle notes that dopamine levels rise in anticipation of something happening… and plummet if it doesn't.

Charles Duhigg makes a similar point in The Power of Habit (PoH review + notes), referencing a monkey, Julio, who was trained to associate a certain shape on a screen with a reward.

Eventually, Julio’s monkey brain flashed with pleasure “before the juice arrived.”

What happened when the experiment changed?

“When the juice didn’t arrive or was late or diluted, Julio would get angry and make unhappy noises, or become mopey […] 

when Julio anticipated juice but didn’t receive it […] desire and frustration erupted inside his skull […] that joy became a craving that, if unsatisfied, drove Julio to anger or depression.”

Thus, the easy solution is to set up your expectations so that life tends to meet/exceed them rather than being a string of disappointments.  Here's the logic in tabular form:

Low expectations + good outcome = pleasant surprise (joy).
Low expectations + bad outcome = roughly what you braced for (mild sting).
High expectations + good outcome = merely what you counted on (no joy boost).
High expectations + bad outcome = full 2X-weighted disappointment.

This isn’t a unique idea I’ve come up with, of course.  As Richard Thaler references in either Misbehaving (M review + notes) or Nudge (Ndge review + notes), one of his colleagues mentally budgets a reasonable, conservative amount each year that will either go to charity or be used for unexpected mishaps (traffic tickets, insurance deductibles, the like).  

You can see the logic of defensive pessimism: most of the times you have a small mishap, you’ve already budgeted for it, and most of the time at the end of the year, you’ll have money left over that you can give to charity.  So it’s smiles all around, rather than frowns.

The challenges could relate to work finances or goals rather than personal ones, of course.  McArdle discusses the challenges faced by salesmen who have to "smile and dial," and entrepreneur stories pound this point home.  Howard Schultz discusses in Pour Your Heart Into It (PYHI review + notes) – the Starbucks origin story – how he was turned down by over 200 investors (out of about 250) when seeking funding to expand Starbucks beyond Seattle.

Similarly, as I touch on in the nonlinearity model, Brad Stone's The Upstarts (TUS review + notes) highlights how much trouble Airbnb had getting funding (and users), to the point where cofounder Brian Chesky is quoted as saying:

"When you're starting a company it never goes at the pace you want or the pace you expect.

You imagine everything to be linear… you start, you build it, and you think everyone’s going to care.  

But no one cares, not even your friends.”

Clearly, the base rate for prospecting success when you’re launching a new business is dismal… a base rate I completely ignored (to my emotional detriment) early on in Askeladden’s life.  I’d be over the moon every time I got an email from a prospective client, then the opposite every time it turned into a “no,” or worse, a trail-off no-response.

What I realized over time (and from talking to mentors) is that the conversion rate is so low that it’s best to assume any new lead won’t actually result in capital.  Since I adjusted my thinking to reflect that reality – with a margin of safety baked in, to the point where my working assumption is that no lead will result in capital – any leads that have turned into clients have made me happy, while any leads that haven’t didn’t lead to disappointment.

If you’re ever in Dallas, go to Velvet Taco. (And also Ida Claire.)  Some of the best tacos and gourmet Southern-inspired food in town, respectively.  As for enchiladas: honestly, you can’t go wrong.  Any place that doesn’t have a drive-thru will do ya just fine.

The stakes don’t always have to be personal-theme-park level, or even thousands of dollars.  This is a technique you can use in your everyday life. A little anecdote: when I was a teenager, my mom had this annoying habit of changing her mind about what we’d do for dinner on a near-hourly basis.  (Decisions aren’t her strong point.)

Occasionally, she’d throw out an idea I really liked – say, going to our favorite Tex-Mex place for enchiladas – then she’d deep-six the idea in a matter of minutes in favor of, like, anything that wasn’t enchiladas, leaving me really disappointed.

My dad would always just laugh.  “Food is food,” he said, and while – as a dedicated foodie – I couldn’t disagree more, I eventually learned from him to just not have any expectations for dinner until it was on the table.

So, whether it's our own expectations or those of someone we're interacting with, the kindest, most empathetic thing we can do from an emotional standpoint is to attempt to underpromise and overdeliver.

It’s worth noting that there are contrasting perspectives on this topic that deserve a listen: psychologist Shawn Achor, for example, vehemently argues in Before Happiness (BH review + notes) that defensive pessimism is absolutely the wrong way to go about things.

I generally have a huge amount of respect for Shawn and have benefited immensely from his work and his wonderful books, including all the parts of Before Happiness that don't talk about defensive pessimism.  But I've tried both ways, and defensive pessimism – i.e. margin of safety x loss aversion – has by far been the one that's led to a more positive emotional outcome.

Application/impact: margin of safety extends beyond physical and financial situations, and is equally helpful as a lens through which to make personal / emotional decisions.

Further Reading on Margin of Safety

The two best books on margin of safety you can read are:

– Henry Petroski’s “To Engineer is Human” (TEIH review + notes), and,

– Don Norman’s “The Design of Everyday Things” (DOET review + notes).

As discussed, Petroski's book thoroughly and rigorously covers the "safety factor" from an engineering perspective; meanwhile, Norman goes a lot deeper into the "Google brick" problem – i.e., designing systems and products that take into account humans vs. econs and n-order impacts.

Also consider checking out the structural problem solving model – the idea of averting challenges before they happen has a much broader application than just margin of safety.