Scientific Thinking / Overconfidence / Intellectual Humility Mental Model (Incl Fundamental Attribution Error, Planning Fallacy, Desire Bias)

If this is your first time reading, please check out the overview for Poor Ash’s Almanack, a free, vertically-integrated resource including a latticework of mental models, reviews/notes/analysis on books, guided learning journeys, and more.

Scientific Thinking / Overconfidence / Intellectual Humility Mental Model: Executive Summary

If you only have three minutes, this introductory section will get you up to speed on the overconfidence / scientific thinking mental model.

The concept in one quote:

Science is a form of arrogance control. - Carol Tavris + Elliot Aronson

From the thought-provoking "Mistakes were Made (but not by me)" (MwM review + notes).

The concept in one sentence: humans have a natural tendency to be overconfident; most successful people temper this with a process of scientific thinking: enforced intellectual humility.

Key takeaways/applications: overconfidence can be difficult to give up, but having a more realistic view of our capabilities enables us to approach the world more adaptively.

Three brief examples of overconfidence / scientific thinking / intellectual humility:

All the children in Lake Wobegon were smarter, more mature, and more adorable than average.  Studies routinely demonstrate that the majority of people believe they're above average on any given trait; one found, for example, that merely 2% of students believed they had below-average leadership skills.  What a logical conundrum!  Napkin math will demonstrate that, by definition, no more than half of us can be above the median (and, for roughly symmetric traits, the average) – or, as Shawn Achor quips in "The Happiness Advantage" (THA review + notes):

99% of Harvard students do not graduate in the top 1%. - Shawn Achor

This is sometimes referred to as “Lake Wobegon syndrome,” in reference to a fictional town where “all the children were above average.”
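To make the napkin math concrete, here's a minimal, hypothetical simulation (my own sketch, not drawn from any of the books cited here): give everyone a true percentile, add a random dose of optimism to each person's self-estimate, and count how many believe they're above the median.

```python
import random

random.seed(42)

N = 10_000
true_percentiles = [random.random() for _ in range(N)]  # each person's actual standing, 0..1

# Hypothetical optimism bump: everyone nudges their self-estimate upward by a random amount.
OPTIMISM_BUMP = 0.25
self_estimates = [min(1.0, p + OPTIMISM_BUMP * random.random()) for p in true_percentiles]

actually_above_median = sum(p > 0.5 for p in true_percentiles) / N
believe_above_median = sum(e > 0.5 for e in self_estimates) / N

print(f"Actually above the median: {actually_above_median:.0%}")  # ~50% by construction
print(f"Believe they're above it:  {believe_above_median:.0%}")   # comfortably more than 50%
```

However large you make the optimism bump, the first number stays pinned near 50% – only our beliefs about ourselves move.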

You can be right… or you can make money.  In fields such as investing and business management, if our views conflict with reality, reality’s gonna win – to our detriment.  Most successful entrepreneurs have been described as phenomenally open to learning from experience, staying humble and utilizing a scientific thinking approach to determine what’s really true and what’s not.  

For example, in Brad Stone's elucidating "The Everything Store" (TES review + notes), longtime Jeff Bezos right-hand man Rick Dalzell described one of Bezos's key strengths:

“He embraces the truth. A lot of people talk about the truth, but they don’t engage their decision-making around the best truth at the time.”

Stone does a good job of demonstrating that Bezos, too, was guilty of overconfidence on occasion, including ill-fated forays into areas like jewelry – but to Bezos’s credit, he learned rather than sticking to his guns, avoiding commitment bias and redeploying resources to more productive areas.

Buy one gallbladder surgery, get one free?  As discussed in the mental models / rationality model, knowing our “ circle of competence” prevents us from acting like a “ man with a hammer” to whom all the world looks like a nail.

Even highly educated and highly trained professionals – such as doctors – can sometimes have a tendency to overemphasize the importance of their specialty, seeing the world through the schema of only their own work.

Cass Sunstein and Richard Thaler wryly note in "Nudge" (Ndge review + notes) that "watchful waiting" is underprescribed – perhaps because few doctors specialize in watchful waiting (incentives), and most are overconfident in their own area of expertise.  (S/T also talk about the dramatic overconfidence of newlyweds, which I discuss in the hyperbolic discounting mental model.)

Books like Jerome Groopman's "How Doctors Think" (HDT review + notes) and David Oshinsky's "Bellevue" (BV review + notes) provide more fascinating examples of this – I reference one from the latter in the culture / status quo bias mental model.

If this sounds interesting/applicable in your life, keep reading for unexpected applications and a deeper understanding of how this interacts with other mental models in the latticework.

However, if this doesn't sound like something you need to learn right now, no worries!  There's plenty of other content on Poor Ash's Almanack that might suit your needs.  Instead, consider checking out our learning journeys, our discussions of the inversion, Bayesian reasoning, or structural problem solving mental models, or our reviews of great books like "Deadly Choices" (VAX review + notes), "The Signal and the Noise" (SigN review + notes), or "Why We Sleep" (Sleep review + notes).

Overconfidence / Scientific Thinking Mental Model: Deeper Look

The weakness of the relationship between accuracy and confidence is one of the best-documented phenomena in the 100-year history of eyewitness memory research. - Psychologist John Kihlstrom

That quote, from the wonderful "Mistakes were Made (but not by me)" (MwM review + notes) by psychologists Carol Tavris and Elliot Aronson, illustrates overconfidence pretty well.  We'll return to this book in more depth in a moment.

I think the idea of overconfidence is pretty straightforward, so I’m not going to belabor it.  The rest of the model will take three parts:

First, how and why does overconfidence perpetuate itself?

Second, how do scientists – and other smart, successful individuals – avoid the trap of overconfidence?

Third and finally, what are some of the unique and important phenomena arising from overconfidence interacting with our selective perception?

The sections are mostly independent and you’re welcome to read all or only the ones you’re interested in.

Overconfidence Incentives x Social Proof (x Dose Dependency x Trait Adaptivity) = More Overconfidence

(If you’d like to skip this section and get to scientific thinking, you’re welcome to do that – just scroll down or ctrl+F for “scientific thinking” in your browser.)

I always find it helpful to understand why a mental model exists.

The way memory works is associative, so when we have “hooks” to hang new learnings on, we’re more likely to remember them.  When it’s just a bunch of random, uncorrelated facts – the type most of us had to memorize in school, then promptly forgot – we’re not likely to recall them.

In the case of overconfidence, there are a few reasons that it exists and persists.  The first is that overconfidence is, in fact, dose-dependent (a form of nonlinearity).  That is to say, too much is bad, but a little is actually better than none.  Why?  As Megan McArdle quips in "The Up Side of Down" (UpD review + notes),

"There's a scientific name for people with an especially accurate perception of how talented, attractive, and popular they are – we call them 'clinically depressed.'"

Similarly, progress requires a little bit of hubris: moving the world forward requires believing you’ve figured out a better way to do things than everyone else in human history.  Indeed, Henry Petroski disputes the classical “belt-and-suspenders” conception of engineers in “ To Engineer is Human ( TEIH review + notes), observing that engineers are constantly pushing the boundaries – trying to build structures that are taller, lighter, and faster.

My favorite line on this is Peter Thiel’s perspective from “ Zero to One ( Z21 review + notes) – which we’ll return to later in the model:

If you expect an indefinite future ruled by randomness, you’ll give up on trying to master it. - Peter Thiel Click To Tweet

Of course, a little overconfidence can help even if you’re not trying to change the world.  I discuss this in depth in the learned helplessness ( agency) model, so I won’t go in depth here, but suffice to say that feeling in control is a meaningful predictor of success in situations as tame as life and business or as life-and-death as being stranded at sea with no food or water.  McArdle puts it this way in the aforementioned “ The Up Side of Down ( UpD review + notes):

“Successful people have what psychologists call ‘self-efficacy’ or an ‘internal locus of control’: they feel that outcomes mostly depend on what they do.  

People who believe that they can control their fate are more likely to have happy futures even if they’re wrong about the extent of their control.”

Similar concepts are explored in the work of Shawn Achor (“ The Happiness Advantage” – THA review + notes) and many other books as well.  It’s probably good for most of us (in most situations) to believe we have a little more ability to handle things than we actually do.

For example, Laurence Gonzales explores in the fascinating "Deep Survival" (DpSv review + notes) how there's a self-fulfilling prophecy when it comes to survivors: having the audacity to believe you'll survive a hopeless situation is obviously completely overconfident, but without that belief, you won't have the will to fight it out.

Of course, too much overconfidence is bad – Gonzales notes that tough-guy “Rambo types” are often the first to perish.

In more normal everyday situations, the truth is that overconfidence serves us well in many parts of our lives, even beyond its psychological benefits to ourselves.  Other people like overconfidence.  In three words, confidence is sexy. This pops up in many different contexts.

For example, Philip Tetlock’s “ Superforecasting ( SF review + notes) is an awesome book about how ordinary people like you and me can make more accurate predictions than experts – on their topics of expertise – by following a specific set of mental processes.  Naturally, overconfidence (and its avoidance) come into play here.

Tetlock spends some time analyzing the topic, and notes that one of the reasons experts display overconfidence is that they are literally paid to – and we know how powerful a force incentives are.  A President famously grouched that he didn’t want a “many-handed economist” who constantly said “on the one hand… but on the other hand.”  If you wanted to be an economist for that President, you’d need to stick to one hand.

Many-handedness, while the most intellectually honest and humble way of thinking, doesn't play well on TV, or in the boardroom.  We want confidence.  On pages 138 – 139, Tetlock explains:

“people equate confidence and competence[…]

one study noted, ‘people took such [many-handed] judgments as indications that the forecasters were either generally incompetent, ignorant of the facts in a given case, or lazy, unwilling to expend the effort required to gather information that would justify greater confidence.’”

Similar results show up in other fields.  A good portion of Tavris / Aronson’s “ Mistakes were Made (but not by me)  ( MwM review + notes) focuses on overconfidence by law enforcement and prosecutors, which can lead to innocent people being convicted of crimes they didn’t commit.

Unfortunately, there too, people's behavior incentivizes this sort of overconfidence: T/A note that probabilistic testimony is often looked down upon:

"many judges, jurors, and police officers prefer certainties to science."

You can find similar examples in many other fields.  To drive the point home, I’ll cite one more example, this time from medicine.  Dr. Jerome Groopman’s “ How Doctors Think ( HDT review + notes) – which we’ll return to in the next section – evaluates diagnosis-distorting factors like overconfidence, and explores how doctors can overcome them.

Part of the problem, as Groopman explains, is that patients are looking for certainty – the relief and surety of a diagnosis.  Nothing is more frustrating than having an ailment that’s undiagnosable (as many of us likely understand from personal experience – either our own or that of a friend / colleague / loved one).  

Meanwhile, doctors' egos also take a blow when they can't diagnose or treat a patient; there's a long history of selection bias here, for example – David Oshinsky's medical history "Bellevue" (BV review + notes) reviews how other hospitals in New York City shunted the hopeless cases off onto Bellevue doctors, who couldn't turn them down.

In more contemporary medical practice, this still shows up: Dr. Harrison Alter, an emergency-room physician cited by Groopman, notes that some doctors:

“like the image that we can handle whatever comes our way without having to think too hard about it – it’s kind of a cowboy thing.”  

At the end of the day, though, it’s important to temper that natural overconfidence – and the overconfidence that it might be adaptive for us to display in certain situations, like job interviews or dates – with internal intellectual humility.  

Even certifiable optimist Shawn Achor, the positive psychologist who wrote the earlier-quoted "The Happiness Advantage" (THA review + notes), discusses in his follow-up "Before Happiness" that if you don't wear your seatbelt, you're not an optimist – you're an idiot.  Achor notes:

“Irrational optimists see the world through rose-colored glasses without realizing that those tinted lens[es] don’t enhance their vision, they distort it.  

And as a result, their decisions and actions are Pollyannaish and flawed. You can’t sugarcoat the present and still make good decisions for the future.”

Application / Impact: in many situations – i.e., any in which we're interacting with other people – sounding more certain than we are often creates a more favorable impression.  Being mindful of this tricky tightrope – and, equally importantly, not punishing others for saying "I don't know" or expressing uncertainty – can help reduce the spread of overconfidence.

Overconfidence Inversion x Probabilistic Thinking = Scientific Thinking / Intellectual Humility

“scientists are trained to be cautious.  They know that no matter how tempting it is to anoint a pet hypothesis as The Truth, alternative explanations must get a hearing.  

And they must seriously consider the possibility that their initial hunch is wrong […] such scientific caution runs against the grain of human nature […]

our natural inclination is to grab on to the first plausible explanation and happily gather supportive evidence without checking its reliability.”

That quote, from the insightful Philip Tetlock in the aforementioned "Superforecasting" (SF review + notes), highlights the intersection of overconfidence and the human tendency toward storytelling.  I'll stay away from the latter because it counts as its own mental model, but let's focus on the first part of Tetlock's quote: can we learn from the scientific process?

The answer is yes: both from how it works and how it doesn’t.  In truth, scientists often fail to think this way: as Thomas Kuhn notes in “ The Structure of Scientific Revolutions ( Kuhn review + notes):

“Phenomena that will not fit in the box are often not seen at all… [scientists] are often intolerant of [new theories] invented by others.”

This shows up in plenty of real-world examples: my favorite book, Richard Thaler's "Misbehaving" (M review + notes), provides a great peek into how classical economists stubbornly refused to face the reality that the rational-actor model was profoundly irrational – we're all humans, not econs.  On value investing – whose persistent success is a clear violation of the efficient-market hypothesis – Thaler observes:

“It was not so much that anyone had refuted Graham’s claim that value investing worked; it was more that the efficient market theory of the 1970s said that value investing couldn’t work.  But it did.”

Similarly, Meredith Wadman’s “ The Vaccine Race ( TVR review + notes) overviews how many scientists first decided that human WI-38 cells for vaccine production weren’t worth evaluating, then reinforced that belief with increasingly circuitous logic.

Nonetheless, while examples like those are worth seeking out and considering, scientists in general do tend to take a much more cautious approach.  The Vaccine Race, for example, also highlights great scientific thinking on the part of Leonard Hayflick, who developed the WI-38 cell lines.

Upon observing an unexpected result (that one of his cell cultures was starting to show signs of struggling), Hayflick methodically ruled out potential causes such as dirty glassware, bacterial or viral contamination, etc, before coming to the conclusion that the cells were just aging – contradicting previous scientific belief that non-cancerous cell lines lived forever in the absence of incompetence by the experimenter. 

Note how it's a stepwise process of inversion: rather than seeing one data point (my cells are dying) and overconfidently jumping to a conclusion (cells are mortal), Hayflick ruled out other reasonable possibilities one by one until the most parsimonious remaining explanation was that the cells were, indeed, mortal.
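As a loose illustration – my own toy sketch, not anything from Wadman's book or Hayflick's actual lab protocol – this style of elimination looks less like "pick the first plausible story" and more like filtering a list of candidate explanations through checks, and only accepting what survives:

```python
def eliminate(hypotheses, observations):
    """Keep only the hypotheses whose predictions match every observation we actually made.

    hypotheses:   dict mapping a hypothesis name to the observations it predicts
    observations: dict of what was actually observed (unrun checks count as consistent)
    """
    survivors = []
    for name, predictions in hypotheses.items():
        if all(observations.get(obs, expected) == expected for obs, expected in predictions.items()):
            survivors.append(name)
    return survivors


# Hypothetical stand-ins for the kinds of checks described above.
hypotheses = {
    "dirty glassware": {"fresh_glassware_fixes_it": True},
    "contamination": {"cultures_test_clean": False},
    "cells simply age": {"fresh_glassware_fixes_it": False, "cultures_test_clean": True},
}
observations = {"fresh_glassware_fixes_it": False, "cultures_test_clean": True}

print(eliminate(hypotheses, observations))  # -> ['cells simply age']
```

The point isn't the code; it's the shape of the reasoning – the conclusion is whatever is left after the alternatives have been given a fair chance to survive.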

This is just one of many great examples of scientific thinking from her book; others can be found in David Oshinsky’s “ Polio: An American Story ( PaaS review + notes), with regard to how Jonas Salk and other scientists designed the vaccine that conquered polio.

This sort of scientific thinking is inherently and inextricably linked to the probabilistic thinking mental model – i.e., scientists tend to evaluate the world with degrees of belief rather than absolute right-or-wrong certainty.

For now, however, take a look at this excerpt from Jennifer Ackerman’s “ The Genius of Birds ( Bird review + notes).  Ackerman’s a science writer rather than a scientist, but her exploration of avian intelligence is marked by fantastic and cautious scientific thinking.

It can be tempting to draw wildly overreaching conclusions from experiments, but Ackerman tends not to do that.  Read the book and you’ll see what I mean.  For example, interpreting some experiments on pages 31 – 32 about “bird IQ,” she notes:

“It’s tricky, however.  In these kinds of lab tests, all sorts of variables may affect a bird’s failure or success.  The boldness or fear of an individual bird may affect its problem-solving performance.

Birds that are faster at solving tasks may not be smarter; they may just be less hesitant to engage in a new task.  So a test designed to measure cognitive ability may really be measuring fearlessness.

“Unfortunately it is extremely difficult to get a ‘pure’ measure of cognitive performance that is not affected by myriad other factors,” says Neeltje Boogert [..] a bird cognition researcher at the University of St. Andrews.”

Similar to Hayflick thinking about all the possible causes of cells dying and working through them one by one, Ackerman examines all the possible reasons birds could perform well (or poorly) on a test designed to measure their intelligence, and works through them.

This understanding of multicausality and alternative interpretations is the hallmark of scientific thinking.  Richard Feynman’s ‘ The Pleasure of Finding Things Out ( PFTO review + notes) is another great example thereof, again combining scientific thinking with probabilistic thinking:

“I can live with doubt and uncertainty and not knowing.  

I think it’s much more interesting to live not knowing than to have answers which might be wrong.

I have approximate answers and possible beliefs and different degrees of certainty about different things, but I’m not absolutely sure of anything and there are many things I don’t know anything about, such as whether it means anything to ask why we’re here, and what the question might mean.  

[…] I don’t feel frightened by not knowing things […] it doesn’t frighten me.”  

This is one of those models that's easy to understand, but harder to implement.  Half the battle is simply being aware of it and adjusting for it.  For example, Nate Silver – who thoughtfully explores the topic of confidence as it relates to data analysis – mentions in the impactful "The Signal and the Noise" (SigN review + notes) that:

“Elite chess players tend to be good at metacognition – thinking about the way they think – and correcting themselves if they don’t seem to be striking the right balance.”

I've found this approach to be helpful as well.  Again, without getting too deep into the probabilistic thinking mental model, the idea of counterfactuals is very useful here.  Achor discusses "adding vantage points" – learning to see the same information from different perspectives – which serves much the same purpose.

Groopman makes similar points in “ How Doctors Think ( HDT review + notes).  One of the doctors he cites recommends:

“even when I think I have the answer, to generate a short list of alternatives.”  

To return to the aforementioned "Deep Survival" (DpSv review + notes), one important thing to remember is that overconfidence interacts with selective perception – if we're looking for something specific, our brains automatically (and efficiently) filter out what they deem to be "irrelevant" information.  This is why, for example, if we think a book is blue, our eyes might pass over the title a hundred times before we finally remember the book is red – at which point it pops into sight instantly.

As Gonzales puts it in Deep Survival, one risk of overconfidence is that we may not even notice critical information:

“the implicit assumption is that you know what you’re doing and know what sort of perceptual input you want […]

such a closed attitude can prevent new perceptions from being incorporated into the model.”  

As with scientists seeking the truth, Gonzales explores how people in survival situations need to remain open to new inputs from the world.

I find that a helpful exercise in my daily life is, whenever something happens – “oh, this can on our back porch was knocked over” – to counter my natural storytelling, cause-seeking narrator – “it must’ve been that possum again, looking for food” – with a few additional explanations – “… but it was also really windy last night during the thunderstorm, and maybe I accidentally kicked it over when I was moving that table.”

A powerful cognitive tool is, instead of quickly jumping to conclusions, simply admitting: I don't know. Then asking: what would I need to do to find out?

I go into more detail in the feedback mental model on decision journaling, a powerful tool that increases the salience of our mistakes, helping to correct overconfidence and more accurately calibrate our judgments.
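If you want to make a decision journal quantitative, one common approach – the Brier scoring Tetlock discusses in the forecasting world, though this particular snippet is just my own illustrative sketch with made-up entries – is to record each prediction as a probability, note whether it came true, and score yourself.  Lower is better; always guessing 50/50 scores 0.25.

```python
# Hypothetical journal entries: (stated probability that X would happen, whether X actually happened)
journal = [
    (0.90, True),   # "90% sure the project ships this quarter" – it did
    (0.80, False),  # "80% sure the new hire works out" – it didn't
    (0.60, True),
    (0.95, False),  # the overconfident miss hurts the most
]

def brier_score(entries):
    """Mean squared error between stated probabilities and outcomes (0 = perfect, 0.25 = coin-flip guessing)."""
    return sum((p - (1.0 if happened else 0.0)) ** 2 for p, happened in entries) / len(entries)

print(f"Brier score: {brier_score(journal):.3f}")
```

Reviewing a few months of entries like this makes overconfidence salient in a way that memory alone never will.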

For now, we’ll leave this section with one more pithy quote from “ Superforecasting ( SF review + notes):

For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded. - Philip Tetlock

Application / Impact: approaching the world the opposite of how we usually do – i.e., trying to evaluate many alternatives and eliminate the ones that don't make sense, rather than just jumping to what seems like the most obvious conclusion – is how smart scientists and business managers temper overconfidence.

Overconfidence x Schema / Selective Perception = Fundamental Attribution Error, Planning Fallacy, Desire Bias, Ideology

One of the most insidious aspects of cognitive biases is that our naturally overconfident selves can often frame them as "other people" problems.

To go back to the "all drivers think they're above average" trope, I don't think I've ever met anyone in Dallas who didn't complain about "those damn drivers on I-35."  Naturally, all of us who live in Dallas – in total – pretty much comprise "those damn drivers on I-35," so to some extent we're complaining about ourselves.

But never mind that we occasionally cut people off or don't let people merge: it's those other drivers who are the problem!

This sort of thinking is so prevalent that psychologists have a formal name for it: the fundamental attribution error, which basically boils down to the following matrix of possibilities:

If we do something good, we're skillful and thoughtful; if other people do something good, they just got lucky.

If we do something bad, we're still a good person – there was just an extenuating circumstance that, unfortunately, forced our hand; if other people do something bad, they're horrible people and there's no excuse for what they did.

This pops up in a lot of psychology books but is most thoroughly addressed by the aforementioned “ Mistakes were Made (but not by me) ( MwM review + notes) by psychologists Carol Tavris and Elliot Aronson.

I can’t recommend the book highly enough: it provides a thorough construction of the psychological mechanisms that allow us to see others’ mistakes clearly, while completely overlooking our own.

Part of this is biological/structural, due to the way our memory works.  Tavris/Aronson note that the area between conscious lying and unconscious self-justification is:

“patrolled by that unreliable, self-serving historian – memory.”  

The other half of it is due to schema: the idea that we interpret reality through the lens of our own worldview and experiences.  Practically, this means that:

"pain felt is always more intense than pain inflicted,"

which explains why, as Tavris/Aronson put it:

“the remarkable thing about self-justification is that it allows us to shift from [victim to perpetrator] without applying what we have learned from one role to the other.”  

See the schema mental model for more details, as well as, of course, “ Mistakes were Made (but not by me).”

Another example along a similar bent is billionaire serial entrepreneur Peter Thiel's observation on advertising in the thought-provoking "Zero to One" (Z21 review + notes):

“In Silicon Valley, nerds are skeptical of advertising, marketing, and sales because they seem superficial and irrational.  

But advertising matters because it works. It works on nerds, and it works on you.  You may think that you’re an exception; that your preferences are authentic, and advertising only works on other people.

[…] but advertising doesn’t exist to make you buy a product right away; it exists to embed subtle impressions that will drive sales later.  

Anyone who can’t acknowledge its likely effect on himself is doubly deceived.” 

All of the above should drive home the point that overconfidence, like any other cognitive bias, is certainly not an “other people problem.”  Instead, it’s an all of us problem.

As Stephen Covey puts it in the landmark “ The 7 Habits of Highly Effective People ( 7H review + notes),

If you start to think the problem is out there, stop yourself. That thought is the problem. - Stephen Covey

Of course, the concept of fundamental attribution error pops up elsewhere, too.  Christopher Browning’s “ Ordinary Men ( OrdM review + notes) is a fascinating, if chilling, example.  

Reviewing the story of a small reserve battalion of middle-aged, working-class Germans tasked with carrying out genocide – men who, as Browning puts it, were pretty much the least likely sample for "Nazi murderers" – Browning points out fundamental attribution error on a couple of different levels.

First, when interviewed decades later, many of the men themselves expressed fundamental attribution error, putting a lot of the blame on Poles who helped out, or really anyone but themselves, because they simply couldn’t conceive of how they could have committed such atrocious acts.

Moreover, Browning cites psychology professor Ervin Staub, who posited:

“Evil that arises out of ordinary thinking and is committed by ordinary people is the norm, not the exception.”

Indeed, having thoroughly reviewed how very “ordinary” the Ordinary Men of Reserve Police Battalion 101 were, Browning asks:

“If the men of Reserve Police Battalion 101 could become killers under such circumstances, what group of men cannot?”

This is a profoundly uncomfortable line of thinking for many people because it violates the idea of in-group / out-group behavior: we like thinking that we’re the good people “over here” completely separate from all the bad people “over there.”  

That may be true to a degree, but certainly not to the degree we frequently assume: many experts on crime and psychology will tell you that locks on doors are to keep the good guys out… not the bad guys.

On a lighter note, our limited schema  can intersect with overconfidence and the world in another way: the planning fallacy, or the idea that things take three times as long and cost three times as much as you expect.  

You’ll find this in pretty much any book that discusses anyone trying to accomplish anything – as well as, ahem, perhaps in your own experience as a high schooler or undergraduate.  

It’s so prevalent that PAA house favorite Don Norman goes so far as to make it into “Norman’s Law” in my second-favorite book, “ The Design of Everyday Things ( DOET review + notes):

The day a product development process starts, it is behind schedule and above budget. - Don Norman

I won’t go deep here because there’s plenty of well-known research; google “megaprojects,” for example, and you’ll find multiple articles deconstructing why and how they go over time and over budget.

What’s the antidote?  Again, decision journaling (see feedback), as well as base rates – figuring out how long similar projects have actually taken and how much they actually cost, either within your organization or in others’ organizations.  (See also the  Bayesian reasoning mental model, where I use the planning fallacy as an example of why  priors and conditional probabilities are important.)
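As a concrete – and entirely made-up – illustration of the base-rate approach: instead of trusting your gut estimate, look at how badly similar past projects overran, and scale your estimate by the historical median overrun ratio (the "outside view," in Kahneman's terms).

```python
import statistics

# Hypothetical history of similar projects: (estimated weeks, actual weeks)
past_projects = [(4, 9), (6, 15), (3, 8), (10, 31), (5, 11)]

overrun_ratios = [actual / estimated for estimated, actual in past_projects]
median_overrun = statistics.median(overrun_ratios)

my_gut_estimate_weeks = 6
base_rate_estimate = my_gut_estimate_weeks * median_overrun

print(f"Median historical overrun:  {median_overrun:.1f}x")
print(f"Inside view (gut estimate): {my_gut_estimate_weeks} weeks")
print(f"Outside view (base rate):   {base_rate_estimate:.0f} weeks")
```

The numbers are invented, but the discipline is the point: the base rate anchors you to what actually happened to people like you, rather than to what your overconfident planning self hopes will happen.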

A final intersection of overconfidence with schema is what I call “desire bias,” and what’s commonly known as “wishful thinking” – if we want something to happen, we tend to overestimate its likelihood of happening.  

Again, education and intelligence don’t exempt us from this: Groopman overviews a few examples in “ How Doctors Think ( HDT review + notes).

One involves a patient whom Groopman liked and thus didn't want to put through discomfort; Groopman therefore reasoned "enough for today" (because that was the answer he wanted to be true) and failed to lift him up for a full-body inspection that would have prevented the development of a dangerous abscess.

A second example comes later in the book: in the late ‘90s, Groopman:

"Became enamored with a family of new medications because of my own clinical condition […] that made it impossible for me to pursue my favorite sport, distance running […] I never relinquished a sense of loss.  Then a colleague told me about novel anti-inflammatory medications then under development: COX-2 inhibitors, which eventually were marketed as Celebrex and Vioxx.

My enthusiasm [about COX-2 inhibitors]… [was driven by] my desire to believe that there would be a way to temper, if not reverse, the degenerative changes in my spine… [and] help prevent Alzheimer’s, [which caused Groopman’s grandfather, a sweet and gentle man, to die] unable to recognize any of [his family].”

Please note that I’ve heavily condensed that quote from several pages of Groopman’s discussion (and additional context before and after) for readability in this format, as I’m always mindful of “fair use” doctrine and try to keep excerpts as brief as possible so as to not diminish the market value of any of the books I cite.

Nonetheless, I don’t think the condensation changes the takeaway from Groopman’s discussion, and hopefully you’ll agree on your own full reading of “ How Doctors Think ( HDT review + notes).

Of course, Groopman notes that COX-2 inhibitors didn’t turn out to be a panacea: Vioxx was (famously) pulled from the market over safety concerns, while Celebrex is less effective and less safe than once hoped.

In fairness, much of this evidence came out after Groopman had these hopes, and Groopman is man enough to point out his own flaws rather than just commenting on those of others (which he does, as well, in the same chapter, in the same context).  But still, he seems to violate the principles of scientific thinking here, lured into overconfidence about the promise of the new drugs by the allure of desire bias.

It's an error any of us could make.  Groopman, as any reader of "How Doctors Think" would agree, seems like an incredibly thoughtful doctor – one any of us would be lucky to have if we ever needed his services.  His careful, analytical deconstruction (disaggregation) of his own mistakes and those of others, as well as his own experience seeking advice for a hand condition, drives home just how much he thinks about arriving at correct diagnoses.

If someone of that caliber of thought can fall prey to the desire bias trap, surely so can we.

What are the antidotes?  As mentioned earlier, one is decision journaling, discussed in the feedback mental model, which at least helps us become more aware of our decision failings (hopefully leading us to do something about it).

A second, expressed by Tavris / Aronson, is self-explanatory: avoid echo chambers.  I reference in the schema mental model how dangerous it is to cocoon yourself in strong ideology.

It’s easy and comfortable to surround ourselves with people who agree with us, but that’s not always what we need.  As Dumbledore says in Harry Potter and the Sorcerer’s Stone,

It takes a great deal of bravery to stand up to our enemies, but just as much to stand up to our friends. - Albus Dumbledore

Asking our friends to point out to us when we’re going down a rabbit hole – and being willing to do the same for them – can help.  As Tavris/Aronson put it,

We need a few trusted naysayers in our lives, critics willing to puncture our protective bubble of self-justifications and yank us back to reality if we veer too far off. - Carol Tavris + Elliot Aronson

Application / impact: being aware of the limitations of our own memory – and the way our schema can distort, or selectively omit, key aspects of reality – can help us fight overconfidence and make more accurate judgments, for the benefit of ourselves and the people we care about.