Schema Mental Model (Incl Confirmation Bias, Selective Perception, Ideology, Framing)

If this is your first time reading, please check out the overview for Poor Ash’s Almanack, a free, vertically-integrated resource including a latticework of mental models, reviews/notes/analysis on books, guided learning journeys, and more.

Schema Mental Model: Executive Summary

If you only have five minutes, this introductory section will get you up to speed on the schema mental model.

The concept in one sentence: our perceptions of the world are selective/incomplete, mediated by what we’re focused on and what we already believe. 

One quote: the below quote is one of Jeff Bezos’s bedrock philosophies, as explored in Brad Stone’s “The Everything Store” (TES review + notes):

“Point of view is worth 80 IQ points.” – Alan Kay

Key takeaways/applications: this is probably the most important mental model that exists, because it affects our ability to understand every other one.

Psychologists often use the analogy of a “lens” to explain schema: it literally serves as a filter between the world and what our brains perceive. In “Cognitive Behavior Therapy” (CBT review + notes), Dr. Judith Beck uses black-painted glasses as a metaphor for how people with depression see the world, for example: everything looks dark (even if it really isn’t).  This “lens” metaphor is perfectly illustrated by the photo above, taken by my friend Alex Treece.

Many people are familiar with confirmation bias, but few are familiar with the underlying mechanisms.  Understanding how our perceptions work allows us to make more accurate decisions by attempting to broaden our perspectives and “add vantage points” to literally see the same situation in different ways.  

Similarly, understanding how others’ perceptions work (empathy) allows us to communicate and lead more effectively. 

Important submodels we’ll discuss include selective perception, confirmation bias, ideology, and framing.

Three brief examples of schema / selective perception / confirmation bias / ideology / framing:

Would you land a 747 on an occupied runway?  Don’t laugh, it’s a serious question.

As Laurence Gonzales discusses in Deep Survival (DpSv review + notes) – a wonderful book we’ll delve into more later – even highly trained pilots have been known to do this (in simulations, thankfully).  If a plane shows up in the middle of the runway out of nowhere, they often fail to notice it entirely.

There are (obviously) real-world consequences of our tendency to make these kinds of oversights; we’ll explore some relayed by Dr. Jerome Groopman in How Doctors Think (HDT review + notes).

How many geniuses does it take to parboil a potato to make it roast faster?  Some extra time in the oven isn’t gonna hurt anyone; some extra time in developing a technology to end the deadliest military conflict in human history… well, that’s another story.

The genius scientists of the Manhattan Project faced a challenging bottleneck: how to enrich enough natural uranium – overwhelmingly inert uranium-238 – to bomb-ready concentrations of fissile uranium-235.  Several processes were eliminated because the scientists, as Richard Rhodes puts it in “The Making of the Atomic Bomb” (TMAB review + notes),

“considered only those processes that enriched natural uranium all the way up to bomb grade.”  

Scientists eventually realized that they could use multiple processes, starting with thermal diffusion, which couldn’t enrich uranium all the way to bomb grade but could handle the initial pass much more quickly.  This oversight was later viewed by project heads Robert Oppenheimer and Leslie Groves as:

“a terrible scientific blunder […] one of the things I regret the most.”

Rhodes goes on to explain how the real bottleneck wasn’t uranium enrichment, but rather their schema – the scientists:

“thought of the several enrichment and separation processes as competing horses in a race.  That had blinded them to the possibility of harnessing the processes together.”  

Medicine.  Confirmation bias can cause intelligent, highly educated people to bend over backward to defend an irrational point of view: Meredith Wadman’s The Vaccine Race (TVR review + notes) provides an example of how National Institutes of Health experts published a paper in 1963 discouraging the use of human diploid WI-38 cells for producing vaccines with the spurious logic of:

“there can be no absolute guarantee that a given strain of continuously cultured cells will never yield a previously unknown virus… that is infective and [disease-causing] for some cells… under some conditions.”

Of course, via inversion, the asteroid-sized holes in the logic are evident – there’s never a guarantee that any strain of cells will never develop any infection under any conditions – and as Wadman thoroughly establishes, those same scientists were conveniently ignoring the fact that the rhesus monkey kidney cells then used for producing vaccines were known to harbor the potentially cancer-causing SV40 virus.

If this sounds interesting/applicable in your life, keep reading for unexpected applications and a deeper understanding of how this interacts with other mental models in the latticework.

However, if this doesn’t sound like something you need to learn right now, no worries!  There’s plenty of other content on Poor Ash’s Almanack that might suit your needs. Instead, consider checking out our guided learning journeys, our discussion of the opportunity cost, product vs. packaging, or zero-sum vs. win-win games mental models, or our reviews of great books like “Ordinary Men” (OrdM review + notes), “The Landscape of History” (LandH review + notes), and “How Not To Be Wrong” (HNW review + notes).

Schema Mental Model: A Deeper Look

“Along with confirmation bias, the brain comes packaged with other self-serving habits that allow us to justify our own perceptions and beliefs as being accurate, realistic, and unbiased.  

Social psychologist Lee Ross calls this phenomenon “naive realism,” the inescapable conviction that we perceive objects and events clearly, “as they really are.””

– Carol Tavris and Elliot Aronson on page 42 of “Mistakes were Made (but not by me)” (MwM review + notes)

As you might imagine, the way we perceive and process the world around us is so fundamental that it interacts with pretty much everything.  While interactions between the “schema” mental model and others are scattered around PAA – for example, agency, bottlenecks, memory, local vs. global optimization, and stress / humor – here we’ll delve deep into just the mechanics of schema.

The rest of the model is split into three sections.  

This book is particularly effective at driving home the realization that selective perception, confirmation bias, and other such phenomena are not “other people” problems – they’re “all of us” problems, as they’re hard-wired into human circuitry. Thankfully, with agency, probabilistic thinking, and structural problem solving, we can mitigate many of the negative effects thereof, and via inversion, use those effects to our advantage.

The first section: selective perception, identifying how we (subconsciously) filter out the vast majority of stimuli we’re exposed to, more or less without any agency involved, thereby potentially not even noticing important stimuli (like the pilots not noticing the plane).

The second section: confirmation bias / ideology / the eponymous “schema,” identifying how we semi-consciously fit the stimuli we do notice through the “filter” of our existing beliefs and worldview, thereby discarding potentially important stimuli we do notice (like the Manhattan Project scientists assuming thermal diffusion wasn’t a useful process for enriching uranium, because it didn’t conform to their belief of “useful process must take us from start to finish.”)

The third and final section: framing, or how, via agency, we can consciously present information to people in ways that will meaningfully influence the way they perceive, interpret, and respond to it.

A note on terminology: although “schema” technically refers to our set of beliefs and worldviews that act as a lens through which we perceive and interpret information, I use the term somewhat more loosely / generally than it might be formally defined in psychology, simply because what I give up in precision, I gain in utility by having a catch-all term to group the various phenomena by which we see… or don’t see… the world around us.

Selective Perception Mental Model

“the human brain receives eleven million pieces of information every second from our environment, [but] can process only forty bits per second, which means it has to choose what tiny percentage of this input to process and attend to, and what huge chunk to dismiss or ignore.”  

– Psychologist Shawn Achor in “Before Happiness” (BH review + notes)
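Just to make “tiny percentage” concrete, here’s some quick back-of-the-envelope arithmetic using Achor’s figures, taken at face value (a rough sketch – estimates of both numbers vary widely depending on the source):

```python
# Rough arithmetic using the figures Achor quotes (estimates vary by source);
# the point is just the order of magnitude of what gets filtered out.
incoming_bits_per_second = 11_000_000   # sensory information hitting the brain
processed_bits_per_second = 40          # what we consciously attend to

fraction_attended = processed_bits_per_second / incoming_bits_per_second
print(f"{fraction_attended:.2e}")   # ~3.6e-06
print(f"{fraction_attended:.4%}")   # ~0.0004% attended; ~99.9996% filtered out
```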

 

The exact numbers are irrelevant, but the point remains valid.  At any given moment, there are innumerable stimuli hitting our brains.   Achor’s Before Happiness and his previous outing The Happiness Advantage (THA review + notes) are concise, witty books delivering an array of research-backed methods for reorienting our brains to perceive and respond to the world in a way that makes us likely to be happier and more successful.  I discuss some of his work extensively in the learned helplessness / growth mindset section of the agency mental model, so I won’t duplicate that overview here.

Yes yes, we’ll get to the gorilla in a second, I promise. It’s literally Samir’s First Rule of Psychology Books that any psychology book, by law, apparently must mention the marshmallow experiment, Milgram, or the gorilla experiment.  I guess that law applies to mental models too.  But Harambe is not the gorilla you’re looking for. Everyone noticed this gorilla.  This is, to wildly bastardize Stephen Covey, the “wrong gorilla.”

Try this exercise: ask how many different things you can see, hear, smell, taste, and feel right now.  I can see two different colors of paint on the walls of the room I’m in, and there’s a glass of water to my left.  I’m wearing a maroon shirt and well-worn Levi’s that are a decade old, with the holes to match. The most recent lyrics of the song that’s on right now go something like “as I’m looking to the sky to count the stars, I wonder if you see them where you are?”  

There’s a twinge in my right shoulder from too much typing and a breeze coming from the fan to my right.  An hour after dinner, my mouth still tastes vaguely like sweet potatoes roasted with rosemary, garlic, sage, and plenty of olive oil.

I haven’t even gone beyond the most noticeable stimuli.  If I take the time to notice all of the things around me, I’m never going to get my thoughts about schema down on paper, let alone make decisions about which small-cap stocks are worth investing in.  So, naturally, my brain does me a favor by automatically filtering out all of those irrelevant stimuli I just mentioned, so they don’t distract me from the task at hand: writing.

Except the problem is that we’re not always so good at defining what is relevant or irrelevant.  The classic psychology experiment on selective perception involves counting how many times the players wearing white pass the basketball.

Go do that now.  It’s tricky – everyone’s moving around a lot – so you’d better pay close attention.  Here’s the link.

… did you see the gorilla?

If you did, congrats!  You have received a clinical diagnosis of ADD and you need to go read Cal Newport’s Deep Work (DpWk review + notes) immediately, or get some more sleep, or read the activation energy mental model.  Or, y’know, preferably all three.

If you didn’t notice the gorilla, it’s okay.  Not only are you not alone, but you’re not even the most embarrassed test-taker here.  Megan McArdle has the most novel take of all in The Up Side of Down (UpD review + notes), where she chats with someone who, forget the flippin’ gorilla, didn’t notice that one of the basketball players was his own brother.  Talk about an awkward Thanksgiving.

Why do many of us fail to notice the gorilla?  As with most human cognitive quirks, it’s a feature, not a bug: it’s an  adaptive trait.  Laurence Gonzales explains on pages 79 – 81 of Deep Survival (DpSv review + notes):

“the implicit assumption is that you know what you’re doing and know what sort of perceptual input you want […]

such a closed attitude can prevent new perceptions from being incorporated into the model.

Gorillas are not helpful in completing the task [of counting the number of passes].  […] Gorillas are irrelevant and would displace the task in working memory.

So the brain, efficient system that it is, filters out the gorilla so that you can keep counting.  

Seeing the gorilla would be a mistake. You’d lose count.”

Of course, this sort of inattentional blindness is merely amusing when we’re sitting in front of our computers (or phones) in air-conditioning.  If it means we miss a bus while crossing the street (as Alex Rogo’s dad apparently did in The Goal – Goal review + notes – because he was too busy worrying), well, that’s bad.

If you’re wondering if this is purely theoretical, or something that intelligence and education exempt you from… it’s not.  On page 179 of How Doctors Think – HDT review + notes -Dr. Jerome Groopman mentions that framing an analysis as part of a routine physical led 60% of trained radiologists to not notice a missing clavicle (?!) in a chest x-ray. 

Think about that for a second: under certain conditions, the majority of highly intelligent, trained, educated, presumably motivated doctors whose job it is to notice abnormalities in x-rays don’t notice that someone’s collarbone is missing.  We only see what we’re looking for.

Gonzales notes that in critical situations, it’s important to keep perception narrow enough that we’re focused on the right variables, but broad enough that we don’t miss opportunities (or threats) that we’re not looking out for.

Taking our “lens” metaphor a step further, think about what the word “focus” literally means.  If your camera focuses on one thing – for example, your face and your friend’s in the foreground – then it can’t focus on everything else; the mountains and trees in the background are literally “out of focus.”

This metaphor applies to how we see the world, literally: Achor cites research in Before Happiness from Richard Nisbett demonstrating that Westerners tend to focus on the “protagonists” of a landscape, while East Asians tend to focus on the context.  I’m not really a fan of Nisbett’s book “Mindware” (Mndwr review) – but it’s a thought-provoking experiment.

How can we broaden our perception to miss less?  Being aware of it is half the battle; structural problem solving approaches such as listing  counterfactuals can help us mitigate the ensuing  overconfidence.

One specific thought-provoking example comes from Dr. Jerome Groopman’s aforementioned How Doctors Think (HDT review + notes).  Groopman, on page 99, discusses the downside of “checklist” type approaches to diagnosis:

“Electronic technology can help organize vast clinical information and make it more accessible, but it can also drive a wedge between doctor and patient […] it also risks more cognitive errors, because the doctor’s mind is set on filling in the blanks on the template.  

He is less likely to engage in open-ended questioning, and may be deterred from focusing on data that do not fit on the template.”

The popular “The Checklist Manifesto” (TCM review + notes) by Atul Gawande is a great book about structural problem solving, but I think many people who read the book have a tendency to overinterpret where checklists are – and are not – useful.  They are useful for solving memory issues and for raising the activation energy of making mistakes in routine procedures.

But in more complicated, open-ended situations, I find that going down a list can be counterproductive.  For example, I’ve had analogous experiences as an investor while interviewing CEOs and CFOs about their businesses.

Obviously, I always like to go in prepared with a list of specific questions about important issues, but I also find it useful to be able to “spitball” and go off script (sometimes, way off-script) if management says something interesting.

I’ve found that if I stick too closely to “interrogation” and trying to fill in answers to my specific questions (i.e. the doctor filling in the template), I often end up with a less useful total result than if I mix the two.

That’s merely one of a trillion examples of selective perception.  Read Deep Survival (DpSv review + notes) or Before Happiness (BH review + notes) for deeper understanding, or check out the stress / humor, man-with-a-hammer, and agency mental models for ideas.

Application/impact: although we feel like we perceive everything we need to, the truth is that we only perceive a very limited subset of the stimuli the world is throwing at us.  Actively becoming aware of this, and instituting approaches to make sure we’re exposed to the stimuli we need to be exposed to, can be an effective countermeasure.

Confirmation Bias / Ideology / Schema Mental Model

“Democrats will endorse an extremely restrictive welfare proposal, usually associated with Republicans, if they think it has been proposed by the Democratic Party.  Republicans will support a generous welfare policy if they think it comes from the Republican Party.

Label the same proposal as coming from the other side, and you might as well ask people if they will favor a policy proposed by Osama bin Laden.”

– Tavris/Aronson in Mistakes were Made (but not by me) (MwM review + notes)

One of the more famous Charlie Munger quotes has to do with ideology:

“Another thing I think should be avoided is extremely intense ideology because it cabbages up one’s mind. …

When you’re young it’s easy to drift into loyalties and when you announce that you’re a loyal member and you start shouting the orthodox ideology out, what you’re doing is pounding it in, pounding it in, and you’re gradually ruining your mind.”

While the word “ideology” is usually understood in a political context – i.e., one’s position on hot-button issues like gun control, abortion, or immigration – the underlying mechanism is the same whether the question at hand is U.S. policy toward the Middle East, or the best NFL quarterback of all time.

When we believe something – i.e., hold a certain ideology – we have a tendency to filter all information we notice through the lens of that ideology, accepting that which fits, and discarding that which doesn’t.

This mechanism is explored in some depth by Dr. Judith Beck in Cognitive Behavior Therapy (CBT review + notes) through the lens of mental health.  One of her (anonymized or fictitious) patients, “Sally,” for example, felt bad because a friend hadn’t engaged in a conversation with her.

Of course, there could have been any number of possible explanations – her friend might have been busy, or distracted, or might not even have noticed her walking by – but Sally zeroed in on the explanation that confirmed her belief: that her friend didn’t really like her or want to be her friend.

This tendency is known as confirmation bias, which also encompasses our  habit of only searching out confirmatory evidence, rather than that which is disconfirmatory.  Various books examine this at length.

In addition to the aforementioned Before Happiness, Mistakes were Made, and Cognitive Behavior Therapy (all of which are great), three of my favorites are Richard Thaler’s “Misbehaving” (M review + notes), Philip Tetlock’s Superforecasting (SF review + notes), and Jerome Groopman’s How Doctors Think (HDT review + notes).

Richard Thaler’s “Misbehaving” (M review + notes) provides the best example of confirmation bias of all.  In addition to being the far-and-away best book in the universe on the topic of cognitive biases and behavioral economics from a theoretical and applied point of view, it also has a storytelling angle, exploring how economists themselves acted more like humans than econs.

One of the consistent themes throughout the book is how most economists clung tenaciously to the rational-actor ideology – i.e., the premise that economic agents, i.e. humans, make perfect, optimal, rational decisions all the time – a premise that is obviously wrong (look around).

Sometimes the classical economists’ confirmation bias is vividly on explicit display: one unusually candid economist literally asked Thaler: if your newfangled theory is correct, what do I do?  I’ve spent my entire career figuring out how to do it the old way!

A less obvious example – but still funny – is Thaler’s discussion of the five-factor Fama-French model of stock returns, an extension of the CAPM / efficient-markets framework.  The factors, which are framed as “risks,” have been added over time to account for return patterns the market shouldn’t produce if it were efficient.  One of the newer factors is “profitability,” leading Thaler to observe, dryly:


“it is difficult to tell a plausible story in which highly profitable firms are riskier than firms losing money.”
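For context – this is my gloss on the model, not Thaler’s – the five-factor version explains a stock’s excess return as a linear combination of exposures to market, size, value, profitability, and investment factors, roughly:

```latex
% Fama-French five-factor model (standard textbook form; my summary, not a quote from Misbehaving)
R_{it} - R_{ft} = \alpha_i
    + \beta_i \,(R_{Mt} - R_{ft})  % market factor
    + s_i \,\mathrm{SMB}_t         % size: Small Minus Big
    + h_i \,\mathrm{HML}_t         % value: High Minus Low book-to-market
    + r_i \,\mathrm{RMW}_t         % profitability: Robust Minus Weak
    + c_i \,\mathrm{CMA}_t         % investment: Conservative Minus Aggressive
    + \varepsilon_{it}
```

RMW – “robust minus weak,” the profitability factor – is the one Thaler is needling: the model treats profitability as a “risk” investors demand compensation for, which is exactly the kind of story his quote above calls implausible.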
  

 

Similarly, one of the big themes in Groopman’s wonderful examination of how doctors come to the right diagnoses – or the wrong ones – is the tendency to zero in on a single explanation for a patient’s symptoms, instead of thinking probabilistically and acknowledging the role of multicausality.

Groopman notes, both with statistics and  vivid examples, that highly-trained doctors can still miss relatively easy diagnoses thanks to cognitive biases:

“inadequate medical knowledge was the reason for error in only four [of a hundred] instances […]

rather, [doctors] missed diagnoses because they fell into cognitive traps.”

Confirmation bias specifically pops up a lot, ranging from a quick example about misdiagnosed aspirin poisoning to his extended, heart-wrenching discussion of an adopted Vietnamese infant misdiagnosed with severe combined immunodeficiency (SCID).

One of the solutions, discussed more in depth in the  probabilistic thinking mental model, is to practice always generating  counterfactuals: as one doctor interviewed by Groopman recommends,

“even when I think I have the answer, to generate a short list of alternatives.”  

This isn’t just for doctors: Groopman thoughtfully discusses his own experience as a patient, and provides readers with a practical approach to helping their doctors come to more accurate diagnoses, including asking questions like what else could cause the presenting symptoms, whether there’s any disconfirming evidence in the current data or past history that counters the diagnosis, and whether multiple causes (rather than just one) could be leading to the symptoms.

Philip Tetlock echoes a similar sentiment in Superforecasting (SF review + notes), his fascinating exploration of how untrained, ordinary people beat expert predictions by following a specific thought process.

While the book covers far too much ground to be boiled down to a “talking point,” Tetlock does posit one slogan – which the world should adopt en masse.  I’ve bolded it because it’s super important:

“For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded.”

Of course, it’s worth mentioning that, as with most things, our tendency to filter the world through beliefs is generally adaptive.  Both because of the volume of new information and the associative nature of our memory, we’d never get anywhere if we didn’t filter consciously, just as we’d never get anywhere if we didn’t filter unconsciously.

Indeed,  Bayesian reasoning depends on having  probabilistic beliefs, many with a high degree of certainty.  If you see someone who claims they can make it rain, and then it starts raining, and they do that on five separate occasions, you still shouldn’t believe them, for reasons explained variously by Nate Silver in The Signal and the Noise (SigN review + notes), and Jordan Ellenberg in How Not To Be Wrong (HNW review + notes).
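Here’s a minimal sketch of that logic with entirely made-up numbers (my illustration, not Silver’s or Ellenberg’s): if your prior that anyone can summon rain on command is low enough, even five straight “successful” demonstrations barely budge it.

```python
# Toy Bayes'-rule illustration with hypothetical numbers (my own sketch, not
# drawn from Silver or Ellenberg): a sufficiently skeptical prior survives
# five consecutive "successful" rain-making demonstrations.
prior = 1e-9      # assumed prior probability that anyone can actually summon rain
p_rain = 0.3      # assumed chance it rains anyway on a day they "perform"

posterior = prior
for _ in range(5):
    # If the claim were true, rain is certain (likelihood 1.0); if false,
    # rain still happens at the base rate p_rain.
    p_rain_observed = 1.0 * posterior + p_rain * (1 - posterior)
    posterior = (1.0 * posterior) / p_rain_observed

print(f"Posterior after five rainy demonstrations: {posterior:.1e}")  # ~4.1e-07
```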

The mental models approach itself basically involves identifying maladaptive beliefs and replacing them with more adaptive ones so we can more accurately assess and respond to the world around us.

Application / impact: whatever information makes it through the filter of our attention and awareness (selective perception) must go through a second filter comprised of our beliefs and worldviews.  Being willing to replace untrue or maladaptive beliefs with more adaptive ones can be emotionally challenging, but is absolutely critical for effectively assessing and responding to the world.

Framing Mental Model (x Contrast Bias x Loss Aversion)

The last aspect of schema we’ll discuss – keeping it short and funny, because this has been long enough already – is the idea of “framing.”  As Richard Thaler explores in more depth in Misbehaving (M review + notes), presenting the exact same information in different ways can lead to people making different decisions.  I’ll go into more detail in mental models like loss aversion, product vs. packaging, salience / vividness, and feedback.

For now, I’ll just present a fictitious letter that went viral decades ago and is a favorite among some psychology professors (which is where I encountered it originally).

Dear Mom and Dad:

It has been three months since I left for college. I have been remiss in writing and I am very sorry for my thoughtlessness in not having written before. I will bring you up to date now, but before you read on, please sit down. Do not read  any further unless you are sitting down.

I am getting along pretty well now. The skull fracture and the concussion I got when I jumped out of the window of my dormitory when it caught fire shortly after my arrival are pretty well healed. I only spent two weeks in the hospital and now I can see almost normally and only get headaches once a day.

Fortunately, the fire in the dormitory and my jump was witnessed by an attendant at the gas station near the dorm.  He called the Fire Dept. and the ambulance. He also visited me at the hospital and, since I had nowhere to live because the dormitory burned down, he was kind enough to invite me to share his apartment with him. It’s really a basement room, but it’s kind of cute. He is a very fine boy and we have fallen deeply in love and are planning to get married. We haven’t set the exact date yet, but it will be before my pregnancy begins to show.

Yes, mom and dad, I am pregnant. I know how very much you are looking forward to being grandparents and I know you will welcome the baby and give it the same love and devotion and tender care you gave me when I was a child. The reason for the delay in our marriage is that my boyfriend has some minor infection which prevents us from passing our premarital blood tests and I carelessly caught it from him. This will soon clear up with the penicillin injections I am now taking daily.

I know you will welcome him into the family with open arms. He is kind and although not well educated, he is ambitious. Although he is of a different race and religion than ours, I know that you won’t mind.

Now that I have brought you up to date, I want to tell you there was no dormitory fire; I did not have a concussion or a skull fracture; I was not in the hospital; I am not pregnant; I am not engaged. I do not have syphilis, and there is no man in my life. However, I am getting a D in sociology and an F in science; and I wanted you to see these grades in proper perspective.

Your loving daughter,

Jane

As my psychology professor put it: she may be getting an F in science, but she’s getting an A in psychology!  She invokes both framing and contrast bias: well, some bad grades aren’t so bad compared to all that…

Framing pops up all over the place, from the classic Tom Sawyer fence-painting story to Dale Carnegie’s “How to Win Friends and Influence People” (HWFIP review + notes).  Similarly, the aforementioned Shawn Achor discusses in The Happiness Advantage how re-framing your work from being a “job” to a “calling” can lead to much higher levels of satisfaction and engagement.

Thanks to loss aversion, it turns out that it’s usually more motivating for people to avoid a loss than to secure an equivalent gain.  One example I saw somewhere (blanking on where) is a medical one: framing a patient directive as “if you don’t take this pill, you’re twice as likely to have a heart attack” is more effective than “if you take this pill, you’re half as likely to have a heart attack,” even though the two statements are mathematically equivalent.
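To see that those two framings really do carry identical information, here’s a toy illustration with hypothetical numbers (mine, not from any study):

```python
# Hypothetical numbers, purely for illustration: the same pair of probabilities
# supports both the loss frame and the gain frame.
risk_without_pill = 0.10   # assumed 10% chance of heart attack without the pill
risk_with_pill = 0.05      # assumed 5% chance with the pill

gain_frame = risk_with_pill / risk_without_pill      # "half as likely if you take it"
loss_frame = risk_without_pill / risk_with_pill      # "twice as likely if you don't"

assert gain_frame == 1 / loss_frame   # identical information, different emphasis
print(gain_frame, loss_frame)         # 0.5 2.0
```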

Application/impact: how we present information has a big impact on how it’s perceived.  Use mental models to understand how to present information in the manner in which it will be most effectively received.