If this is your first time reading, please check out the overview for Poor Ash’s Almanack, a free, vertically-integrated resource including a latticework of mental models, reviews/notes/analysis on books, guided learning journeys, and more.
Multidisciplinary Rationality + Mental Models: How It Works
The concept in one quote: “In my whole life, I have known no wise people who didn’t read all the time – none, zero. But that’s not enough: you have to have a temperament to grab ideas and do sensible things. Most people don’t.” – Charlie Munger
The concept in one sentence: we can make more effective decisions by reading broadly to learn the most important and timeless concepts – “mental models” – and developing a full understanding of how they might interact in any given situation.
Key takeaways/applications: a substantial body of research suggests that anyone who ever makes decisions – a parent, a friend, a doctor, an executive, an investor, a student, or anyone else – will be happier, healthier, wealthier, and more effective/successful at whatever they choose to do if they follow the approach developed by Munger, and discussed in “Poor Charlie’s Almanack” (PCA review + notes). (Munger is Warren Buffett’s billionaire business partner at Berkshire Hathaway, and quite possibly the wisest man to walk American soil since Benjamin Franklin.)
Three brief examples of mental models:
Contrast bias and loss aversion. Humans are hardwired to notice changes; we’re especially hardwired to notice changes that affect us negatively. Nobel prize winner Richard Thaler – author of my favorite book, “Misbehaving” (M review + notes) – notes that:
Economists thought that fairness was a silly concept mostly used by children who don’t get their way.
It turns out that’s not the case: people’s sense of fairness can lead them to reject offers that would make them better off; in other words, it’s such a powerful mental model that it can even override incentives.
Yet many business executives – ranging from a former president of Coca-Cola to the executives of Netflix to Travis Kalanick, formerly of Uber – have made an elementary and potentially deadly mistake by ignoring this basic reality about human nature.
Unintended consequences ( n-order impacts). The world is a complex system with feedback effects, meaning that any intervention – whether as simple as the way you talk to your kids, or as complex as tax policy – will lead to unintended and potentially undesirable side effects. Attempting to carefully anticipate the second-order responses of the system to your intervention – and, if applicable, third-order, fourth-order, etc – will allow you to make more effective decisions.
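A toy simulation can make the "second-order response" idea concrete. The scenario and numbers below are invented for illustration (they're not from any book discussed here): we "intervene" in a pest population by removing a fixed number each season, but because growth is density-dependent, the system pushes back.

```python
# Illustrative toy model of a feedback system (all parameters invented).
# Naive first-order expectation: 1000 pests minus 100 removed per season
# means extinction in 10 seasons. Second-order reality: growth speeds up
# at lower density, and the population stabilizes well above zero.

def simulate(n0, r, k, removal, seasons):
    """Logistic growth with a constant per-season removal."""
    n = n0
    for _ in range(seasons):
        n = max(0.0, n + r * n * (1 - n / k) - removal)
    return n

final = simulate(n0=1000, r=0.5, k=1000, removal=100, seasons=100)
print(round(final))  # settles near ~724, not 0 -- the system compensated
```

The point isn't the specific numbers; it's that interventions in feedback systems rarely produce the straight-line results a first-order analysis predicts.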
Amidst this challenge, books like Philip Tetlock’s “Superforecasting” (SF review + notes) explore how, by using a specific thought process that combines probabilistic thinking with disaggregation, ordinary individuals like you and me can make more accurate predictions than pedigreed, over-resourced experts – in the experts’ own fields.
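The disaggregation idea can be sketched in miniature: instead of guessing at a big question in one go, break it into smaller pieces you can estimate separately, then combine the estimates. The question and probabilities below are invented purely for illustration, not drawn from Tetlock's book.

```python
# Fermi-style disaggregation (illustrative numbers only): estimate each
# necessary condition separately, then multiply -- assuming, roughly,
# that the conditions are independent.
estimates = {
    "feature ships on time": 0.80,
    "no major competitor launches first": 0.70,
    "marketing reaches the target audience": 0.60,
}

p_success = 1.0
for condition, p in estimates.items():
    p_success *= p

print(f"combined estimate: {p_success:.2f}")  # 0.80 * 0.70 * 0.60 = 0.34
```

Each sub-estimate is easier to sanity-check than the headline question, which is much of where the accuracy gain comes from.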
Many other game-changing insights just like that are available to anyone willing to buy a few books and spend time reading, thinking about, and applying the concepts contained within.
Multidisciplinary Rationality + Mental Models: A Deeper Look
“[… they] are often constructed from fragmentary evidence, with only a poor understanding of what is happening, and with a kind of naive psychology that postulates causes, mechanisms, and relationships even where there are none.”
That quote – from my second-favorite book, Don Norman’s “The Design of Everyday Things” (DOET review + notes) – happens to be about refrigerators and thermostats, and why they can be so gosh-darn hard to operate and understand.
But Norman’s insights on product design – as he himself points out – apply to far more than just products, because we’re all designing our lives and the way we do things. In fact, the aforementioned Richard Thaler called “The Design of Everyday Things” the “breakthrough organizing principle” for his own phenomenal book, “Nudge” (Ndge review + notes).
Nudge, written with coauthor Cass Sunstein, synthesizes Norman’s ideas about human-centered design with Thaler’s research on cognitive biases like loss aversion, status quo bias, activation energy, and hyperbolic discounting. As I discuss in the structural problem solving mental model, Thaler used all of those models together to create a program called “Save More Tomorrow” that helped people triple the amount they were saving for retirement – without ever making a single cutback in their lifestyle.
Thanks to the power of compounding, that doesn’t mean people will have 3x the money saved at retirement.
It probably means they’ll retire with something like 10x the money they otherwise would’ve had.
Sound too good to be true? 10x more retirement savings with no sacrifice? It’s not too good to be true. It’s real, and it’s one of the reasons Thaler won the Nobel Prize.
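The arithmetic of compounding is easy to sketch. The 7% return and 40-year horizon below are assumptions chosen for illustration; the point is only that steadily saved money compounds into a multiple of its face value, so changes in savings behavior matter far more at retirement than they feel today.

```python
# Future value of steady annual saving at an assumed 7% annual return
# over an assumed 40-year career (both numbers illustrative).

def future_value(annual_saving, rate, years):
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + rate) + annual_saving
    return balance

contributed = 5000 * 40                # $200,000 actually put in
ending = future_value(5000, 0.07, 40)  # roughly $1,000,000 at retirement
print(round(ending / contributed, 1))  # each dollar saved became ~$5
```

Under these assumptions, compounding turns every dollar contributed into roughly five – which is why small, painless changes to savings behavior can have outsized effects decades later.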
That, in a nutshell, is mental models at work: by understanding the timeless, important concepts that underlie reality, we can make massively more effective decisions with zero opportunity cost.
Mental Models: Having The Right Map
“Point of view is worth 80 IQ points.” – Alan Kay
One of the most impactful books on my personal development was Stephen Covey’s landmark “The 7 Habits of Highly Effective People” (7H review + notes). In fact, the company Covey created – Franklin Covey – is Askeladden Capital’s largest portfolio position at this time, largely thanks to my understanding of mental models like discrete vs. recurring payoffs and status quo bias.
Many people believe that hard work is the key to success. And hard work certainly can lead to success – but only in certain circumstances.
“If you have the right map of Chicago, then diligence becomes important, and when you encounter frustrating obstacles along the way, then attitude can make a real difference.
But the first and most important requirement is the accuracy of the map… correct maps will infinitely impact our personal and interpersonal effectiveness far more than any amount of effort expended on changing our attitudes or behaviors.
Our attitudes and behaviors grow out of those assumptions. The way we see things is the source of the way we think and the way we act.”
Covey is referencing a mental model here – schema – the lens or “filter” through which we perceive the world and consciously or subconsciously process information. In fact, as I explore in the cognition / intuition / habit / stress mental model, if we make a habit of thinking about things the right way, it naturally becomes easier and easier to do that – freeing up cognitive resources to solve tougher problems.
What a lot of people don’t realize, unfortunately, is that hard work and effort are not universally good, but rather traits that are sometimes adaptive, sometimes maladaptive. Sometimes, hard work leads you closer to failure than success. “Grit” kills, in more ways than one.
A literal example of Covey’s discussion of the importance of an accurate map can be found in the phenomenal “Deep Survival” (DpSv review + notes). Author Laurence Gonzales observes that when hikers get lost, they often tend to “bend the map” – i.e., squint and scrunch and look at the map in just the right light so they don’t have to admit they’re lost – and then press on even harder and faster.
By following their inaccurate mental map with great diligence and perseverance, hikers inadvertently get themselves into a double whammy of trouble – they get further lost, and they sacrifice sleep / rest, leaving them vulnerable to fatigue that not only saps their energy, but also distorts their cognition and decouples the rational prefrontal cortex from the emotional amygdala, leading them to make worse decisions.
This is why many successful people don’t just work, work, work all the time, but make sure to step back frequently and ensure their work is heading in the right direction – making sure, as Covey explains in “The 7 Habits of Highly Effective People,” that we’re not wasting our time chopping down trees in the “wrong jungle.” For example, if we find ourselves in an arms race, no amount of hard work will ever allow us to win.
Yet this sort of sensible behavior is often discouraged by companies, thanks to product vs. packaging type errors. Many bosses and clients want to see their hard-earned money hard at work… whether or not that work is generating any value is often beside the point.
So, accurate maps are important. How do we create accurate maps with high utility? Historian John Lewis Gaddis makes a similar point to Covey’s in “The Landscape of History” (LandH review + notes), a great book about the challenging process of understanding history. Gaddis notes that all maps are abstractions – a 1-to-1 map would be useless, as it would be the territory itself – so it’s important to select the right details.
What are the right details? Gaddis argues, sensibly:
“There’s no such thing as a single correct map. The form of the map reflects its purpose.”
He provides the example of a highway map, which doesn’t need to note vegetation; he also notes that a globe wouldn’t work for a road trip, but would for a plane flight. He goes deeper into this idea in his astonishingly insightful and concise 300-page masterpiece “On Grand Strategy” (OGS review + notes), which distills historical lessons from Ancient Rome to Elizabethan England to WWII America:
“Theory extracts lessons from infinite variety. It sketches, informed by what you need to know, without trying to tell you too much. For in classrooms, as on battlefields, you don’t have unlimited time to listen.” – John Lewis Gaddis
Gaddis notes that such theory amounts to “sketches” that “convey complexity usably. They’re not reality. They’re not even finished representations of it. But they can transmit essential if incomplete information on short notice.”
We build these theories/sketches by “seeking patterns – across time, space, and status – by shifting perspectives.” (We’ll get to that idea, schema, in the next section.) But the ultimate point is to be able to:
“draw upon principles extending across time and space, so that you’ll have a sense of what’s worked before and what hasn’t. You then apply these to the situation at hand: that’s the role of scale. The result is a plan, informed by the past, linked to the present, for achieving some future goal.”
With that in mind, an important distinction between mental-models learning and the type of reading you had to do in school is that mental models have utility. They’re useful. Chronobiologist Till Roenneberg – whose “Internal Time” (IntTm review + notes) is an under-appreciated but phenomenally important book – observes, insightfully:
“the drawback of traditional learning has always been the dissociation between the theory and its application. ‘Why do we have to learn this?’ is probably one of the most frequent and justified questions teachers hear.”
If you ever have to ask me “why do I have to learn this?” – then I’ve failed you. There’s no reading tedious Jane Austen crap here. No memorizing pointless names and dates. No doing long division by hand. Mental models are maps that help you navigate the world you actually live in.
Poor Ash’s Almanack is a website designed to help you – yes, you, I’m looking at you specifically – learn what is most immediately useful and helpful to you. As Gaddis points out, a map is only useful in context, and depending on your own career, temperament, goals, and other factors, some models – and some books – may be more or less important to you now than to your friends, or to me, or to my friends.
Poor Ash’s Almanack provides three kinds of mental models content: a latticework of mental models, reviews/notes/analysis on books, and guided learning journeys to assist you in working your way through the first two.
Let’s delve a little deeper into the advantages of building a latticework of mental models – Covey’s proverbial “map of Chicago.”
“Adding Vantage Points” – Circle of Competence, Man With A Hammer, and Why Mental Models Work
Why do Yale medical students go to art museums?
No, it’s not a brain teaser, or the setup to a joke. (The punchline is leprosy… and also male menopause.) Positive psychologist Shawn Achor, and anyone who’s read his books, will understand the answer.
In his second book, “Before Happiness” (BH review + notes), Achor raises the idea of “adding vantage points” as a key to success; basically, this means stepping outside your own schema (worldview) to view problems from another angle.
Achor explains that rigorous scientific research:
“shows that a [schema] based on only one vantage point is limited and full of blind spots.”
Charlie Munger frequently refers to this “one-vantage-point” approach with the classic quip: to a man with a hammer, the world looks like a nail.
Munger cites, as an example, a gallbladder surgeon who – due to incentives, but also due to his sole focus on gallbladders – decided that gallbladder surgery was an absolute necessity for everybody in every context.
It may be apocryphal, and you may be laughing. But it’s a real thing doctors do: in the culture / status quo bias model, I cite a story from David Oshinsky’s medical history “Bellevue” (BV review + notes) about an 1870s surgeon who decided circumcision was the miracle cure-all for ailments as varied and impressive as “club foot, epilepsy, and serious mental conditions.”
Experiments proved… well, let’s charitably call them “unsupportive of the hypothesis.” But just like Munger’s gallbladder surgeon, this 1870s surgeon proceeded undeterred, a man with a hammer – a pair of scissors, in this case – and, thanks to the astonishing cross-generational power of status quo bias, that’s why routine infant circumcision remains prevalent in the United States today, a century and a half later. Snip snip.
Doctors haven’t grown out of overconfidence with better medical education, either. (Please note that I’m not picking on doctors because I have it out for them – in fact, it’s the opposite: I greatly admire them and the work they do. Because of that, I’ve read a fair number of books about medicine and medically-oriented science, which is why so many of my examples come from that field.)
Dr. Jerome Groopman observes in the marvelous, truly insightful “How Doctors Think” (HDT review + notes) that many people – doctors included – assume that specialization automatically confers, or removes the need for, multidisciplinary worldly wisdom of the sort that Munger discusses.
Au contraire. Groopman quotes Dr. Eric Cassell’s Doctoring: The Nature of Primary Care Medicine:
“One should not confuse highly technical, even complicated, medical knowledge… with the complex, many-sided worldly-wise knowledge we expect of the best physicians.”
“Specialists take care of difficult diseases, so, of course, they will naturally do a good job on simple diseases. Wrong. […]
People used to doing complicated things usually do complicated things in simple situations – for example, ordering tests or x-rays when waiting a few days might suffice – thus overtreating people with simple illnesses and overlooking the clues about other problems that might have brought the patient to the doctor.”
Groopman’s “How Doctors Think” (HDT review + notes) is a fascinating, mental-models-rich book that explores how doctors can – and do – integrate multidisciplinary wisdom, including thought processes like multicausality and probabilistic thinking, while avoiding cognitive biases like recency bias and the potentially deleterious interactions between incentives and reciprocity bias.
One important lesson from mental-models learning is to avoid overconfidence: by inversion (the process of thinking backwards), you could have deduced the conclusion of the second quote from the first. In other words: just because a specialist is an expert in one niche of medicine – endocrinology, oncology, cardiology, or so on – that doesn’t mean they’ll be a great general practitioner.
And it certainly doesn’t mean they’d automatically be particularly knowledgeable about, or able to address, issues outside their area of expertise. Yet, like specialist doctors, many of the rest of us – particularly highly intelligent, educated people – tend to overestimate our circle of competence, or the breadth of topics on which we can opine.
It’s important to know the limits of our circle of competence – and apply a margin of safety to those limits – because it protects us from making bad decisions that hurt ourselves, or those we care about.
It’s also important to know the base rates that apply in any given situation – that is, the prevalent statistical likelihoods of any given scenario. For example, while many white-collar professionals think they can get by on 6 hours of sleep, researcher Dr. Matthew Walker points out that you’re more likely to be struck by lightning than to be able to do alright on 6 hours of sleep.
The base rate is that if you’re sleep deprived (which 70% of Americans are), your prefrontal cortex (the rational part of your brain) is partially decoupled from your amygdala (the emotional part of your brain), and it may even be in an “offline, disabled state” for much of your workday.
It doesn’t take Munger’s IQ to figure out that learning and making better decisions is probably easier if the rational part of our brain isn’t offline. Yet this is how a great number of American executives, Wall Street financiers, doctors, lawyers, and students are operating today.
Over time, learning mental models helps us expand our circle of competence, and our database of base rates.
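The reason base rates deserve a place in that database is that intuition routinely ignores them. A classic style of illustration (the numbers below are invented, not drawn from any book discussed here): a seemingly accurate test for a rare condition still produces mostly false positives, because the condition's base rate is so low.

```python
# Base-rate arithmetic via Bayes' rule (illustrative numbers only):
# a 99%-sensitive, 95%-specific test for a 1-in-1000 condition.
base_rate = 0.001         # prior: P(condition)
sensitivity = 0.99        # P(positive | condition)
false_positive = 0.05     # P(positive | no condition)

# Total probability of testing positive, then Bayes' rule.
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive
print(f"{posterior:.1%}")  # ~1.9% -- a positive result is still usually wrong
```

Without the base rate, most people would guess the posterior is near 99%; with it, the answer is under 2%. That gap is exactly the kind of blind spot a database of base rates protects against.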
Timeless Principles, Internal Consistency, and Open-Mindedness
One of the important models to understand is discrete vs. recurring payoffs. The mental models approach focuses not on today’s facts or fads that will be irrelevant tomorrow, but rather on timeless principles that hold across eras.
“The reality of [natural principles or laws] becomes obvious to anyone who thinks deeply and examines the cycles of social history.
These principles surface time and time again, and the degree to which people in a society recognize and live in harmony with them moves them toward either survival and stability or disintegration and destruction.”
It is important, of course, to understand trait adaptivity – and to separate “practices” from “principles,” as Covey puts it. As John Lewis Gaddis explains, via the philosopher-historian R. G. Collingwood, in the aforementioned “The Landscape of History” (LandH review + notes), we have to be careful not to mistake:
“the transient conditions of a certain historical age for the permanent conditions of human life.”
Mental models are a search for those permanent conditions. Practices apply in specific situations: for example, if a defensive back is in press coverage, a wide receiver might use a certain technique to evade that coverage – but if the DB figures that out and responds to it, the practice will no longer work.
The underlying principle, however, might be something timeless like “footwork is the foundation of route-running.” Know where to place your feet, and you’ll always have a good shot at getting open.
The difference is important. For example, it doesn’t make any sense to communicate the same way with your doctor as with your boss or your eight-year-old niece or nephew – we’ve already seen the dangers of being a man with a hammer.
As explored in trait adaptivity, the “strength” or “weakness” of many behaviors or qualities is heavily circumstance-dependent. But one underlying principle of good communication is empathy– understanding the other person’s schema – and that principle will drive different practices in different contexts.
“you should have the same code of conduct in life and business.”
One important tenet of rationality is trying to fit all the models together in a way that makes sense. Munger often calls life “one damn relatedness after another” – and the models start to get really powerful when, unlike the man with a hammer, you can apply a full toolkit of them to any given problem.
“You must know the big ideas in the big disciplines and use them routinely – all of them, not just a few. Most people are trained in one model – economics, for example – and try to solve all problems in one way. This is a dumb way of handling problems.
What you need is a latticework of mental models in your head. And, with that system, things gradually get to fit together in a way that enhances cognition.”
“[it is] very vital to put together ideas to try to enforce a logical consistency among the various things that you know.”
“the thing that doesn’t fit is the thing that’s most interesting, the part that doesn’t go according to what you expected… [the laws] sometimes look positive, they keep on working and all of a sudden some little gimmick shows that they’re wrong, then we have to investigate the conditions under which [the exception occurred], and gradually learn the new rule that explains it more deeply.”
One of my misfits: how can fundamental attribution error and impostor syndrome occur simultaneously, in the same person? Not sure yet.
One hallmark of many forms of irrational thinking – pointed out by authors ranging from Megan McArdle on conspiracy theories in “The Up Side of Down” (UpD review + notes) to Tavris/Aronson on Freudian psychoanalysis in “Mistakes were Made (but not by me)” (MwM review + notes) – is that it’s not falsifiable, because the argument shifts and any inconsistencies are hand-waved away.
There’s strong commitment bias to the theory, notwithstanding the facts.
Where irrationality prevails, this sort of logical inconsistency often lurks not far behind. For example, anti-vaccination conspiracy theories – themselves a function of feedback x n-order impacts x salience, as I explore in the salience mental model – are one of the most obvious examples of profound irrationality.
Decades of heavily-scrutinized research prove beyond reasonable doubt that vaccines are one of the safest and most cost-effective medical interventions in human history, and recent outbreaks of previously-controlled diseases like measles and pertussis demonstrate the dangers of skipping childhood immunizations.
“[Jenny] McCarthy [a former Playboy model and prominent anti-vaccine advocate] later undercut her stop-injecting-toxins-into-our-bodies message by saying, “I love Botox. I absolutely love it.””
Similarly, one couple sought an exemption for their son because they argued that vaccination “represented an unwanted intrusion into the[ir son’s] body.”
Declining to go along with the request, a court pointed out that the parents “had circumcised [their] son and allowed dentists to fill [his] cavities.”
That is a clear failure of internal consistency.
It is, unfortunately, all too common.
Worse yet, as Mark Twain once put it: “Nothing so needs reforming as other people’s habits.”
Psychologists Carol Tavris and Elliot Aronson explore, in the wonderful “Mistakes were Made (but not by me)” (MwM review + notes), exactly how and why it’s often very easy to identify others’ mistakes… and hard to notice our own. Not impossible, thankfully – but it requires effort, attention, and thoughtfulness.
So, there you have it. Mental models aren’t complicated – but they aren’t easy, either. Everyone’s busy, and it takes a lot of time and effort to read and think and synthesize.
I’m hoping to save you a lot of that time and effort with the roughly half a million words spanning the latticework of mental models, reviews/notes/analysis on books, and guided learning journeys I’ve put together – but you still need to put the time and thought in to read the books, deeply understand the models, and then apply them in your career and personal life.
What are you waiting for? Go do it!