If this is your first time reading, please check out the overview for Poor Ash’s Almanack, a free, vertically-integrated resource including a latticework of mental models, reviews/notes/analysis on books, guided learning journeys, and more.
Structural Problem Solving / Choice Architecture Mental Model: Executive Summary
If you only have three minutes, this introductory section will get you up to speed on the structural problem solving mental model.
The concept in one quote: “So often the problem is in the system, not in the people. If you put good people in bad systems, you get bad results.” – Stephen Covey
Key takeaways/applications: Too often, people mount heroic defenses against symptoms when it would be much easier, more cost-effective, and more time-effective to address causes. Designing better systems or structures – what Sunstein/Thaler call “choice architecture” – can yield huge payoffs in personal life and public policy alike.
Three brief examples of structural problem solving:
Boys will be boys. Drivers frequently wiped out on one of the tight curves of Chicago’s Lake Shore Drive. As Cass Sunstein and Richard Thaler explain in “Nudge” (Ndge review + notes) – a phenomenal book about “structural problem solving” solutions like opt-ins and feedback – the city implemented fake speed bumps that drivers responded to as if they were real, thanks to selective perception. Sunstein and Thaler note many other examples. Most famously: strategically adhering stickers of flies to certain spots on urinals can help airports, stadiums, and other public venues cut down meaningfully on “spillage,” improving user experience and offering a huge return on investment from forgone cleaning.
Performance in critical situations. As Laurence Gonzales explores in “Deep Survival” (DpSv review + notes), stress can cause emotion to overrule cognition, leading to critical mistakes in life-or-death situations – mistakes that, with the benefit of hindsight bias, seem obvious and easily correctable. Pilots have been known to forget to “FLY THE PLANE” in critical situations, and surgeons have been known to operate on the wrong side of the body. So, as we’ll explore, checklists – as Dr. Atul Gawande highlights in “The Checklist Manifesto” (TCM review + notes) – can serve as a structural problem solving solution to our faulty and more-or-less unimprovable memory.
An investment like financial capital. Habits – espoused over the centuries by books ranging from “The Autobiography of Ben Franklin” (ABF review + notes) to “The Power of Habit” (PoH review + notes) – serve as a powerful structural problem solving solution, lowering the activation energy of desired behaviors.
If this sounds interesting/applicable in your life, keep reading for unexpected applications and a deeper understanding of how this interacts with other mental models in the latticework.
However, if this doesn’t sound like something you need to learn right now, no worries! There’s plenty of other content on Poor Ash’s Almanack that might suit your needs. Instead, consider checking out our discussion of the Bayesian reasoning, 80/20, or social connection mental models, or our reviews of great books like “The Halo Effect” (Halo review + notes), “Onward” (O review + notes), or “10% Happier” (10H review + notes).
Structural Problem Solving / Choice Architecture Mental Model: A Deeper Look
“We should treat all failures in the same way: find the fundamental causes and redesign the system so that these can no longer lead to problems.
It is not possible to eliminate human error if it is thought of as a personal failure […]
If the system lets you make the error, it is badly designed. And if the system induces you to make the error, it is really badly designed.”
That stunning insight, from Don Norman in “The Design of Everyday Things” (DOET review + notes), is probably my favorite quote of all time. “The Design of Everyday Things” is my second-favorite book of all time, and I can’t recommend it highly enough.
It’s worth noting that my favorite book, “Misbehaving” by Richard Thaler (M review + notes), cites DOET as an inspiration. Norman is a luminary whose work has a powerful resonance far, far beyond the field of design.
One core premise shared between DOET and Thaler’s “Nudge” (Ndge review + notes), coauthored with Cass Sunstein, is that we’re either part of the problem or part of the solution. Norman notes that we’re constantly designing – from our lives to the way we do things.
Similarly, Sunstein and Thaler note that given the power of default options (see status quo bias), it’s impossible to create systems that don’t influence users’ behavior in some way. Just as architects create spaces for people to live, work, and play in, we’re all “choice architects,” creating mental spaces for ourselves and others to make decisions.
Even decisions as minor as what food we put on what shelf of the fridge can meaningfully impact our eating choices (and those of our children). Like it or not, we’re full-time designers – constant choice architects.
Given that reality, there’s only one reasonable approach: “Let’s decide to be the architects – the masters of our fate.” – Rise Against
The idea of structural problem solving should be intuitive and obvious: if users tend to accidentally close their documents without saving, redesign the interface so that it’s hard to close the document without saving it – what Norman calls a “forcing function.”
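The document-closing example can be sketched in a few lines of code. This is a minimal illustration of the forcing-function idea, not any particular editor's implementation; the class and method names are hypothetical:

```python
# A minimal sketch of a "forcing function": the interface makes the
# error (losing unsaved work) structurally hard to commit.
# All names here are hypothetical, for illustration only.

class Document:
    def __init__(self):
        self.unsaved_changes = False
        self.is_open = True

    def edit(self, text):
        self.unsaved_changes = True

    def save(self):
        self.unsaved_changes = False

    def close(self, confirmed_discard=False):
        # Forcing function: closing with unsaved changes is blocked
        # unless the user explicitly confirms discarding them.
        if self.unsaved_changes and not confirmed_discard:
            raise RuntimeError("Unsaved changes: save first, or confirm discard.")
        self.is_open = False

doc = Document()
doc.edit("draft")
try:
    doc.close()   # blocked: the system won't let the error happen
except RuntimeError:
    doc.save()
    doc.close()   # now allowed
```

The point is structural: rather than exhorting users to remember to save (willpower), the design simply removes the path by which the error occurs.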
Why don’t we use forcing functions more often in our everyday lives? It’s something of a mystery. Part of the reason may be a cultural norm of viewing the “easy way” as less righteous than the “hard way” – for example, as I discuss in the willpower model, the horribly maladaptive “grit” mindset is bizarrely and inexplicably popular right now.
Let’s jump right into the interactions with a practical look at a phenomenal structural problem solving approach: Richard Thaler’s “Save More Tomorrow” plan.
Structural Problem Solving / Choice Architecture x Willpower x Hyperbolic Discounting x Loss Aversion x Activation Energy x Trait Adaptivity x Status Quo Bias x Inversion
In “Poor Charlie’s Almanack” (PCA review + notes), Charlie Munger discusses preventing scurvy on ships as an elementary application of psychological principles. Knowing sailors wouldn’t eat foreign, yucky, vitamin-C-containing sauerkraut if it were forced on them, officers instead pulled the Tom Sawyer trick – painting it as a scarce and desirable item, an officers’ privilege. When sailors were eventually “allowed” to eat it, they ate it.
Munger applauds this approach; Berkshire Hathaway implicitly utilizes structural problem solving and choice architecture.
Many real-world problems, of course, are much larger in scale and non-trivial to approach: for example, there are a lot of psychological phenomena – hyperbolic discounting, loss aversion, activation energy, and status quo bias – that get in the way of people saving for retirement.
It is, of course, worth remembering the models of inversion and trait adaptivity: cognitive biases are not necessarily good or bad. They’re generally-adaptive traits that exist because they’re more good than bad on average, but in certain situations, they can be bad – sometimes very bad.
And yet, by inversion, we can use them to our advantage. Take contrast bias, for instance: we’re often unhappy because we process the world through comparisons – measuring our life against an idealized version of what we want, and feeling let down.
But we can flip this the other way around. In “The Happiness Advantage” (THA review + notes), Shawn Achor asks you to visualize being shot in the arm during a bank robbery. You have two choices. One: you can bemoan, “oh, why me, out of anyone in the bank / in this world” – in which case you’ll be sad. This would not be Munger-approved (he hates self-pity).
The more adaptive option is to use contrast bias by inversion: focus on thinking, “wow, I’m lucky the bullet didn’t hit any major arteries, and I’ll be back on my feet in a few weeks.” You could, paradoxically, feel profoundly grateful – which, indeed, many people do feel after objectively horrible life events, as Megan McArdle explores in her thoughtful book on failure, “The Up Side of Down” (UpD review + notes).
With that premise, how would a smart choice architect go about using structural problem solving and the aforementioned mental models to attack the retirement-savings problem? The answer: use inversion to transform the factors that stop people from saving into factors that keep people saving.
Richard Thaler and Cass Sunstein explore this beautifully in “Nudge” (Ndge review + notes), which provides tons of other examples of structural problem solving that Don Norman appreciated (he gave it a thumbs-up review).
But in my opinion, Thaler’s presentation upon winning the Nobel Prize provides the best, most concise explanation of the thought process. The whole video is worth watching, but here’s a time-synced link to 29:17, when he explains Save More Tomorrow as follows:
“What we decided to do is think about this starting with the psychology and ask, well what is it that’s preventing people from saving enough?
One is self-control problems. We can go out for a fancy dinner tonight… that’s tempting. Whereas saving for retirement, that’s sometime off in the future. And we know people have more self-control for the future than for now: many of us are planning diets… not this week, maybe after the New Year.
The second is loss aversion: people don’t like to see their income go down.
So these three things are preventing people from saving. Let’s flip the problem around and use those to create a plan that will take those weaknesses if you want to call it that, and use them to help.
So the plan we created, we called Save More Tomorrow… to invite people now to save more later, because self-control is easier for later. And particularly, to invite them to save more when they get their next raise, so they won’t see their income go down. That will eliminate the loss aversion.
And then we’re gonna keep that up until they hit some goal, so we’ll get inertia working for us.”
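The escalation mechanics Thaler describes can be sketched as a toy model: commit now to save later; raise the contribution rate only when a raise arrives (so take-home pay never visibly drops and loss aversion never triggers); and stop escalating at a goal rate, letting inertia hold it there. All numbers and names below are hypothetical illustrations, not the actual plan's implementation:

```python
# Toy sketch of the Save More Tomorrow escalation logic.
# Rates are in whole percentage points to keep the arithmetic exact.

def save_more_tomorrow(start_pct, step_pct, goal_pct, raises):
    """Return the contribution rate after each annual pay event.

    raises: list of booleans, one per year, True if a raise arrived.
    """
    rate = start_pct
    history = []
    for got_raise in raises:
        if got_raise and rate < goal_pct:
            # Escalate only alongside a raise: no visible pay cut.
            rate = min(rate + step_pct, goal_pct)
        history.append(rate)
    return history

# Start at 3%, step up 3 points per raise, cap at 12%:
print(save_more_tomorrow(3, 3, 12, [True, True, True, False, True]))
# → [6, 9, 12, 12, 12]
```

Note how each bias is turned from obstacle into ally: present bias (commit later, not now), loss aversion (tie increases to raises), and status quo bias (once at the goal, inertia keeps the saver there).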
Norman’s fingerprints are all over this plan – that’s not to take anything away from Thaler, of course (I think he’s amazing). But you can see why Thaler calls DOET the “breakthrough organizing principle” for Nudge. Both books are indispensable and you should read them – repeatedly.
All in all, just how effective was this? Per the results in the presentation, Save More Tomorrow quadrupled participants’ savings rates, which eventually rose to over 2x those of participants who declined to receive any help.
This is the best proof I can offer that structural problem solving / choice architecture – and more broadly, a thorough understanding of mental models – is the best way to solve hard problems. Munger talks about how you get a “lollapalooza” – a nonlinear, exponential-type response – when you stack mental models. And this is a great example.
Even small annoyances – such as having to spend ten minutes looking through and filling out paperwork – can cause people to forgo major benefits. For example, Sunstein and Thaler observe in Nudge that many seniors weren’t receiving free Medicare benefits they were entitled to – probably in part because some didn’t know about them, but also likely because the Medicare website was, well, let’s just call it “not Don-Norman-approved.”
Sunstein and Thaler themselves are not immune to this phenomenon, as they point out in Nudge. Accordingly, Save More Tomorrow made it easy for employees to enroll.
Contrast the effectiveness of the structural problem solving approach here with the ineffectiveness of the idiotic “grit” approach espoused by so many (see the willpower mental model). We have to be cognizant that we’re all Humans, not Econs, as Thaler’s research (and that of others) demonstrates. Megan McArdle discusses in “The Up Side of Down” (UpD review + notes) how even highly educated, intelligent, high-income professionals can fail to save enough. So why aren’t more companies implementing systems to help them do so?
Again, I can’t recommend “Nudge” (Ndge review + notes) highly enough – it contains plenty of other examples of structural problem solving (what Thaler calls “nudges”). For example, because of the way habit works – see Duhigg’s “The Power of Habit” (PoH review + notes) – it’s easier to remember something we do every day. So patients who, for whatever reason, need to take a pill every two days, or twice a week, should be given a pillbox with placebos for the non-active days.
Patient compliance with medicines is a hard problem, as Megan McArdle points out in the aforementioned “The Up Side of Down” (UpD review + notes). We can keep our planes flying at 30,000 feet, but we can’t get people to take their medicine – a shame, considering that there are many medical conditions we can’t yet treat effectively; we might as well get full mileage out of the treatments we do have. (McArdle’s discussion of the Hawaiian parole system is another great example of structural problem solving.)
So take the mental models here on this site, in the books I recommend, and those you encounter and synthesize on your own, and – using the tools in “Nudge” (Ndge review + notes) and “The Design of Everyday Things” (DOET review + notes) – go out there and solve some hard problems.
Application / impact: structural problem solving / choice architecture eats grit for breakfast, lunch, and dinner. It’s no contest. Whenever you face a hard problem in your life, forget about willpower and instead look for the structural solution that renders willpower unnecessary.
Structural Problem Solving x Dose-Dependency (Nonlinearity) x N-Order Impacts: Why Technotopia Isn’t Always The Answer
Alright, time to burst my own bubble.
“What?!?!?!” I hear you asking. “Isn’t structural problem solving the best thing since sliced bread?”
Well, yes, it is. But you have to be careful.
Let’s go back to Munger:
“What I’m against is being very confident and feeling that you know, for sure, that your particular intervention will do more good than harm, given that you’re dealing with highly complex systems wherein everything is interacting with everything else.”
While I don’t have a full model on complexity / emergence because it’s still an area I’m exploring, part of the challenge of the world is that there are always more consequences and follow-on effects to our actions than we can reasonably predict.
So we have to be careful – judicious – not to be bulls in a china shop. This applies even with structural problem solving. There are a lot of potential examples we could use, but we’ll focus on the area of medicine, which I’ve always found fascinating.
In the aforementioned “The Up Side of Down” (UpD review + notes) – a great book – the thoughtful and analytical Megan McArdle touches on the issue of medical errors (which she’s also written about extensively outside the book). She examines one reason why, like saving for retirement, consistently washing our hands can be so hard to do:
“As I discovered when I myself had to spend ten days administering IV antibiotics at home, the reason that handwashing is so hard to do consistently is that it’s not actually that risky to forgo it. The odds that any one slip will cause an infection are extremely low, well under 1 percent.
And since it’s tedious and often must be done multiple times while touching a single patient, it’s very tempting to skip it sometimes. Over thousands of repetitions, this kills people.
But most of us don’t judge our actions over thousands of repetitions.”
Setting aside the luck, probabilistic thinking, feedback, and salience elements here (which I discuss elsewhere), it’s an interesting challenge. Doctors obviously understand germ theory on an intellectual level – they didn’t always, as David Oshinsky explores in “Bellevue” (BV review + notes) – and no credible doctor today would intentionally forgo washing their hands if they believed doing so posed a real risk.
Enter checklists, explored thoroughly (and fantastically) in Dr. Atul Gawande’s “The Checklist Manifesto” (TCM review + notes). Checklists work very, very well for a specific type of problem, as Gawande explains: they are a structural problem solving device for our faulty memory, ensuring that we do things like “FLY THE PLANE.”
Pilots sometimes forget to do that if they’re distracted by firefighting other errors. Similarly, Gawande’s team found that one of six critical surgery safety steps was missed two-thirds of the time in the operating theater – clearly, a systemic solution was needed, and checklists proved to fit the bill.
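A checklist's job, structurally, is just to compare what was done against what must always be done. A minimal sketch in that spirit (the step names are hypothetical placeholders, not Gawande's actual checklist items):

```python
# A minimal sketch of a checklist as a structural memory aid,
# in the spirit of Gawande's surgical checklist.
# Step names are hypothetical, for illustration only.

CRITICAL_STEPS = [
    "confirm patient identity",
    "confirm surgical site",
    "confirm antibiotics given",
]

def run_checklist(completed_steps):
    """Return the critical steps that were missed, if any."""
    return [step for step in CRITICAL_STEPS if step not in completed_steps]

missed = run_checklist({"confirm patient identity", "confirm antibiotics given"})
print(missed)  # → ['confirm surgical site']
```

The value is not intelligence but reliability: the system, not fallible human memory, guarantees that no critical step silently slips through.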
But what Gawande drives home in this book – a point missed by many, especially in the investing community – is that checklists aren’t the end-all, be-all solution to everything. In fact, Gawande specifically explains (on pages 128 and 120, respectively) that checklists are:
“not comprehensive how-to guides”
And: “Checklists are supposed to turn people’s brains on, rather than off.” – Dr. Atul Gawande
Yet in the real world, they aren’t always used the way Gawande advises – particularly in investing, where there’s an inappropriate fascination with them (given that most investing mistakes are conceptual rather than technical in nature), but even in medicine.
We’ve all had that experience where a customer service rep, working from an overly-stringent policy, refuses to use a little common sense because the policies serve as almost-literal blinders.
Sadly, that can turn up in much more critical situations as well. For example, discussing decision trees and algorithms (disaggregation), Dr. Jerome Groopman notes in the wonderful – and overlooked – “How Doctors Think” (HDT review + notes) that such algorithms:
“quickly fall apart when a doctor needs to think outside their boxes, when symptoms are vague, or multiple and confusing, or when test results are inexact.
In such cases […] algorithms discourage physicians from thinking independently and creatively. Instead of expanding a doctor’s thinking, they can constrain it.”
The same goes if doctors are merely using templates (checklists of a sort) to record symptoms rather than make a diagnosis.
“Electronic technology can help organize vast clinical information and make it more accessible, but it can also drive a wedge between doctor and patient […]
it also risks more cognitive errors, because the doctor’s mind is set on filling in the blanks on the template.
He is less likely to engage in open-ended questioning, and may be deterred from focusing on data that do not fit on the template.”
I’ve observed similar phenomena in my own work as an investor: when interviewing management teams, if I’m too focused on asking and answering a preset list of questions, I may miss important insights that come up organically in the conversation. I find that there’s a happy medium between being prepared and ensuring I touch on all important issues, while also leaving room for “spitballing” on interesting topics and engaging in open-ended conversation.
Of course, Groopman’s book is a decade old at this point, and technology has progressed quite far since then – and there are counterexamples. Nate Silver’s “The Signal and the Noise” (SigN review + notes) does a great job of examining several fields, such as meteorology, where computing power and human brainpower proved synergistic, with humans and computers together making more accurate forecasts than either could separately.
Nonetheless, hopefully this drives home the idea that overconfidence is dangerous, and that careful consideration of n-order impacts is required whenever we attempt to change the behavior of a system in any way.
A/B testing and control groups would seem to come into play here.
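Before rolling a structural intervention out system-wide, a pilot with a control group lets us check whether the effect is real or just noise. Here is a minimal sketch using a two-proportion z statistic; the enrollment numbers are entirely hypothetical, chosen only to illustrate the comparison:

```python
# Minimal sketch of comparing an intervention group against a control
# group, using a two-proportion z statistic. Numbers are hypothetical.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z statistic for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical pilot: 780 of 1,000 treated employees enrolled in the
# savings plan, vs. 610 of 1,000 in the control group.
z = two_proportion_z(780, 1000, 610, 1000)
print(round(z, 2))  # a z well above 2 suggests a real effect, not noise
```

This kind of small-scale test is the humility-preserving step: if the intervention's effect can't be distinguished from the control group's baseline, we haven't earned the confidence to change the whole system.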
Application / impact: more of a good thing is not always better; all actions have unintended consequences and the bigger the scope of our changes, the more likelihood there is that we’ll come up against something meaningful we didn’t anticipate. Humility and caution are the name of the game.