If this is your first time reading, please check out the overview for Poor Ash’s Almanack, a free, vertically-integrated resource including a latticework of mental models, reviews/notes/analysis on books, guided learning journeys, and more.
Bottleneck / Limiting Reagent Mental Model: Executive Summary
If you only have three minutes, this introductory section will get you up to speed on bottlenecks, limiting reagents, and weakest links.
The concept in one sentence: the performance of an entire system is often limited by its weakest or most constrained part; if you have only one measly tablespoon of flour, you can’t bake a pizza even if you have a fridge full of heirloom tomatoes and farm-fresh mozz.
Key takeaways/applications: in situations where bottlenecks apply, you can spend as much time as you want improving non-bottlenecks and it won’t improve system performance. Following the 80/20 principle, focusing on unconstraining bottlenecks/limiting reagents or tackling the weakest links usually has a very high return on time and effort invested.
In real-life terms, if it’s cold outside and you’re already wearing a jacket, gloves and a hat are the next smart move. For many technically skilled people, empathic communication is often the bottleneck to progressing in their careers.
The captain has turned on the Fasten Seat Belt sign. Weak links are of particular interest in structural engineering: a structure that is 99.99% sound can still be toppled by the 0.01% that isn’t.
As discussed on pages 178 – 180 of Henry Petroski’s wonderful To Engineer is Human, for example, the de Havilland Comet aircraft suffered several mid-air disintegrations due to weakness around rivet holes in window openings. (See the TEIH review + notes, and the margin of safety mental model.) Once these areas were redesigned for strength, the plane became structurally sound.
Wait, there’s only one gas station on the Dalton Highway? Limiting reagents are also a core concept in chemistry: the potential output of a chemical reaction is always limited by the most constrained input.
For example, a car’s internal combustion engine can be thought of as a reaction between a finite supply of hydrocarbons from your fuel tank (gasoline/diesel, the limiting reagent) and the effectively unlimited oxygen in the air (the “excess” reagent). No matter how much oxygen is available, if you run out of gas, your car won’t go.
For want of a nail. Similar premises apply in most manufacturing environments; industrial companies often focus their capital expenditures on “debottlenecking” operations, because if a “widget” is made of one part each A, B, and C, it doesn’t matter if you have the capacity to make fifty parts each of B and C if you can only make ten parts of A.
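The “output is capped by the scarcest input” arithmetic above can be sketched in a few lines of Python. This is a minimal illustration, not anyone’s production planning code; the part names and capacities are hypothetical, just mirroring the widget example:

```python
# Hypothetical daily production capacity for each part, echoing the
# example above: plenty of B and C, but only ten of A.
capacity = {"A": 10, "B": 50, "C": 50}

# Parts required per finished widget (one of each, per the example).
parts_per_widget = {"A": 1, "B": 1, "C": 1}

# Maximum widget output is set by the most constrained input: for each
# part, compute how many widgets its supply supports, then take the min.
max_widgets = min(capacity[p] // parts_per_widget[p] for p in parts_per_widget)

print(max_widgets)  # prints 10 -- part A is the bottleneck
```

Note what this implies for capital spending: doubling the capacity for B or C changes nothing, while every extra unit of A capacity (up to 50) adds a whole widget of output. That asymmetry is why “debottlenecking” offers such a high return on effort.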
Sometimes, the lack of one technology can also serve as a “bottleneck” for another. Gregory Zuckerman’s excellent The Frackers (FRK review + notes) provides a great example: hydraulic fracturing, or “fracking” – a technique that had been around for quite a while – couldn’t reach its full potential until, in the late ‘90s and into the 2000s, oilfield technologists began to perfect the art of drilling long horizontal wells through thin layers of hydrocarbon-rich shale. Traditional vertical wells, by contrast, intersected the oil-containing zones for only a small portion of their length.
If this sounds interesting/applicable in your life, keep reading for deeper understanding and unexpected applications.
However, if this doesn’t sound like something you need to learn right now, no worries! There’s plenty of other content on Poor Ash’s Almanack that might suit your needs. Instead, consider checking out our learning journeys, our discussion of the base rates, incentives, or probabilistic thinking mental models, or our reviews of great books like Superforecasting (SF review + notes), Made in America (WMT review + notes), and Uncontainable (UCT review + notes).
A Deeper Look At The Bottlenecks / Limiting Reagent / Weakest Link Mental Model
“so… I’ve been sleeping on your floor.
And I hate your floor.
but I’ll sleep there anyways.”
– Microwave, “Trash Stains” (off “Stovall”)
I think the concept of a bottleneck / limiting reagent / weakest link is generally pretty self-explanatory, so rather than waste your time with a more in-depth explanation of it alone, we’ll dive right into how it interacts with other mental models in some unusual and interesting ways.
Bottlenecks x Schema
Most people think of bottlenecks and limiting reagents in purely physical terms; i.e., the literal inability of a highway interchange to accommodate more cars during rush hour (an example we’ll touch on momentarily).
In contrast, I think some of the most interesting bottlenecks are actually mental – cases of our schema (the often-subconscious worldview that acts as a lens through which we perceive our world) getting in the way of our ability to accurately understand and respond to reality.
To quote a book I loved growing up, “if you are facing a contradiction, check your premises.”
Here are a couple of great real-life examples from people far smarter than me. In American scientists’ efforts to conquer polio, there were some literal technological bottlenecks: viruses such as polio couldn’t be fully studied until scientists learned how to grow them in the lab.
Since viruses require living tissue to grow in, scientists first had to learn how to keep animal or human cells alive in petri dishes (so they could then infect them with viruses).
However, even after it had become technologically feasible to grow poliovirus cultures – i.e. the technological constraint had been de-bottlenecked – what I call a schema bottleneck prevented scientists from progressing with research.
As discussed in Meredith Wadman’s The Vaccine Race (TVR review + notes) and David Oshinsky’s Polio: An American Story (PAAS review + notes), early on in the process of studying polio, a respected scientist had made two critical mistakes.
First, by testing poliovirus on a monkey incapable of contracting it orally (i.e. administered through the mouth), he came to the overly broad and incorrect conclusion that polio could not propagate if ingested.
Second, by utilizing (and, in some senses, creating, via trait adaptivity) a strain of poliovirus that, it turned out, multiplied only in nerve tissue rather than in other tissue types like muscle, he dramatically slowed efforts to create a vaccine: nerve tissue can cause allergic reactions if used in vaccines, so a vaccine could only be produced from poliovirus grown in other types of tissue.
Other scientists took his work so seriously that they never bothered to reconsider whether a different strain of polio could be grown outside nerve tissue, or whether a vaccine (with antibodies circulating in the bloodstream) would be a feasible solution (which it might not be if polio went straight to the nervous system without ever hitting the bloodstream).
Those two books are chock-full of other great examples of schema bottlenecks, including regulators’ and American scientists’ myopic refusal to consider using human diploid cells rather than the traditionally-used monkey kidney cells for vaccine production.
But they’re hardly the only examples; in fact, examples are so prevalent that the famous physicist Richard Feynman referred to humanity’s unique ability to pass on this sort of mistaken knowledge from generation to generation as a “disease” (although, of course, the ability to pass on knowledge is usually adaptive). We discuss this in the culture / status quo bias mental model.
In fact, another example – one of my favorites – for which Feynman and some other even smarter scientists were present is relayed in Richard Rhodes’ The Making of the Atomic Bomb (TMAB review + notes), the definitive book about the Manhattan Project during World War II.
The challenging process of uranium enrichment was a literal, physical bottleneck for bomb production, but debottlenecking uranium production was itself slowed by a mental schema bottleneck. As Rhodes eloquently puts it, the scientists:
“thought of the several enrichment and separation processes as competing horses in a race. That had blinded them to the possibility of harnessing the processes together.”
The scientists and engineers were focused on using a single process to enrich uranium all the way from its natural state to the state needed for an explosive device. Failing to apply multicausality via inversion, they never thought of chaining two separate processes: one to quickly and partially enrich a large quantity of uranium, and another to complete the enrichment.
According to Rhodes, the technical director of the Manhattan Project, Robert Oppenheimer (affectionately, “Oppie”), referred to this schema bottleneck as a “terrible scientific blunder,” and Leslie Groves, who oversaw the entire project, called it “one of the things I regret the most.”
If nuclear physicists and world-famous virologists can allow their worldview to bottleneck their effectiveness, it seems pretty likely that you and I do that all the time (I can speak for myself, at least – my entire focus on mental models is, in a sense, a way to debottleneck my schema.)
Application/impact: how can we apply the interaction of schema and bottlenecks in our lives? Pretty simple: if something’s not working, every so often, stop to think about the “implicit” premises and beliefs that you don’t necessarily consciously consider on a daily basis, but that apply in a given situation.
Bottlenecks x Hyperbolic Discounting x Margin of Safety x N-Order Impacts
One of the most “salient” or visible bottlenecks most people encounter is peak demand causing long queues, whether that’s rush hour in a major city, 7:00 P.M. on a Friday at a nice restaurant, or summer vacation at Disney World.
Let’s focus on the example of traffic on a highway (train routes on a subway might work as well). Most laypeople sitting there twiddling their thumbs might wonder “well, why can’t they just add more lanes/trains?”
Traffic engineers would tell you that it’s not that simple: you have to consider the n-order impacts (i.e. the “and then what” – the aftereffects of your decision once the world starts to respond to whatever changes occur). It’s sometimes considered a “universal law” that adding more lanes to highways doesn’t relieve congestion – traffic engineers call this “induced demand” – because it just incentivizes more people to use the highway: people who might have left the office half an hour later, or who might have chosen a job closer to home to avoid commuting, may instead take advantage of all that newly-created slack.
This argument doesn’t work forever or everywhere, of course; if you build a thirty-lane highway through the middle of rural Montana, it’s unlikely that a lot of traffic will magically appear. But it’s still worth considering: in both manufacturing environments and our personal lives, a bottleneck may still be a bottleneck after accounting for n-order impacts. If your bottleneck is a lack of time, and you want to free up some time so that you can fill it up with a bunch of other stuff, you might find yourself in the same place in six months… so perhaps the real key to freeing up your schedule is learning how to say no.
The flip side, considering margin of safety, is that something that might not seem to be a bottleneck right now could become a bottleneck if something else happens. For example, maybe you’re running on fumes as you drive down the highway, so you’re scanning the horizon for gas stations, and you fail to notice a pothole that takes out your front right tire.
Well, all of a sudden, the bottleneck for getting home from your road trip isn’t a lack of gas: it’s the lack of a tire. Hope you packed a spare. In the aforementioned To Engineer is Human (TEIH review + notes), Henry Petroski discusses the structural engineering concept of “alternate load paths”: a good structural design takes into account the potential impact of some part failing, providing redundancy so that other parts of the structure can pick up the slack if something goes wrong. This is discussed in more depth in the margin of safety mental model.
Similarly, anyone who’s been through a breakup knows that emotional stability can often serve as a major bottleneck for productivity – and yet we often don’t do enough to prepare ourselves for potential emotional instability thanks to hyperbolic discounting (our tendency to prioritize the present over the future). By being proactive rather than reactive, we can identify potential future bottlenecks and “debottleneck” them before they ever become issues that affect our quality of life.
Application/Impact: Don’t just look at bottlenecks as they are today, but ask what happens after you unconstrain a bottleneck.
Further Reading on Bottlenecks:
Mental Models related to bottlenecks / limiting reagents / weakest links:
As discussed, some of the mental models that closely tie in to bottlenecks are schema (our worldview), n-order impacts (the “what next?”), margin of safety (since the world is dynamic and bad things can happen), and incentives (which can often serve as a bottleneck in and of themselves). Unconstraining bottlenecks is a good example of the 80/20 / Pareto principle.
Books related to bottlenecks / limiting reagents / weakest links:
The Goal by Eli Goldratt (Goal review + notes) is the classic treatise on bottlenecks that I read and loved in business school. Written as a novel about a plant manager who seeks the advice of a consultant who is more or less Goldratt himself, it’s an engaging and thought-provoking journey through how the concept of bottlenecks (and other mental models!) can apply in both business and personal contexts.
Bottlenecks are pretty much everywhere and you’ll start seeing them all the time if you look for them. However, some of my favorite examples of bottlenecks are in the books below: