If this is your first time reading, please check out the overview for Poor Ash’s Almanack, a free, vertically-integrated resource including a latticework of mental models, reviews/notes/analysis on books, guided learning journeys, and more.
Opportunity Costs / Tradeoffs Mental Model: Executive Summary
If you only have three minutes, this introductory section will get you up to speed on the tradeoffs / opportunity costs mental model.
The concept in one quote: “One is obliged sometimes to give up some smaller points in order to obtain greater.” – Benjamin Franklin
The concept in one sentence: whenever we spend our money – or time – on one thing, we’re implicitly giving up our ability to spend that same money or time on something else – an “opportunity cost” or “tradeoff.”
Key takeaways/applications: As we’ll discuss, people often fail to consider opportunity costs in both their personal and professional lives; behavioral economics research indicates that many people fail to mathematically understand the concept. Being more aware of opportunity costs and tradeoffs can help us make more adaptive decisions and get more utility out of our lives with less investment of time and money.
Three brief examples of opportunity costs / tradeoffs:
The genesis of this site. Books that are concise and engaging provide you with more utility per unit of time invested than books that are verbose and dull/academic/overly detailed. The whole point of Poor Ash’s Almanack is to help you maximize your return on time invested by allowing you to figure out your opportunity cost of reading a given book on a topic – we’ve already found many of the best ones for any given mental model or field of study.
If you looked like Barbie, you’d shatter your tibia if you tried to walk. Tradeoffs are prevalent throughout biology. For example, in “Other Minds” (OthM review + notes), Peter Godfrey-Smith notes that by giving up their shells, octopuses gained tremendous mobility – an octopus can
“squeeze through a hole about the size of its eyeball.”
But obviously, the lack of a hard shell leaves them vulnerable to predation.
Similar tradeoffs are present with regard to intelligence: as Jennifer Ackerman notes in the delightful “The Genius of Birds” (Bird review + notes), some lower-IQ “precocial” birds can fly from the nest within days of hatching.
Like baby humans, though, the “altricial” young of highly intelligent bird species like New Caledonian Crows require a long childhood (and perhaps a long, trying adolescence as well) for their brains to fully develop. There’s obviously a long-term payoff, but it doesn’t come free.
And intelligence itself is a tradeoff – a point Ackerman returns to toward the end of the book.
Balancing on the tip of a needle. Perhaps no topic highlights dose-dependency and tradeoffs quite like the story of vaccine development for polio, overviewed phenomenally in David Oshinsky’s “ Polio: An American Story” ( PaaS review + notes) and, to a lesser extent, Meredith Wadman’s “ The Vaccine Race” ( TVR review + notes), which focuses more on what happened after polio.
Whether a vaccine is killed or live-attenuated, you have to keep the vaccine strong enough to provide protection without risking infection. There was, too, a tradeoff between using killed or live-weakened vaccines: Salk’s killed vaccine for Type 1 used the extremely virulent “Mahoney” strain, which caused better antibody production but also meant that, if not properly killed, the residual live virus in the vaccine itself would be quite dangerous.
On the other hand, the attenuated live-virus vaccines later developed used less virulent strains – but, obviously, ran a greater risk in the first place of causing polio if the strain wasn’t attenuated enough.
“not a single case of polio in the United States had been attributed to the Salk vaccine. No one could question its safety, if properly prepared.”
In contrast, the more powerful Sabin vaccine – even if prepared properly – caused about one case of polio per million doses.
So, with polio from vaccination then a greater risk than wild polio, the CDC started switching from Sabin back to Salk in the 1990s, and as of 2000, only the completely safe Salk vaccine is used.
If this sounds interesting/applicable in your life, keep reading for unexpected applications and a deeper understanding of how this interacts with other mental models in the latticework.
However, if this doesn’t sound like something you need to learn right now, no worries! There’s plenty of other content on Poor Ash’s Almanack that might suit your needs. Instead, consider checking out our learning journeys, our discussion of the inversion, Bayesian reasoning, or sleep / rest mental models, or our reviews of great books like “Onward” (O review + notes), “Scale” (SCALE review + notes), or “Nudge” (Ndge review + notes).
Opportunity Costs / Tradeoffs: A Deeper Look
“Berkshire Hathaway is constantly kicking off ideas in about two seconds flat. We know we’ve got opportunity X, which is better than the new opportunity. Why do we want to waste two seconds thinking about the new opportunity?
… the right way to make decisions in practical life is based on your opportunity cost. When you get married, you have to choose the best [spouse] you can find that will have you. The rest of life is the same damn way.”
– Charlie Munger (via ValueInvestingWorld.com)
The above is pretty much all you need to know about return on investment: calculate it however you want, but compare it to your next-best opportunity, and if it’s not better than that, don’t waste time on it.
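Munger’s rule reduces to a one-line comparison. Here’s a minimal sketch in Python – the function name and the return figures are hypothetical illustrations, not anything Munger or the source specifies:

```python
def worth_pursuing(candidate_return, alternative_returns):
    """Munger-style screen: a new opportunity is only worth time
    if it beats the best alternative already on the table."""
    opportunity_cost = max(alternative_returns)  # your next-best use of the same time/capital
    return candidate_return > opportunity_cost

# Hypothetical numbers: a new idea offering 8% when you already
# have ideas returning 6% and 12% – the 12% makes the 8% a pass.
print(worth_pursuing(0.08, [0.06, 0.12]))  # False
print(worth_pursuing(0.15, [0.06, 0.12]))  # True
```

The point of the sketch is the shape of the comparison, not the arithmetic: the benchmark is never zero, it’s your next-best option.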
This applies to any sort of investment: purchases as a consumer, investments as a portfolio manager, or even the friends you hang out with. Spending time with one person or group of people inherently constrains your ability to spend time with another person or another group of people.
Pick your friends well, especially if you understand the power of social proof: if you hang around worthless losers and moral degenerates long enough, well… psychologists, and Charlie Munger, suggest that’s exactly what you’ll become. More of Munger’s wisdom is, of course, available in “ Poor Charlie’s Almanack” ( PCA review + notes).
Opportunity costs are a phenomenally simple concept to understand: just think of a kid “saving room for dessert.” Setting aside a brand-spanking-new scientific study that finds that boys actually have a second stomach set aside for desserts, the general consensus is that our appetites are only so large. If we want to save room for a piece of pie, we’re gonna have to take one less scoop of mashed potatoes.
And yet in most areas of life, people behave as if they’re blithely ignorant of opportunity costs. Now, of course, everyone smart and thoughtful enough to be thinking about mental models probably doesn’t do that… but all of us need a refresher from time to time.
Before we get into some of the more specific interactions here – as well as those discussed elsewhere on the site (like in the margin of safety mental model) – I’d like to reframe tech-company “minimum viable product” mentality as an exploration of opportunity costs and perfectionism.
Going back to David Oshinsky’s wonderful “Polio: An American Story” (PaaS review + notes), one of the surprising takeaways is that Albert Sabin was truly a horrible person; for reasons that appear to amount mostly to ego, he did everything in his power to stand in the way of the Salk vaccine (which ended up saving countless thousands of kids’ lives).
At one juncture, it was clear that Salk’s vaccine could be improved upon, but the rational attitude was taken by Tom Rivers and Joe Smadel, who argued something was better than nothing and that perfectionism slows progress. Smadel asked:
“Shall we use what we have now, or shall we wait an indefinite period […] until we have something which we think is perfect at that time, and then use it?”
On page 92 of “ The Happiness Advantage” ( THA review + notes), Shawn Achor notes that some lawyers start thinking about their kids’ baseball games as foregone billable hours… I’ve confirmed this to be true, by the way.
Be mindful to avoid hyperbolic discounting and evaluate opportunity costs over a global rather than local time horizon.
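To see why hyperbolic discounting fights against a global time horizon, here’s a toy preference-reversal calculation. The 1/(1 + k·delay) form is the standard textbook model of hyperbolic discounting; the dollar amounts, the weekly delay units, and the k value are all made up for illustration:

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Perceived present value under simple hyperbolic discounting,
    with delay measured in weeks and k an illustrative impatience parameter."""
    return amount / (1 + k * delay)

# $100 now vs. $120 in one week: the near reward feels bigger...
print(hyperbolic_value(100, 0) > hyperbolic_value(120, 1))    # True

# ...but push both a year into the future and the larger, later
# reward wins – the preference reversal that makes locally
# optimal choices diverge from globally optimal ones.
print(hyperbolic_value(100, 52) > hyperbolic_value(120, 53))  # False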
We’ll get to the local vs. global optimization model later, but if you want a base rate on the opportunity costs that people regret at the end of their lives – it’s not the money they could’ve made or the fame they could’ve achieved.
It’s the time they didn’t spend with their families, friends, and loved ones. It’s the things they didn’t do for themselves that they said they’d get to someday: the trips, the hobbies, the experiences.
Don’t believe me? Ask around. Or read Karl Pillemer’s “30 Lessons for Living” (30L review).
Opportunity Costs x Social Connection x Social Proof: Social Media and Cal Newport’s Pushback Against “Any-Benefit” and Shawn Achor’s Stunningly Insightful Takedown of Reading The News
One example of unconsidered opportunity costs that’s worth highlighting – briefly – because it’s so prevalent: the opportunity costs associated with social media.
Cal Newport’s “Deep Work” (DpWk review + notes) is pretty anti-social-media, and for good reason: social media is engineered to be addictive, some research indicates that its use makes us feel worse, and much of it is low-utility. It’s also getting worse over time: older platforms like Facebook and LinkedIn at least encourage and enable long-term relationships with friends or colleagues we might not otherwise communicate with as frequently.
On the other hand, platforms like Twitter discourage deep, thoughtful content and interaction by prioritizing brief and recent information. Snapchat is the worst of all: unless you’re a spy, messages that self-destruct after a few seconds cannot possibly have any lasting value. If they were worth something, you’d want to save them. (See the discrete vs. recurring payoffs section of the utility mental model.)
Yet social media obviously has power; we all know about things going viral, we all know that many people like to engage with content creators via social media, etc. (I don’t use Twitter, but I do occasionally check the Twitter feeds of some of the Dallas Cowboys staff writers, for example, as there’s often content on there that doesn’t make it onto the main site.)
So neither I – nor Newport – would allege that social media has no benefit. But the problem is that many people, when it comes to technology, insist on using what Newport calls an “any-benefit” approach: adopting a tool if it offers any conceivable benefit, without ever weighing that benefit against the costs.
Newport’s point – a good one, worth reading and considering in depth – is that there are not only short-term opportunity costs due to the distraction, but long-term ones as well. Newport argues that via a feedback / habit loop, becoming hooked on instant gratification erodes our ability to do real, powerful long-term work.
Indeed, recognition of the opportunity costs of distracting technology is not new: Richard Feynman, recounting a story from some 75 years ago in “The Pleasure of Finding Things Out” (PFTO review + notes), mentioned:
“the disease with computers is you play with them. They are so wonderful.”
I discuss the short-term opportunity costs of attentional switching, by the way, in the memory mental model, citing Newport’s work as well as that of Shawn Achor (“The Happiness Advantage” – THA review + notes) and Don Norman (“The Design of Everyday Things” – DOET review + notes).
Shawn Achor makes a very similar point (to Newport’s) in “Before Happiness” (BH review + notes), pages 156 – 158 of which are stunningly insightful and definitely in my top-5 for “most useful book pages ever.”
Achor categorizes various types of information by utility and notes that in many cases, we’re consuming information that either has low utility, no utility, or even negative utility. He explains why most short-cycle news fits the bill: it won’t change our behavior, it doesn’t have a long-term payoff, and it doesn’t make us any better off.
So, we should eliminate it – and instead invest the newfound free time in something that does provide utility.
In other words: there’s an opportunity cost to being up on the news that nobody considers. We’ll explore this in more depth in a professional context in the next section.
Application / impact: just because something’s new and cool and everyone uses it doesn’t mean you have to; you need to weigh the benefits against the costs.
Opportunity Costs x Correlation vs. Causation x Precision vs Accuracy (x Local vs. Global Optimization x Utility x Sleep/Rest)
Local vs. global optimization, which has its own model page, can be thought of as a special case of the tradeoffs mental model: tradeoffs with respect to time. Sometimes, to get what you want in the long term, you have to make uncomfortable decisions in the short term; conversely, sometimes short-term happiness requires foregoing long-term benefit.
An understanding of the local vs. global optimization model is critical because it allows us to easily frame – and evaluate – whether advice we receive makes sense. For example, elsewhere on this site (the willpower mental model), I discuss why “ grit” is lunacy.
Similarly, in the notes on Alex Soojung-Kim Pang’s “Rest” (Rest review + notes), I explore why it’s idiotic that Pang suggests readers attempt to boost their performance on certain tasks by intentionally depriving themselves of end-of-cycle REM sleep (the natural consequence of waking up earlier). Whatever short-term benefits may or may not exist (and they’re likely to be few), they pale in comparison to the massive short-term and long-term ramifications overviewed in Dr. Matthew Walker’s “Why We Sleep” (Sleep review + notes).
One topic that I think is worth diving into here is precision vs. accuracy (discussed in more depth in the product vs. packaging mental model), and how it interacts with tradeoffs, utility, and correlation vs. causation.
To briefly level-set for anyone who isn’t familiar with the terms, “precision” generally refers to the level of detail or the number of decimal points involved in an analysis, while “accuracy” generally refers to whether or not the analysis gets it right.
Precision is important in many fields (rocketry, medicine), but here we’re talking about business. For example: the classic McKinsey goof wherein they predicted there would never be a meaningful market for cell phones in America was probably very precise, with lots of detailed analysis. It was also totally inaccurate.
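The distinction can be made concrete with a quick calculation. The numbers below use the frequently cited version of the cell-phone story (a forecast on the order of 900,000 subscribers vs. an eventual market of over 100 million), but treat them purely as illustration:

```python
# Precision vs. accuracy: a precise-sounding forecast can still be
# wildly inaccurate, while a vague round number can be roughly right.
# Figures are illustrative, per the often-cited cell-phone story.
actual = 109_000_000        # illustrative eventual subscribers
precise = 900_000           # detailed, precise-sounding forecast
coarse = 100_000_000        # "about a hundred million" – vague but close

def relative_error(forecast, actual):
    """Fraction by which a forecast misses the actual outcome."""
    return abs(forecast - actual) / actual

print(f"precise forecast off by {relative_error(precise, actual):.0%}")  # 99%
print(f"coarse forecast off by {relative_error(coarse, actual):.0%}")    # 8%
```

Decimal points measure precision; relative error measures accuracy, and the two are entirely independent.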
As for correlation vs. causation, most of y’all probably already understand the difference, but I never get tired of the following quote from Nate Silver’s thoughtful “The Signal and the Noise” (SigN review + notes): “Ice cream sales and forest fires are correlated because both occur more often in the summer heat. But there’s no causation; you don’t light a patch of the Montana brush on fire when you buy a pint of Haagen-Dazs.” – Nate Silver
The general consensus among many smart thinkers – including Charlie Munger – is that precision usually serves to impress other people, and often correlates poorly with the accuracy of the analysis.
But in the interests of not being overconfident or stuck to an ideology, I was interested to see a contrasting perspective in Philip Tetlock’s wonderful “ Superforecasting” ( SF review + notes). It’s a great book on the whole, with one caveat I’m about to discuss.
On pages 144 – 146, Tetlock discusses how “superforecasters” who made the most accurate predictions were also the most precise. If you haven’t read it yet, you might consider buying it and reading it before reading the following discussion, as it’ll make a lot more sense that way, and Tetlock has lots of really important points.
Anyway. In the book, Tetlock notes the following:
“ordinary forecasters were not usually that precise. Instead, they tended to stick to the tens […] 30% likely, or 40%, but not 35%, much less 37%.
Superforecasters were much more granular […] the tournament data [… shows…] that granularity predicts accuracy.”
Elsewhere, Tetlock advocates for precision in terms of timelines:
“a forecast without a time frame is absurd.”
After carefully reading his book, considering the methodology of his forecasters, and contextualizing that information with everything I’ve done and everything I’ve read, I think Tetlock’s wrong.
In fact, I think there are actually two correlation vs. causation issues here. And they’re important to analyze because the process of being precise, in the real world, necessitates making tradeoffs that I don’t believe are favorable – i.e., the value of precision in many business applications is not sufficient to offset the opportunity cost of time spent elsewhere, which would generate more utility.
Let’s start with the first correlation vs. causation issue: does attempting to make a precise, decimal-point forecast cause accuracy to increase for Tetlock’s forecasters, or is precision simply an n-order impact ( unintended consequence) of another underlying process, thus being correlated with accuracy?
I’d argue for the latter. One of Tetlock’s major points is that “superforecasters” tend to take the MBB “MECE” approach (discussed in the disaggregation mental model), building a decision tree and summing up a set of scenario-weighted conditional probabilities.
For example, one of Tetlock’s forecasting questions involved the likelihood of some agency finding polonium residue in the forensic evidence of some dated assassination. A superforecaster would set out to analyze that question using a process like the following (in more detail – this is just a directional example):
The probability of the forensic evidence being tested for polonium is 70%, and the probability of it not being tested is 30%.
IF it is tested, the probability of Israel having used polonium is 60%, and the probability of them not having used it is 40%.
IF Israel used it, the probability of the residue still being detectable is 85%.
So the total probability of polonium being found is 0.7 x 0.6 x 0.85 = 0.357, i.e., 35.7% (call it 36%).
Those are dumb fake numbers for illustration. The point is that the process of building a decision tree sort of necessarily leads you to single-digit, or even decimal-point, precision, even if you aren’t intentionally trying to be precise.
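That falling-out-of-the-process precision can be sketched in a few lines, using the same fake branch probabilities from the example above:

```python
# Multiplying conditional probabilities down one branch of a decision tree.
# Same dumb fake numbers as the example above – purely illustrative.
branch = [0.70, 0.60, 0.85]  # P(each step, given all the steps before it)

total = 1.0
for p in branch:
    total *= p

# Decimal-point precision falls out of the process even though
# nobody set out to forecast "35.7%" rather than "36%".
print(f"{total:.1%}")
```

Three round numbers in, one "granular" number out: the precision is a byproduct of the tree, not a goal anyone aimed at.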
That doesn’t mean, of course, that shooting for decimal-point precision – for example, in a valuation model, by trying to forecast every variable precisely over a long time horizon – is going to get you any closer to the right result. It’s just that in this case, precision was a function of the right thinking process, so it ends up being correlated with the right answer, but it’s not the cause.
That’s the first correlation vs. causation issue. Even if you’re not convinced, there’s a second (and then we’ll get to tradeoffs). Jordan Ellenberg’s “ How Not To Be Wrong” ( HNW review + notes) makes a point that is very infrequently made – that the marginal utility of gathering extra data can be low if the new data doesn’t tell you anything.
Ellenberg uses (sort of) the example of certain body features being correlated to one another: if you’re taller, you probably have longer arms too, so knowing that you have long arms, given that we already know that you’re tall, doesn’t really tell us very much new about you.
Phil Rosenzweig’s “The Halo Effect” (Halo review + notes) makes similar points in a completely different context, examining why storytelling and other fallacies can lead to business case studies being wildly misinterpreted.
For example, Rosenzweig discusses how observers often mix up direction of causality… for example, given that growth and profitability cover a lot of flaws, employees at companies that are doing well are often more satisfied than when things aren’t going well – so which causes which?
To extend Rosenzweig’s point: if a company’s financials are doing well and everyone’s getting bonuses ( incentives) and nobody is feeling much stress, then employees will probably be pretty happy, too.
Conversely, if a company’s financials are struggling and nobody’s getting a lot of bonuses, everyone has to do more work because of belt-tightening, and there’s generally more pressure and stress, employees are probably going to be less happy.
The point isn’t that employee engagement is irrelevant – far from it. See, for example, Kip Tindell’s “ Uncontainable” ( UCT review + notes) – The Container Store built a difficult-to-replicate business on the foundation of empathy and employee engagement.
In some cases, looking at things too long can actually hurt you – in Jerome Groopman’s “ How Doctors Think” ( HDT review + notes), for example, he notes that after 38 seconds, radiologists begin to see things on X-rays that aren’t there.
Where’s the opportunity costs angle here? Well, the final piece of Tetlock’s recommendation is the importance of a specific time-horizon, and frequent updating.
On time horizons, for example, Tetlock discusses the accuracy of one prediction, on:
“Will Italy restructure or default on its debt by 31 December 2011?”
This obviously has big implications if, say, your job is to price short-term credit default swaps on Italian debt – but for a generalist, it’s probably less important if Italy defaults on its debt by December 31st, or sometime the following month, or even sometime the following year. It would be a bit of a Pyrrhic victory if your “prediction” was right, but the bond defaulted the next day or month and you lost all your money anyway.
Moreover, the utility of making such a prediction precisely has to be weighed against the opportunity cost of using that time to make less precise, but more useful predictions about other things – for example, if the yield to maturity on Italian debt is only modestly higher than the yield to maturity on some other debt elsewhere that you feel has a much lower chance of defaulting (without a lot of analysis required), then why bother with the Italian debt at all?
Half a world away, in “ The Landscape of History” ( LandH review + notes), historian John Lewis Gaddis makes a very similar point about detail: the deeper you go, the less ground you can cover. So it’s a tradeoff between going very deeply into one topic or time horizon, and going into more moderate detail on many more.
Tetlock notes that many superforecasters use Google Alerts to follow new data points closely; one political scientist who was invited to participate dismissively called participants “unemployed newsjunkies.”
That guy’s attitude is, in some senses, appropriate – not to take anything away from the superforecasters, but there’s a reasonable utility point to be made here. Take one of the tournament’s questions on melting Arctic sea ice, for example: it’s probably less relevant what the exact measurement of sea ice on any given day is (unless you’re trying to navigate a boat through the Arctic, I suppose) and more relevant whether or not sea ice is melting over the long term (the base rate) and what the consequences of that for society might be. (I haven’t researched climate science whatsoever and am thus unqualified to have an opinion.)
Tetlock does provide some caveats at the bottom of the page, but he doesn’t really discuss the utility or opportunity cost angle with regard to precision or frequent updating here: the superforecasters were optimizing for the lowest Brier score (i.e., the most accurate set of decisions) over a given set of problems.
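For reference, a minimal binary Brier score looks like the following. (Tetlock’s tournament actually used a multi-category variant that ranges from 0 to 2; this is the simpler two-outcome form, and the forecasts below are made up.)

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    0.0 is a perfect score; constant 50/50 hedging earns 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up forecasts: a decisive, well-calibrated forecaster vs. a fence-sitter.
outcomes = [1, 0, 1, 1]
decisive = [0.9, 0.2, 0.8, 0.7]
fence    = [0.5, 0.5, 0.5, 0.5]

print(round(brier_score(decisive, outcomes), 3))  # lower is better
print(brier_score(fence, outcomes))
```

Note what the metric rewards: confident, granular probabilities on whatever questions happen to be asked, regardless of whether those questions matter.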
But of course, that’s an artificial set of constraints, and optimizing the “model” doesn’t always mean optimizing the real-world utility. Think back to page 128 of The Signal and the Noise (SigN review + notes), where Nate Silver quotes Dr. Bruce Rose, the principal scientist at The Weather Channel:
“the models typically aren’t measured on how well they predict practical weather elements. It’s really important if […] you get an inch of rain rather than ten inches of snow.
That’s a huge [distinction] for the average consumer, but scientists just aren’t interested in that.”
As Tetlock says elsewhere in the book, you get what you measure – and if you measure for Brier scores, you get Brier scores. In the real world, many of the Good Judgment Project questions are useless and trivial, whereas some are probably very important, and many important questions weren’t asked at all.
So, for example, outside of the game, the right answer to the Arctic sea ice question is to use the long term average, call it a day, and move on to some bigger/better question – it’s totally irrelevant what your Brier score on that one is.
It’s obvious when you put it like this because most readers are likely not going to have any trouble not caring about the exact measurement of some sea ice somewhere – but in a business context, altogether too often, business leaders focus on precisely tracking lagging rather than leading indicators.
Does it really matter, on a week-to-week basis, what your market share is of projects that are bid? Or should you rather be focused on providing great customer service, doing R&D to deliver top-notch products, and making sure your cost structure is low so you can bid aggressively but still deliver solid margins?
Put in this context, given that time in a day is limited, there’s a huge opportunity cost to having overly-precise, frequently-updated forecasts for stuff that just doesn’t matter in the long run. You should always keep in mind what the utility – or lack thereof – of any given forecast/decision is, so you can allocate time appropriately.
In my line of work as a value investor, trying to track every leading indicator for, say, comps next quarter for some restaurant, would be a total waste of my time. It doesn’t matter whether they’re down 3.2% or 1.3% or up 1.2%. It’s just totally irrelevant. If the difference between success and failure is 100 bps of comps, it’s not a two-foot hurdle and I’m doing it wrong.
I use Google Alerts sometimes, but usually only for “big” stuff – for example, one investment I had was fairly heavily reliant on the availability of FHA mortgages, so if the government made any meaningfully restrictive moves on credit availability via the FHA, it would’ve been a clear negative, and I would have wanted to know that. So I set up weekly Google Alerts for a variety of relevant search terms. Nothing ever came of it, but I don’t regret doing it.
In general, though, frequent updating and precision – for the reasons Ellenberg and Rosenzweig mention – don’t seem to contribute a lot of value, if you factor in the opportunity cost of being able to use that time to instead go get the “80” somewhere else – instead of the “20” on this individual item.
Application / impact: beware of opportunity costs and marginal utility when deciding whether or not incremental effort in a given direction is worthwhile.