Local vs. Global Optimization Mental Model

If this is your first time reading, please check out the overview for Poor Ash’s Almanack, a free, vertically-integrated resource including a latticework of mental models, reviews/notes/analysis on books, guided learning journeys, and more.

Executive Summary: Local vs. Global Optimization Mental Model

If you only have three minutes, this introductory section will get you up to speed on the local vs. global optimization mental model.

The concept in one sentence: a myriad of factors, ranging from loss aversion to schema bottlenecks, can lead us to get “stuck” at “local optima” that aren’t “global optima” – i.e., we’ve reached the top of a small hill and have nowhere to go but down, which is exactly what’s necessary to find our way through the valley to the next, higher and better hill.

Many publicly-traded companies face local vs. global optimization challenges because of the pressure to meet shareholders’ earnings expectations in the short term, which can disincentivize the management team from pursuing long-term opportunities that might be costly up front.  Similarly, many professional investors are afraid of making investments that might pay off over a multi-year horizon but could look bad enough in the meanwhile to cost them their bonuses… or jobs.  Not pictured: there are piranhas in the Rapids of Next Quarter’s EPS, just to make your day better.  🙂

Key takeaways/applications: becoming more aware of situations in which there may be a tradeoff between what’s best in the long term and what’s best (or easiest) in the short term helps us take concrete structural problem solving steps to ensure long-term health, wealth, and happiness.

Three brief examples of local vs. global optimization:

Intra-business.  Local vs. global optimization problems are common within organizations, classically called “principal-agent” conflicts.  Individuals or departments often prioritize their own immediate needs and interests over what’s best for the entire organization – which, in most cases, would be best for everyone.  In some cases, everyone’s working together well, but heading up the wrong hill because the organization has the wrong goal.  Properly-structured incentives and organizational culture can help here.  Howard Schultz thoroughly dissects in “Onward” (O review + notes) how an overfocus on “comps” (same-store sales) led to Starbucks losing its way in the mid-2000s.

Inter-business.  Local vs. global optimization problems are also faced by entire businesses vis-a-vis their competitive landscape.  Mature businesses are often too focused on defending their own hill to notice there’s a better hill nearby.  The history of business is littered with these to the extent that it’s a trope: “disruption” – the classic treatise on which is Clayton Christensen’s “The Innovator’s Dilemma” (InD review + notes).

Why did a retail giant like Wal-Mart, with all the resources in the world, get caught flat-footed in e-commerce by Amazon?  One answer (of many) is that Amazon had nothing to lose by going all-in on e-commerce, whereas for Wal-Mart, it would mostly have cannibalized existing in-store business.

I’ll diet and exercise… after the holidays.  Local vs. global optimization problems even apply in our own lives, in areas as varied as procrastination, financial decision-making, and diet/exercise, as I’ll discuss in more depth.

Research (and personal experience) demonstrates the concept of “hyperbolic discounting” – which leads to a “planner-doer” dichotomy, where our lives are a struggle between long-term planning and short-term desires.

If this sounds interesting/applicable in your life, keep reading for deeper understanding and unexpected applications.

However, if this doesn’t sound like something you need to learn right now, no worries!  There’s plenty of other content on Poor Ash’s Almanack that might suit your needs.  Instead, consider checking out our discussion of the “n-order impacts,” “product vs. packaging,” or “culture / status quo bias” mental models, or our reviews of great books like “How Not To Be Wrong” (HNW review + notes), “The Halo Effect” (Halo review + notes), or “The Great A&P” (GAP review + notes).

 Local vs. Global Optimization Mental Model: Deeper Look

Sometimes you have to go REALLY far down to reach the higher hill… most hill-climbing algorithms, like public companies or me on a hike, are pretty gosh-darn content with the hill they’re on.

“Hill climbing: analogous to climbing a hill blindfolded.  Move your foot in one direction. If it is downhill, try another direction.  If the direction is uphill, take one step.

Keep doing this until you have reached a point where all steps would be downhill; then you are at the top of the hill, or at least a local peak […] Although it guarantees that [you] will reach the top of the hill, what if [you are] not on the best possible hill?  

Hill climbing cannot find higher hills: it can only find the peak of the hill it started from.”

– the incomparable Don Norman, on page 281 of “The Design of Everyday Things” (DOET review + notes)

Local vs. global optimization is a pretty simple concept to understand, so I won’t belabor the mechanics.  I first learned about it in an operations management class, where the professor discussed that some linear optimization algorithms (such as Excel’s “Solver”) take a similar “hill-climbing” approach to that described by Norman above.
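To make the mechanics concrete, here’s a minimal sketch of that blindfolded hill-climbing procedure in Python.  (The landscape function, starting point, and step size are all hypothetical illustrations for this post – not anything taken from Norman or from Excel’s Solver.)

```python
def hill_climb(f, x, step=0.1, max_iters=1000):
    """Greedy hill climbing: take a step only if it leads uphill."""
    for _ in range(max_iters):
        if f(x + step) > f(x):      # uphill to the right? step right
            x += step
        elif f(x - step) > f(x):    # uphill to the left? step left
            x -= step
        else:
            return x                # every step is downhill: a local peak
    return x

# A landscape with a small hill at x=1 (height 1) and a taller hill at x=4 (height 2).
def landscape(x):
    return max(1 - (x - 1) ** 2, 2 - (x - 4) ** 2)

peak = hill_climb(landscape, x=0.0)
# Starting at x=0, the climber tops out near x=1 (the small hill)
# and never discovers the higher hill at x=4 -- a local optimum.
```

Note that nothing in the procedure lets the climber accept a temporarily downhill step, which is precisely why it can’t cross the valley to the better hill.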

The algorithm does not “see” the higher peak ahead, nor does it care: it’s doing what it’s getting paid to do, just like many business managers.  The professor assigned Eli Goldratt’s “The Goal” as class reading, and as I discuss in The Goal review + notes, it’s a solid read on the topic.  As one of the characters, Jonah, explains on page 61:

“Just remember we are always talking about the organization as a whole – not about the manufacturing department, or about one plant, or about one department within the plant.  We are not concerned with local optimums.”

An example of this has to do with maintenance and rust.  Rust, as Jonathan Waldman explores delightfully in “Rust: The Longest War” (Rust review + notes), is completely not vivid or salient.  That left it open to suffering from local vs. global optimization problems for the Department of Defense:

“Engineers in Dunmire’s office cite competing incentives as a cause of much of their rust troubles.  [DoD personnel…] get graded on performance, schedule, and cost. […] if the missile [fired to Mars this year] rusts to hell [a year later], that’s not his problem.  […]  To save money, they use cheap fasteners.  

They use cheap paints […] by the time the maintenance bill comes through, they figure, I’ll be gone.  It’s not hard to race rust and win.  Officers get their stars, and assets get treated like orphans.”

Subsequently, structural problem solving via a change in culture and a change in incentives – acknowledgement of this reality in contracts – helped solve the problem, as Waldman explores.

Frequent PAA readers know that I try really hard to provide new and unique insights here rather than rehashing the obvious ones you can read in lots of places.

So, while I’ll cite plenty of great books in the recommendation section touching on local vs. global optimization in classic business contexts like “disruption,” here I’m going to try to provide some more novel interpretations.  (If you’re impatient, go ahead and order “Onward” by Howard Schultz (O review + notes), “The Container Store” by Kip Tindell (TCS review + notes), “The Everything Store” by Brad Stone (TES review + notes), or “Made in America” by Sam Walton (MA review + notes) – it turns out Walton was a disruptor himself.)

Local vs. Global Optimization x Loss Aversion x Hindsight Bias x Incentives x Marginal Utility

That’s a lot of mental models, I know, but as Munger likes to put it, the world is “one damn relatedness after another.”

In Richard Thaler’s “Misbehaving” – my favorite book of all time (M review + notes) – he looks at principal-agent problems with a different schema than most people do on pages 188 – 190.  Classically, principal-agent problems are framed in a somewhat moral perspective: Thaler notes that the literature typically presumes executives make poor decisions because they are maximizing their own welfare rather than that of the organization.  An example that investors often bemoan is executives issuing themselves generous low-risk, high-reward options packages.

However, Thaler goes on to add:

“Although this description is often apt, in many cases the real culprit is the boss, not the worker.”  

Why?  Thaler incorporates a number of factors, including expected value, hindsight bias, loss aversion, luck, memory, and incentives, to explain what happens.  In most organizations, executives should (Thaler argues) be compensated on the basis of making ex-ante good decisions with a positive expected value.  The problem is that after the fact, if the bet doesn’t pay off, the boss may, thanks to hindsight bias / flawed memory, believe it was a bad decision all along (because it didn’t work)… and therefore punish the executive who made the decision.


Thaler argues a more effective way would be to broaden the problem framing and evaluate managers’ performance in a broader context: if you set up an incentive structure where people are disproportionately punished for losses in the context of their individual budget, but only moderately rewarded for equal gains (i.e., institutionalized loss aversion), then you’re gonna get a bunch of risk-averse managers.

This is exactly what happened with a large company that Thaler was working with: when presented with the potential for each of his 23 executives to make a meaningfully positive +EV bet, the CEO obviously would have wanted all 23 bets to be made – but only three of the 23 executives would actually have made the bet, even theoretically, let alone in practice!  

One of the managers, in fact, highlighted the loss aversion hypothesis I present above – the manager noted (according to Thaler) that if the bet worked, he might get a bonus equivalent to a few months’ pay… if the bet didn’t work, he thought there was a good shot at him being fired.  Clearly, the personal marginal utility of making the bet on an expected-value basis is meaningfully negative, despite the marginal utility for the firm being meaningfully positive… hence the local vs. global optimization problem.
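The asymmetry is easy to see with back-of-the-envelope numbers.  (All of the figures below are my own hypothetical illustrations of the dynamic, not numbers from Thaler’s book.)

```python
# Hypothetical numbers: a project with positive expected value for the
# firm can have negative expected value for the manager deciding on it.

p_success = 0.5

# Firm's payoffs (dollars)
firm_win, firm_loss = 2_000_000, -1_000_000
firm_ev = p_success * firm_win + (1 - p_success) * firm_loss
# firm_ev = +$500,000: the firm wants every one of these bets taken.

# Manager's personal payoffs: a modest bonus if it works,
# a serious chance of being fired if it doesn't.
bonus = 25_000                 # a few months' pay
p_fired_if_loss = 0.5
cost_of_firing = 300_000       # lost income while job hunting, etc.
manager_ev = p_success * bonus - (1 - p_success) * p_fired_if_loss * cost_of_firing
# manager_ev = -$62,500: individually rational to decline, collectively costly.
```

With payoffs shaped like this, declining the bet is the locally optimal move for each of the 23 executives, even though taking all 23 bets is the globally optimal move for the firm.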

Application/Impact: Poorly-designed incentive structures, or behavioral biases, can cause unintentional local vs. global optimization challenges within an organization.  Thaler extends this concept to an organization of one (ourselves) with the “planner-doer” model of human behavior and decision-making, which is intriguing enough that I view it as its own special mental model.

Local vs. Global Optimization x Social Connection

This is a short but hopefully thought-provoking exploration.  One of the more intriguing and unusual applications of the local vs. global optimization model that I’ve run across is with regards to social connections.

It’s touched upon by Shawn Achor in “The Happiness Advantage” (THA review + notes) and “Before Happiness” (BH review + notes), where he discusses, for example, the tendency of some students to isolate themselves to study when they’re stressed – which research demonstrates is precisely the wrong response, thanks to the medically-demonstrated stress-busting properties of social connection.  Dr. Judith Beck observes many of the same phenomena in “Cognitive Behavior Therapy” (CBT review + notes).

What’s going on mentally is that these students are making decisions that seem reasonable (perhaps even optimal) in the short term, but they unintentionally end up climbing the wrong hill.  This extends well beyond the dormitory: Achor notes in Before Happiness that as we pursue our careers, sometimes we’ve “highlighted the wrong meaning markers” and chosen a path studded with negatives, rather than the things we truly care about.  The antidote is stopping every so often to make sure that our daily decisions are still “steer[ing] us toward accomplishing meaningful goals.”

Of course, Achor’s not the only one to point this out: Stephen Covey did so decades previously in the landmark “The 7 Habits of Highly Effective People”:

“To begin with the end in mind means […] to know where you’re going so that […] the steps you take are always in the right direction.  It’s incredibly easy to get caught up in an activity trap, in the busyness of life, to work harder and harder at climbing the ladder of success to discover it’s leaning against the wrong wall.  It is possible to be busy – very busy – without being very effective.”

7 Habits does a fantastic job of deeply exploring how principles like these apply in life and business (7H review + notes).  Even more broadly, husband-wife psychiatrist team Jacqueline Olds and Richard Schwartz tackle this problem at the scale of America as a whole in “The Lonely American” (TLA review + notes).

While I’m not an unabashed fan of TLA for a few reasons, I do think it offers a valuable explanation of how making locally optimal decisions – ones that have the unintended (or intended) consequence of isolating ourselves from our social ties – can lead us to a place that we didn’t intend to get to:

“The argument that people are happier when they can spend more time alone seems to make so much sense on a daily basis,” Olds/Schwartz acknowledge, “yet over the course of a life (and a country’s life) it is simply wrong,” citing research on health and happiness outcomes that corroborates their point.

TLA is, unfortunately, somewhat short on helpful advice on how to counter these types of phenomena, other than the fairly obvious recommendation to “join a choir.”  The authors do note, however, that “people […] somnambulate their way to lonely despair without even recognizing how or why they are doing it.  If one can bring oneself to acknowledge loneliness, half the battle is won,” and the book does a great job of driving that realization home.

Application/Impact: As with many of life’s “hard problems,” there are perhaps no easy answers for how to bridge the highly fragmented nature of modern life – grew up here, college there, graduate school elsewhere, first job left, next job right – with the beneficial long-term impact of having close social ties.  However, simply understanding the research and the process by which innocuous, even laudable minor decisions can lead us down a path we don’t intend to travel can help mitigate the results.

Local vs. Global Optimization x Confirmation Bias x Self-Justification x Ideology

Last one, y’all!

An important part of becoming more rational by building a toolbox of mental models is adopting a growth mindset and realizing, as Dr. Beck notes in “Cognitive Behavior Therapy” (CBT review + notes), that even if you’re aware of your own automatic thoughts – a product of your schema – you most likely “accept them uncritically […] you don’t even think of questioning them.”

It’s not hard to see why most people do this: it’s habit… it’s comfortable… it’s easy.  And yet, from a therapeutic standpoint, “the quickest way to help patients feel better and behave more adaptively” is to get them to identify these thoughts, the core beliefs those thoughts are based on, and replace those core beliefs with more adaptive ones.  Why?

“Once they do so, patients will tend to interpret future situations or problems in a more constructive way.”

Or, as she notes later in the book, becoming aware of these thoughts helps us to “automatically do a reality check and spontaneously (i.e. without conscious awareness) respond to the thought in a productive way.”

This is, of course, analogous to Philip Tetlock’s observation on page 236 of Superforecasting:

“My sense is that some superforecasters are so well practiced in System 2 corrections – such as stepping back to take the outside view – that these techniques have become habitual.  In effect, they are now part of their System 1.”

Indeed, outside of a mental health context, the exact same process Beck describes in her book can be utilized to test your existing beliefs against the world and, if necessary, replace them with more adaptive beliefs (i.e., accurate / useful mental models).  Yet this can often require going through a “valley of pain” (like the one I drew) in the short term.

Why?  Well, there are a lot of reasons.  For example, I used to adhere to the MBB consultant paradigm of “often wrong, never in doubt.”  I was opinionated… I was loud… I was certain that I, in all of my less than a decade of sentient quasi-adulthood, could come to the right conclusion on any topic without any research whatsoever because I was just that smart.

That was obviously dumb.  And yet the world, perversely, tends to incentivize this sort of view by providing positive feedback for it.  I discuss this more in the overconfidence and probabilistic thinking mental models.  As Philip Tetlock notes repeatedly in the revelatory “Superforecasting” (SF review + notes), confidence is sexy.  It plays well on TV to (maybe literally) bang your fist on the table and espouse a hardline, provocative view.

But that’s exactly the wrong way to go about things.  A substantial body of research suggests that being a “hedgehog,” or a man with a hammer – i.e., someone who tries to cram the world through one ideology without a lot of nuance – is a less effective way to operate than being a “fox” who, as John Lewis Gaddis discusses in “The Landscape of History” (LandH review + notes), instead uses different paradigms to explain different phenomena, thereby coming to a more nuanced view.

Are you starting to see the local vs. global optimization angle here?  If you want to get to the “global optimum” of being a world-class forecaster (or thinker in general), you need to make locally non-optimizing decisions.  How are people around you likely to perceive you if you start communicating in the nuanced, “many-handed” way that Tetlock’s eponymous superforecasters did?  The sad answer is “not well.” On pages 138 – 139 of Superforecasting, Tetlock explains,

“people equate confidence and competence[…] one study noted, ‘people took such judgments as indications that the forecasters were either generally incompetent, ignorant of the facts in a given case, or lazy, unwilling to expend the effort required to gather information that would justify greater competence.’”

Here is my visualization of the Tavris/Aronson “pyramid” of self-justification. You’re happiest at the top, where you’re totally honest with yourself; each step down represents a lie or a mistake. It’s obviously easier to go with gravity/momentum and take another step down… and harder to turn around, look up at how hard you’ve fallen, and take that first step up. In other words, to reach the global optimum, you first have to take a bunch of locally suboptimal, uncomfortable steps, analogous to a company that needs to invest meaningfully (and take a hit to near-term earnings) to stay ahead of important shifts in the competitive landscape.

Setting aside how other people perceive us, a similar problem can be encountered in our own personal development.  Tavris and Aronson deeply explore the concept of self-justification in “Mistakes Were Made” (MwM review + notes), using a very illustrative “pyramid” model to explain how we can come to believe our own lies and excuse our own mistakes (even if they’re heinous).

Without going too deep into the details – see contrast bias for more – they note that the process of deceiving ourselves starts with a small, relatively innocuous step… but each further step takes us farther down the side of the pyramid.

The First Rule of Holes is “stop digging,” but at any given point in time, the easier step to take is down, not up – perpetuating the lie or refusing to admit the mistake to protect our ego – even though that step leads us in a direction we don’t want to go.  They note succinctly on page 9 that “mindless self-justification, like quicksand, can draw us deeper into disaster.”

In my years of reading and mentoring, I’ve noticed that the hardest step for many people – whether as value investors, mental model builders, or otherwise – is to translate theory into action, because you have to confront the unfortunate reality that this stuff is hard.  There’s always an easier path than the globally optimal one.

It’s much more pleasant in the short term to succumb to confirmation bias and tune out information that conflicts with our existing views… feedback that we don’t want to hear, or reasonable arguments that some of our closely-held ideology may not make a whole lot of sense after all.  It’s a bitter pill to swallow to admit to ourselves that yeah, we were wrong, or that maybe we’re not as good at X as we thought we were, or we totally whiffed on the opportunity to do Y when we should’ve.

But Tavris/Aronson note in Mistakes Were Made (but not by me) – MwM review + notes – that in the context of marriages, politics, and a number of other fields, acknowledging these mistakes and adopting a growth mindset is the only reliable way forward.  That is the globally optimal decision that will lead to the most successful and fulfilling life… but it requires making some decisions that are locally uncomfortable along the way.

Again, don’t forget to check out the awesome planner-doer / hyperbolic discounting model (from Thaler) for more on local vs. global optimization with respect to time in our own lives: i.e., why we have a hard time saving, dieting, and so on.
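For a quick numeric sketch of the preference reversal that hyperbolic discounting produces: under the standard textbook form V = A / (1 + kD), rewards lose value sharply as soon as any delay appears.  (The discount rate k below is an arbitrary illustrative choice, not a figure from Thaler.)

```python
def hyperbolic_value(amount, delay_days, k=0.2):
    """Present value under hyperbolic discounting: V = A / (1 + k * D).
    (k = 0.2/day is an arbitrary illustrative discount rate.)"""
    return amount / (1 + k * delay_days)

# The choice seen from a month out: $100 in 30 days vs. $110 in 31 days.
# The "planner" patiently prefers the larger, later reward.
assert hyperbolic_value(110, 31) > hyperbolic_value(100, 30)

# The same choice when it finally arrives: $100 today vs. $110 tomorrow.
# The "doer" flips and grabs the immediate reward -- preference reversal.
assert hyperbolic_value(100, 0) > hyperbolic_value(110, 1)
```

That flip – consistent patience at a distance, impatience up close – is exactly the planner-doer tension: the same person ranks the same two options differently depending only on how near the smaller temptation is.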

Application/Impact: if getting to “optimal” was easy, everyone would do it, but there’s always a tempting brownie between us and our fitness goals, or a mindless TV show between us and that book we’re reading (or writing).  Having a clear goal of where we want to get to, and ensuring that we don’t allow ourselves to coast and make easy-but-bad decisions in the short term, can help ensure we reach that global optimum.