Humans vs. Econs Mental Model

If this is your first time reading, please check out the overview for Poor Ash’s Almanack, a free, vertically-integrated resource including a latticework of mental models, reviews/notes/analysis on books, guided learning journeys, and more.

Executive Summary Of The Humans vs. Econs Mental Model:

If you only have three minutes, this introductory section will get you up to speed on the humans vs. econs mental model.

The concept in one quote:

“If you look at economics textbooks, you will learn that homo economicus can think like Albert Einstein, store as much memory as IBM’s Big Blue, and exercise the willpower of Mahatma Gandhi.  Really.

But the folks that we know aren’t like that.  Real people have trouble with long division if they don’t have a calculator, sometimes forget their spouse’s birthday, and have a hangover on New Year’s Day.  

They are not homo economicus; they are homo sapiens.”

– Cass Sunstein and Richard Thaler in "Nudge" (NDGE review + notes)

Key takeaways/applications: one of the most common schema bottlenecks around is that we tend to see the world either through the lens of our own perspective, or through the lens of desire bias (the way we’d like the world to be).  It would be awfully convenient if people acted in an orderly, logical fashion, but a lot of the time they don’t, and we have to design our lives (and businesses) to accommodate that reality.

Three brief examples of humans vs. econs:

In the classic South Park episode “Awesome-O,” Eric Cartman poses as an omniscient robot and Hollywood swoops in to use him to come up with guaranteed-blockbuster film scripts. As Megan McArdle explains in “The Up Side of Down” (UpD review + notes), this is actually profoundly unrealistic – because movie-viewers are unpredictable humans, not rational econs. (Citing the Music Lab experiment, Michael Mauboussin makes a similar point about big music hits in “The Success Equation” – TSE review + notes).

Fully rational “econs” would never sit on the couch for an hour watching a TV show they don’t really like because the remote is a few steps away… yet because of finite willpower, status quo bias, and activation energy, real humans behave this way.  Like many traits, this reality is neither good nor bad: it can be adaptive under certain circumstances if we harness it the right way.

“Econs” would always respond perfectly to incentives and maximize their own self-interest, but under various circumstances real “humans” act differently: they value fairness and, thanks to social proof, are sometimes willing to give up some of their “rightful” property to be seen as fair… but in other cases (like divorce court and simulated lab experiments), humans will intentionally make themselves worse off to spite someone else who wronged them.  This is a critical understanding in negotiation, as discussed in Fisher/Patton/Ury’s “Getting to Yes” (GTY review + notes).

Finally, “econs” would never discount hyperbolically or be subject to recency bias or salience / vividness bias, but real humans do – rendering the “efficient market hypothesis” and much of the academic finance based on it (like the Fama-French model) into complete nonsense, as thoroughly demonstrated by Thaler, Howard Marks, and others.

If this sounds interesting/applicable in your life, keep reading for unexpected applications and a deeper understanding of how this interacts with other mental models in the latticework.

However, if this doesn’t sound like something you need to learn right now, no worries!  There’s plenty of other content on Poor Ash’s Almanack that might suit your needs. Instead, consider checking out our discussion of the network effects, man-with-a-hammer, or planning fallacy mental models, or our reviews of great books like “To Engineer Is Human” (TEIH review + notes), “The Goal” (Goal review + notes), or “The Violinist’s Thumb” (TVT review + notes).

A Deeper Look At The Humans vs. Econs Mental Model:

“Amos [Tversky] asked Mike [a professor who believed in the rational-actor economics premise] to assess the decision-making capabilities of his wife.  Mike was soon regaling us with stories of the ridiculous economic decisions she made, like buying an expensive car and then refusing to drive it because she was afraid it would be dented… [he] rattled off silly mistakes [his students] made, complaining about how slow they were to understand the most basic economics concepts…

Then Amos went in for the kill.  “Mike,” he said, “you seem to think that virtually everyone you know is incapable of correctly making even the simplest of economic decisions, but then you assume that all the agents in your models are geniuses.  What gives?”

– “Misbehaving” by Richard Thaler, p. 51 (M review + notes)

In Don Norman’s phenomenal “The Design of Everyday Things,” Norman – a former engineer himself – cites the classic engineer’s frustration:

“If only people would read the instructions[!]”

Norman, like Sunstein/Thaler, recognizes that we’re all humans, not econs – a recognition that forms the foundation of his “human-centered design” approach, or what I call structural problem solving.

It’s a crime against humanity that Daniel Kahneman’s dull, pessimistic “Thinking, Fast and Slow” is cited so frequently (TFS review): Thaler’s “Misbehaving” (M review + notes) is an infinitely superior read on behavioral economics and cognitive biases – so much so that it’s my favorite book of all time.

While you should definitely check out that model (it’s probably my favorite), one of the key takeaways is that you can’t go around expecting people to behave the way they should… you should expect people to behave the way they do.

This seems obvious, but research by Thaler and others demonstrates clearly that it’s not.  Why? First of all, there tends to be selection bias for the engineers, lawyers, and business people who design, regulate, and sell the products in the world around us: they tend to be analytical, and thus imagine the rest of the world is analytical too.

Second, people in these professions are often provided with formal education on important concepts like sunk costs and opportunity costs, which Thaler’s research suggests has the effect of making them think more like “econs.”

This is generally good, in the sense that we usually should think like econs, but it has a perverse n-order impact: it can blind business leaders to “human” concerns like fairness or memory, leading to policies or products that make total theoretical sense but don’t work out in the real world.

Of course, as Thaler’s “Misbehaving” illustrates vividly, even economists aren’t immune from acting like humans when it comes to their theories (and their offices).  Similarly, Jerome Groopman notes in the fantastic “How Doctors Think” (HDT review + notes) that doctors are humans, too:

“Most people assume that medical decision-making is an objective and rational process, free from the intrusion of emotion […] yet the opposite is true.”

The concept is pretty straightforward and I know y’all get the point by now, so let’s dive straight in to the interactions.

Humans vs. Econs x Incentives

Many readers may be aware of the famous Charlie Munger quote on “two-track analysis” as it relates to investments:

“Personally, I’ve gotten so that I now use a kind of two-track analysis. First, what are the factors that really govern the interests involved, rationally considered? And second, what are the subconscious influences where the brain at a subconscious level is automatically doing these things – which by and large are useful but often malfunction?”

The longer I’ve spent analyzing companies, the more I realize this is true: most public company CEOs’ chief complaint about investors is that they view companies as disembodied numbers floating on a spreadsheet rather than actual living, breathing entities that (being comprised of humans) have all the standard challenges of humans.

Two specific investment case studies from my own experience highlight this phenomenon – one in this section, one in the next.  The first was possibly the worst mistake I’ve ever made: investing in a specialty equipment-rental company that appeared to have significant hidden asset value.

The company’s subsequent complete failure (and I literally mean “failure” – one of the company’s subsidiaries went bankrupt, and the other sold for pennies on the dollar) was multicausal, of course – one of the big problems was path-dependency thanks to heavy leverage.  (Like many mistakes, this one proved to have a very positive long-term impact, as it made me extraordinarily averse to leverage, among other factors.)

But one of the angles that I failed to consider was the human angle.  The company’s Board of Directors included several large shareholders who had used their entrenched position to help themselves to egregious compensation (relative to the company’s woeful struggles).  As the share price declined and the company circled the drain, outside shareholders mounted a proxy battle with widespread support.

It was, rationally speaking, in the Board’s best interest to comply with shareholder demands: the value to be unlocked from their ownership stake was massively larger than their oversized fees, and I (and several other shareholders) therefore assumed that they’d do the “rational” thing.

If only we’d spent more time reading Peanuts and less time reading loan docs…

Tavris/Aronson’s fantastic “Mistakes Were Made (But Not by Me)” (MwM review + notes) was apparently preempted by Charles Schulz decades prior… of course, Linus is more “human” than “econ” if you take away his blankie.

We failed to take into account the endowment effect, fairness, self-justification, and loss aversion: people overvalue what they have and, over time, come to feel entitled to whatever it is that they have (even if it was ill-gotten).  Even though it was profoundly irrational (emphasis on “profoundly”) from the directors’ personal financial perspectives to spend shareholder dollars to keep themselves entrenched, that’s exactly what they did… winning the battle but losing the war (since the company continued to fall apart in the meantime).

Application/Impact: many businesspeople and investors tend to overfocus on the rational, spreadsheet-able factors.  In the real world, incentives are a much more complicated issue, and a thorough understanding of human psychology, including intrinsic vs. extrinsic motivation (not yet written – coming soon!), will provide a more accurate perspective.

Humans vs. Econs x Local vs. Global Optimization

Local vs. global optimization – the idea that sometimes, decisions that seem best in the short-term make no sense in the long-term (and vice versa) – is one of the most overlooked mental models I know of.  It pops up all the time in business contexts, and I’ve seen it play out firsthand.

One company I followed (and have owned shares in on and off) made consumable perforation products for oil wells.  By perforation products, I mean shaped charges and all the other parts needed to make them blow holes in a well casing to let the oil flow through… getting to, ahem, test one of those shaped charges (behind thick reinforced concrete safety barriers, of course) might have been one of the highlights of my due diligence trip.

Anyway, the company introduced some unique, high-technology products during the major oil downturn in 2016, which cost more than existing products on the market but delivered meaningful total-cost-of-ownership savings.  You would think that in a downturn, companies would be looking to maximize their return on investment in new wells, so this sort of product would be clamored for, right?

Well, not exactly.  Most managers at exploration and production companies acted like humans, not econs.  Adopting a cost-saving product would have been the rational decision, but instead, most companies optimized locally, invoking a sense of fairness: it’s a tough environment and we are not, under any circumstances, going to pay more for newfangled technology.

In fact, it seems like many companies explicitly incentivized their purchasing departments – perhaps not with extra money, but with loss aversion (do this or you’ll lose your jobs) – to push for discounts on all products.

In a purely “rational” world, the lack of meaningful traction of the new technology might have suggested, via Bayesian reasoning, that it wasn’t as good as it was cracked up to be.  But my sample size was larger than just this one company, and I was hearing the same thing in a lot of places, so I thought it was reasonable not to write the product off yet.
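To make the Bayesian point concrete, here’s a minimal sketch – all the probabilities are invented for illustration, not drawn from the actual case.  The idea: if buyers were rational “econs,” weak early adoption would be strong evidence against the product; if buyers are “humans” optimizing locally in a downturn, weak adoption is likely either way, so it carries almost no information.

```python
# Hedged illustration (all numbers invented): how the same evidence
# ("weak early adoption") updates belief in "the product is genuinely
# better" under two different models of the buyers.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

prior = 0.6  # initial belief the product delivers real TCO savings

# If buyers were rational "econs," weak adoption would be unlikely
# for a genuinely better product (P = 0.2), so the evidence bites hard:
econ_view = bayes_update(prior, p_evidence_if_true=0.2, p_evidence_if_false=0.9)

# If buyers are "humans" cutting every cost in a downturn, weak adoption
# is likely whether or not the product is good, so belief barely moves:
human_view = bayes_update(prior, p_evidence_if_true=0.8, p_evidence_if_false=0.9)

print(round(econ_view, 2))   # → 0.25 (belief collapses)
print(round(human_view, 2))  # → 0.57 (belief barely moves from 0.6)
```

The point isn’t the specific numbers – it’s that “what does this evidence mean?” depends entirely on your model of the people generating it.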

Well, my only mistake was not taking my own analysis seriously enough, because hoo boy did the product take off once E&Ps stopped reacting out of fear and started thinking a bit more rationally:

Chart created in Sentieo.

Application/impact: even well-trained business managers can succumb to “human” tendencies to discount hyperbolically and prioritize near-term fears and concerns over the long-term view.  As with most human traits, this should be viewed as neither universally good nor bad, but rather selectively adaptive – and whether it hurts or helps you depends on how you frame it.
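For the quantitatively inclined, hyperbolic discounting can be sketched in a few lines – the discount parameter below is invented purely for illustration.  A standard hyperbolic form values a payoff as amount / (1 + k × delay); unlike the exponential discounting a rational “econ” would use, it produces preference reversals: we grab the smaller-sooner reward when it’s immediate, but flip to the larger-later reward when both options are pushed into the future.

```python
# Hedged sketch (k is an invented parameter): hyperbolic discounting
# V = amount / (1 + k * delay) generates preference reversals that
# exponential ("econ") discounting never does.

def hyperbolic_value(amount, delay_days, k=1.0):
    """Subjective present value of a delayed payoff under hyperbolic discounting."""
    return amount / (1 + k * delay_days)

# Today: $100 right now feels better than $110 tomorrow...
assert hyperbolic_value(100, 0) > hyperbolic_value(110, 1)

# ...but push both options a month out and the preference flips,
# even though the gap between them is still exactly one day:
assert hyperbolic_value(100, 30) < hyperbolic_value(110, 31)
```

Under exponential discounting, the ratio between the two options would be the same at any delay, so no reversal could occur – which is exactly why “humans” routinely do things no “econ” model predicts.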