Feedback Mental Model (Incl Decision Journaling, Autocatalysis, Autoinhibition)

If this is your first time reading, please check out the overview for Poor Ash’s Almanack, a free, vertically-integrated resource including a latticework of mental models, reviews/notes/analysis on books, guided learning journeys, and more.

Feedback Mental Model: Executive Summary

If you only have three minutes, this introductory section will get you up to speed on the feedback mental model, including autocatalysis and autoinhibition.

The concept in one quote:

Ben Franklin... accepted that smarting rebuke. He was big enough and wise enough to realize that it was true, to sense that he was headed for failure and social disaster. So he made a right about-face. - Dale Carnegie

(from “How To Win Friends and Influence People” – HWFIP review + notes)

The concept in one sentence: systems (including people) respond to feedback, which can lead to important n-order impacts; however, feedback needs to be visible (salient) and often delivered in a certain way.

Key takeaways/applications: Adding feedback to a system – or simply making it clearer – can have massive impacts.

Three brief examples of feedback, autocatalysis, and autoinhibition:

Megan, I’m afraid you get a stormy cloud sticker today. 

We outgrow smiley-face and frowny-face stickers by, like, third grade… right?  Wrong: it turns out that simple smiley or frowny faces on people’s thermostats can “nudge” their energy consumption downward by a few percent, as explored by Sunstein/Thaler in “Nudge” (NDGE review + notes) and Don Norman in “The Design of Everyday Things” (DOET review + notes).

A few percent may not sound like much, but over millions of households, it adds up to billions of dollars worth of energy savings.  What’s the mechanism here? It’s clear and salient feedback, translating the totally abstract cost of cranking down your thermostat into an easy-to-understand visual: Bad Deadpool.  Good Deadpool!  (There’s a social proof angle here, too.)

The good kind of central dogma.  Many of the processes in our bodies work on autoinhibitory feedback, where an action slows down the process that created it.  As Till Roenneberg notes in “Internal Time” (IntTm review + notes), for example, a very simplified model of our circadian clock is as follows: clock gene DNA is transcribed to mRNA, which is translated into a protein.

When enough protein has been made, it ends up inhibiting mRNA transcription, and eventually all of the proteins are “destroyed”… and the cycle starts again.

This process, underlying most of our biological functions, is overviewed well in Siddhartha Mukherjee’s “The Gene” (Gene review + notes) – a book I have severe reservations about for reasons discussed in the review, but one that’s still worth reading.
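To make the autoinhibitory loop concrete, here’s a minimal toy simulation of a delayed negative-feedback loop in Python. The variable names, rate constants, delay, and Hill-style repression term are all illustrative assumptions rather than measured biology – the point is just the shape of the dynamics: protein builds up, throttles its own production, decays, and production resumes.

```python
from collections import deque

def simulate_autoinhibition(steps=5000, dt=0.01, delay=2.0):
    """Toy clock-gene loop: mRNA -> protein, with the protein (after a lag)
    repressing further transcription. All constants are illustrative."""
    lag_len = int(delay / dt)
    lag = deque([0.0] * lag_len, maxlen=lag_len)   # protein levels from `delay` time units ago
    mrna, protein = 1.0, 0.0
    trace = []
    for _ in range(steps):
        delayed_protein = lag[0]
        # Transcription falls off sharply once the (lagged) protein level is high.
        transcription = 1.0 / (1.0 + delayed_protein ** 4)
        mrna += (transcription - 0.3 * mrna) * dt        # production minus decay
        protein += (0.5 * mrna - 0.3 * protein) * dt     # translation minus degradation
        lag.append(protein)                              # oldest value falls off the front
        trace.append((mrna, protein))
    return trace

# Plotting `trace` shows the cycle described above: protein accumulates, suppresses
# transcription, gets degraded, and production resumes (how crisply it oscillates
# depends on the constants you pick).
```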

You’re a vicious kind, ‘cause you’ve lived this life… where you’re allowed to win.  Vicious cycles – or virtuous circles – are particularly important applications of “autocatalysis,” a form of feedback that works the opposite way from the process described above.  

For example, Henry Petroski explores in “To Engineer is Human” (TEIH review + notes) the real-life analogy to Rearden Metal: railroads catalyzed the need for substantially stronger bridges and created an autocatalytic feedback loop.  Better bridges enabled more economically profitable train routes, which led to bigger and faster trains and created the need for more bridges…

But the opposite of this sort of virtuous circle is a vicious cycle.  It’s well-known, as Charles Duhigg mentions in “The Power of Habit” (PoH review + notes), that stress can be a trigger for recovered alcoholics to fall off the wagon.

What Duhigg doesn’t mention is that alcohol massively depresses REM sleep.  A lack of REM sleep – as Dr. Matthew Walker phenomenally explains and explores in “Why We Sleep” (Sleep review + notes) – decouples our rational prefrontal cortex from our emotional amygdala, thus making it even harder for us to exercise willpower and make logical decisions.

You can see the autocatalytic feedback loop here: stress causes alcoholics to drink, which leads to them being less able to make good decisions, which leads to more stress, which they solve with more alcohol…
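For contrast with autoinhibition, here’s an equally toy sketch of autocatalysis – the sign of the feedback flipped, with made-up numbers: each unit of output makes further output more likely, so the quantity compounds until something external (a resource limit, an intervention) breaks the loop.

```python
def simulate_autocatalysis(initial=1.0, gain=0.05, steps=100, cap=None):
    """Toy positive-feedback loop: output feeds back as its own growth signal.
    `cap` optionally models an external constraint that breaks the loop."""
    level = initial
    trace = [level]
    for _ in range(steps):
        level += gain * level          # more output -> faster growth (compounding)
        if cap is not None:
            level = min(level, cap)    # e.g., a resource limit or an intervention
        trace.append(level)
    return trace

# Without a cap the trace grows exponentially; with one, growth looks gradual
# right up until it slams into the constraint -- a pattern that comes up again below.
```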

If this sounds interesting/applicable in your life, keep reading for unexpected applications and a deeper understanding of how this interacts with other mental models in the latticework.

However, if this doesn’t sound like something you need to learn right now, no worries!  There’s plenty of other content on Poor Ash’s Almanack that might suit your needs. Instead, consider checking out our learning journeys, our discussion of the inversion, sleep, or structural problem solving mental models, or our reviews of great books like “Bellevue” (BV review + notes), “Deadly Choices” (VAX review + notes), or “Polio: An American Story” (PaaS review + notes).

Feedback Mental Model: A Deeper Look

PC Load Letter? The !@#$ does that mean?!?! - Michael Bolton

I strongly maintain that you can learn more mental models from watching the Mike Judge classic “Office Space” than you can from reading certain very popular books written by supposed intellectuals.

Michael’s onto something there: no less a luminary than the great Don Norman in “The Design of Everyday Things” (DOET review + notes) stresses the importance of clear and timely feedback in the design of any product or system that humans will interact with.

Bad feedback can prevent us from doing our jobs well… and sometimes inspire bouts of printer-destroying rage.

There is nothing wrong with my car, according to my rockstar VW mechanic (check him out if you’re in TX and own a VW / Audi – he’s highly regarded among VW enthusiasts) – but this stupid light has been on for, like, a year now, and comes back on every time he or I turn it off. Worse, it doesn’t even give any error codes when you scan it.  Feedback at its worst: now I won’t know if there’s *actually* a problem with my engine. How hard would it be to have two separate lights, one for “major problem” and one for “minor problem?”

One example of a terribly unclear piece of feedback many of us encounter from time to time is the “check engine” light on a car’s dashboard, the meaning of which spans the gamut from “I’m in a bad mood and want to annoy you” to “I AM ABOUT TO EXPLODE.”  (Yes, I’m anthropomorphizing my car.)

Among Norman’s many insights on feedback is that too much of it can actually be bad/overwhelming. 

A cascade of loud alert sounds (think video game “meltdown” levels) can, like a backseat driver saying “careful!” every five seconds, be distracting and reduce our ability to focus and respond appropriately.  

A similar story involves pilots forgetting to fly the plane because they were focused on responding to an error message – so “FLY THE PLANE” is now an item on pilot checklists.

Feedback can also be useful for making salient something that otherwise isn’t: for example, we can’t see a computer thinking, so the little hourglass icon helps us know that the computer isn’t completely ignoring us.  (Just mostly.)

Charles Duhigg’s “The Power of Habit” (PoH review + notes) provides a few other great examples.  Feedback in the form of foaming and tingling turns out to be critical in helping consumers know that their shampoo and toothpaste are working… even though tingling and foaming don’t make your mouth, or head, any cleaner.  Duhigg quotes Tracy Sinclair, a brand manager for Oral-B and Crest Kids toothpaste:

“Consumers need some kind of signal that a product is working […] as long as it has a cool tingle, people feel like their mouth is clean.  The tingling doesn’t make the toothpaste work any better.  It just convinces people it’s doing the job.”

There are other powerful/great examples (such as Febreze) in Duhigg’s book; feedback turns out to be critical in habit formation.

By now, the idea of clear feedback should be pretty clear (see what I did there?).  So let’s move on to the interactions.  There are a number of them around the site – for example, in zero-sum games, in empathy, and elsewhere – so here we’ll focus on a few of the big ones.

Feedback x Incentives x Salience x Trait Adaptivity x Local vs. Global Optimization

Mental models are always more interesting when you start integrating multiple angles.  Take feedback.

One clear, obvious type of feedback is incentives: if someone gets paid (whether in money, reputation, or dopamine) to do something, they’ll keep doing it.  If someone gets punished (whether in money, reputation, or dopamine) for doing something, they’ll usually stop doing it.

Sometimes these incentives can create little autocatalytic feedback loops.  For example, in “How Doctors Think” (HDT review + notes), Dr. Jerome Groopman examines feedback interacting with incentives to drive journal publication of researchers’ articles:

“When researchers have rigorous, groundbreaking data to announce, they try to publish in one of the top-tier journals; by the same token, these journals seek out epochal reports to add to their luster.”

Jordan Ellenberg goes deeper into this topic in the witty “How Not To Be Wrong” (HNW review + notes), exploring how the “replication problem” is partially driven by the fact that it’s not sexy, lucrative, or intellectually stimulating to merely confirm other people’s research.

So far, so obvious.  But here’s where it starts to get interesting: let’s add in the idea of selection pressure, part of the trait adaptivity model.  People – and organizations – will naturally reinforce traits that are adaptive in a given environment, which makes total sense as long as the environment doesn’t change, but can wreak havoc if it does.

One of the classic business examples is disruption: it’s not so much that businesses that fall by the wayside are stupid dodos destined for the dustbin; often, as Clayton Christensen explores in “The Innovator’s Dilemma” (InD review + notes), they’re simply making the right decisions locally and ending up in the wrong place globally: local vs. global optimization.

Michael Mauboussin, for example, argues cogently in “The Success Equation” (TSE review + notes) that companies can:

“fall prey to organizational rigidities […] exploiting known markets requires optimizing processes and executing effectively, and leads to reliable, near-term success.”

Part of the problem is that given the way disruptors grow exponentially – or even faster – e-commerce isn’t a threat for years and years and years, until suddenly it’s pretty much the only threat that matters.  It’s amazing how quickly, over the early 2010s, e-commerce started to exact a toll on brick-and-mortar retailers.

This is not merely a business problem, but an ecological one: Geoffrey West gives the example in “Scale” (SCALE review + notes) – a great book on nonlinearity – of how resource constraints can appear very suddenly.

A bacterial population doubling every minute from 8:00 AM until noon won’t have reached half its final size until 11:59 AM – roughly 99.5% of the way through the available time.  If there’s a hard resource constraint at noon, it’ll seem to come upon you very suddenly.
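A quick back-of-the-envelope check of that arithmetic (the 240-minute window is just the example’s setup):

```python
# Doubling every minute from 8:00 AM to noon = 240 doublings.
minutes = 240
final_size = 2 ** minutes

half_size_minute = minutes - 1             # one doubling before the end: 11:59 AM
print(half_size_minute / minutes)          # ~0.996 -> about 99.5% of the time elapsed
print(2 ** half_size_minute / final_size)  # 0.5    -> yet only half the final size
```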

This can play out in a real-world ecological context: Mark Kurlansky’s “Cod” explores how harvests were so bountiful that people thought the supply of cod was inexhaustible… until suddenly, it wasn’t.

Now let’s turn to our keystone argument: one regarding human beings (always the most interesting context).  Megan McArdle makes the point in “The Up Side of Down” (UpD review + notes) that high school often gives highly intelligent kids very bad feedback: they don’t develop study skills, or the need to apply themselves, because they can coast through their classes on natural intelligence.

That environment, in other words, selects against effort – more technically, laziness is not non-adaptive in that environment (even if it’s not actively adaptive).  Incentives take care of the rest: why would you bother studying hard if you can just goof off most of the time, cram the night before, and ace the test?  It’s a completely reasonable response – one I engaged in, myself, for much of my life – and it’s terrible feedback, because eventually (in college, graduate school, or the workforce) you reach a level where you do need to put in effort.

Similar arguments, of course, apply to athletes.  One well-known phenomenon in the NFL is rookies having to climb a steep technique learning curve: they spent much of their high school and college careers dominating on the basis of superior strength, speed, or other physical gifts – which was bad feedback.  All of a sudden, on a level athletic playing field, techniques like route-running, pocket footwork, and pass-rush moves become much more important.

Where am I going with all of this?  Toward the end of the book, McArdle provides a phenomenal discussion of an innovative parole system in Hawaii.  As McArdle puts it, given under-resourced parole officers and many criminals’ upbringing in homes without consistent feedback, their lives often seemed to proceed like this:

“nothing… nothing… nothing… nothing… nothing… bam!  Five year prison term.”

That’s obviously an incredibly difficult sequence to learn from.  Feedback is completely absent.  McArdle explores how a Hawaiian system that made the feedback clearer – via scheduled drug tests and check-ins that pretty much ensured that if you did something bad, you were punished for it – led to dramatically better outcomes for both the system (law enforcement, taxpayers) and the criminals (reduced recidivism).

The take-home here is that providing inconsistent or unclear feedback is ineffective; McArdle argues that most people focus too much on the “punishment” and too little on the “certain” – in fact, fairly innocuous punishments, applied with great certainty, seem to be more effective feedback than very severe punishments enforced randomly or rarely – simply because the former are easier to learn from.

McArdle’s book is a clinic on feedback: given the margin of safety baked into much of our lives, she notes that sometimes we don’t get clear feedback.  (I explore this more in the margin of safety mental model.)

In general, making feedback more salient can work wonders: I discuss a hilarious example in the salience mental model, from Thaler’s aforementioned “Nudge” (NDGE review + notes), which explores how Thaler managed to – for a very low cost – get his graduate student to submit his thesis on time.  Sometimes it’s not how big your incentive is, but how visible you make it.

Feedback x Memory x Structural Problem Solving: Decision Journaling

Moving on from the above discussion about how hard it is to learn from inconsistent punishment, Philip Tetlock makes much the same case in reverse: it’s easier to learn from natural rewards (like whether or not a pass is completed).  He touches on this in the very important, very thought-provoking “Superforecasting” (SF review + notes).

The premise of Superforecasting is the question: why are expert predictions so often inaccurate, and how can ordinary individuals perform better?  Although the thought process involves a lot of models – probabilistic thinking and multicausality, to name a few – one of the major ones that seems so trivial/obvious that it’s easy to overlook is simply: writing stuff down.

One of my favorite mental “files” – not cataloged on this site explicitly – is a running list of “science that doesn’t seem scientific,” in a good way.  

What I mean by that is that while many people conjure up images of PhDs in white lab coats using high-tech equipment to make very precise measurements, real science is often much more pedestrian and accessible, as books ranging from Jennifer Ackerman’s “The Genius of Birds” (Bird review + notes) to Dr. Matthew Walker’s “Why We Sleep” (Sleep review + notes) overview.

One of my favorite stories of all time comes from the latter.  Do you know how scientists determine how deeply worms are asleep?

It’s not a fancy digital electrode or some sort of worm-sized CAT scan.  It’s, verbatim:

“Defined by their degree of insensitivity to prods from experimenters.”

Setting aside the hilarious visual of neuroscientists standing around poking worms – “he’s still asleep.  poke him harder!” “well, okay, but I don’t want to squish him!” – the serious takeaway is that it doesn’t take a PhD or a million bucks worth of lab gear to “do science.”  We can do it with a napkin and a cheap pen.

As Don Norman says in the aforementioned “The Design of Everyday Things” (DOET review + notes),

[On remembering things]: Writing is a powerful technology: why not use it? - Don Norman

It turns out that this is one of the key insights to making feedback more salient in our own lives.

If you’ve spent any time on this site at all, you’ve probably encountered, more than once, my views on memory.  They can be summed up as follows:

Relying on your memory to safeguard important information is like relying on your neighbor’s Chocolate Lab to safeguard your delicious steak. Just don’t do it.

The previous sections dealt with making external feedback visible rather than invisible; this section deals with helping feedback that is visible STAY visible.

Tetlock’s book touches on the fact that since we often don’t write down and evaluate our forecasts, we’re often in the dark as to their actual accuracy.  A table-banging pundit who’s been completely and totally wrong on his last five market pronouncements will thus continue undeterred on his sixth go-around.

A large body of psychological research – discussed in the memory mental model and elsewhere – demonstrates the existence of hindsight bias, or what’s known as creeping determinism: after the fact, what was uncertain ahead of time seems totally obvious in retrospect.

This is not something we can control via agency, but rather a natural cognitive bias, like contrast bias or salience, that we have to be aware of and adjust for through either cognitive or structural problem solving means.  Books like Tavris/Aronson’s “Mistakes Were Made (But Not by Me)” (MwM review + notes) do a great job of driving this home.

For example: we didn’t know if we should do Greek or Mexican for dinner; now that we’re at the Mexican place and there’s a line out the door, of course we should’ve done Greek – this place is always packed on a Friday.  (Note: we’d be making the same argument, in reverse, if we were waiting to get into the Greek place.)

My favorite example comes from Richard Thaler’s “Misbehaving” (M review + notes), where he reframes some classic principal-agent local vs. global optimization problems with a somewhat less moralistic view: a real-world example involving a team of 23 insurance executives demonstrates that faulty memory, or hindsight bias, played a large role in creating suboptimal decision-making in that organization.

It sounds obvious, but simply making a point to write down – in as much detail as is feasible – why you made a decision, and what information went into it, makes it much easier to later come back and evaluate whether or not that decision was appropriate.
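As a concrete (and entirely illustrative) sketch of what this can look like in practice, here’s a minimal decision-journal record in Python, plus a Brier-style score of the kind Tetlock uses to grade forecasts once the outcome is known. The field names and the example entry are assumptions for illustration, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DecisionJournalEntry:
    """One written-down decision: what you expected, why, and (later) what happened."""
    decision: str                     # what you decided to do
    reasoning: str                    # why, and what information you had at the time
    predicted_probability: float      # how likely you thought the good outcome was (0-1)
    date_made: date = field(default_factory=date.today)
    outcome: Optional[bool] = None    # filled in later, once reality weighs in

    def brier_score(self) -> Optional[float]:
        """Squared error between forecast and outcome; lower is better, 0 is perfect."""
        if self.outcome is None:
            return None
        return (self.predicted_probability - float(self.outcome)) ** 2

# Hypothetical example entry:
entry = DecisionJournalEntry(
    decision="Hold position in XYZ through earnings",
    reasoning="Channel checks suggest demand is stable; main risk is margin guidance.",
    predicted_probability=0.7,
)
entry.outcome = False                 # it didn't work out
print(entry.brier_score())            # 0.49 -- clear, unflattering, useful feedback
```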

Some more discussion is available in the process vs. outcome / luck vs. skill model, but even taking this simple step will likely make you meaningfully more effective.  (It’s definitely done so for me.)  It’s an approach used not just by successful professional investors and business managers; it’s even part of empirically-validated cognitive behavioral therapy.

Application / impact: writing things down is a scientifically-validated way to make better decisions: by being able to evaluate our decisions ex post facto without the misleading and incomplete reconstructions of memory, we obtain much clearer feedback to incorporate and respond to.