Tavris/Aronson’s “Mistakes Were Made (but not by me)”: Book Review, Notes + Analysis


Overall Rating: ★★★★★★ (6/7) (standout for its category)

Per-Hour Learning Potential / Utility: ★★★★★★ (6/7)

Readability: ★★★★★★ (6/7)

Challenge Level: 2/5 (Low) | ~240 pages ex-notes (400 official)

Blurb/Description: How is it that we can see everyone else’s mistakes clear as day… but fail to recognize our own?  Tavris and Aronson take a journey through the topics of memory and cognitive biases to find the answer.

Summary: This is one of those books that’s been on my shelf for years because I figured I already knew most of what it said from other reading.  It was a pleasant surprise to finally read; through the lens of “cognitive dissonance,” the two authors provide a breadth of examples exploring how our need to protect our egos/identities leads to selective information processing and selective memory.

Highlights: First, if anyone has any misconceptions about our memories being picture-perfect, this book will thoroughly dispel them.  

Second, I think the material covered here is broad enough that it will force most readers, at some point, to confront the fact that self-justification is an everyone problem (including ourselves), rather than just an “other people” problem.

Third, the book is fairly compact and while the writing isn’t flashy, it’s highly readable.

Lowlights: There are a few drawbacks, and this is one of the weaker 6-star books on the site (it would have been a high 5, but I thought the topic was important enough to merit the higher rating).

The first is that Tavris/Aronson sort of take the “man with a hammer” approach: in many parts of the book, I think there are other psychological dynamics at play that the two either don’t highlight or only touch on summarily; this is perhaps to be expected from a book focusing on one narrow aspect of cognition, but in some cases (such as the cursory reference to the Milgram experiment, and the discussion of spouses’ estimates of their own contribution to housework), I thought providing a broader analysis would have been helpful.  (See the notes.)

Second, I think the book is a little front-heavy; a lot of time is spent beating various examples (politicians lying, repressed memories) nearly to death, and the book dances around the “growth mindset” idea for a while before only giving it (in the context of the book) a little bit of lip service at the end.  In other words, they spend a lot of time talking about the problem and relatively less about the solution, in contrast to, say, Tetlock’s Superforecasting (review + notes) or Thaler’s Misbehaving (review + notes), which both do a phenomenal job of providing deep theoretical context on the cognitive biases but also integrate usable, practical lessons therefrom.

Mental Model / ART Thinking Points: confirmation bias, memory, salience, trait adaptivity, incentives, sunk costs, culture, path dependency, schema, overconfidence, loss aversion, feedback, ego, contrast bias, reciprocity bias, willpower, structural problem solving, storytelling, sample size, authority bias, stress, empathy, utility, bottleneck.

You should buy a copy of Mistakes Were Made (but not by me) if: you want a thoughtful exploration of how contrast bias, memory, confirmation bias, and several other mental models intersect to prevent us from accurately identifying our own mistakes.

Reading Tips: If you feel like a section is starting to get repetitive, feel free to skim; most of the same points are punched home in the various areas of the book, so if one topic is less interesting to you, you probably won’t miss anything.

Second, consider taking some time to watch a video demonstration of the Reid Technique (such as the one my friend sent me a link to; see the notes to pages 140 – 144).  I think it’ll really punch home the “lollapalooza” effect of many of the factors discussed by Tavris/Aronson – as well as some that aren’t.

Pairs Well With:

The Seven Sins of Memory by Daniel Schacter ( 7SOM review + notes).  This is my favorite book on how memory works overall.

Superforecasting by Philip Tetlock (SF review + notes).  Superforecasting goes into the ideas of storytelling, memory, and confirmation bias via a different angle: why do we consistently make bad predictions, and how can we do better?

Misbehaving by Richard Thaler ( M review + notes).  The best book around on cognitive biases, full stop.  (It’s really funny and really practical.)

Reread Value: 4/5 (High)

More Detailed Notes + Analysis (SPOILERS BELOW):

IMPORTANT: the below commentary DOES NOT SUBSTITUTE for READING THE BOOK.  Full stop. This commentary is NOT a comprehensive summary of the lessons of the book, or intended to be comprehensive.  It was primarily created for my own personal reference.

Much of the below will be utterly incomprehensible if you have not read the book, or if you do not have the book on hand to reference.  Even if it was comprehensive, you would be depriving yourself of the vast majority of the learning opportunity by only reading the “Cliff Notes.”  Do so at your own peril.

I provide these notes and analysis for five use cases.  First, they may help you decide which books you should put on your shelf, based on a quick review of some of the ideas discussed.  

Second, as I discuss in the memory mental model, time-delayed re-encoding strengthens memory, and notes can also serve as a “cue” to enhance recall.  However, taking notes is a time consuming process that many busy students and professionals opt out of, so hopefully these notes can serve as a starting point to which you can append your own thoughts, marginalia, insights, etc.

Third, perhaps most importantly of all, I contextualize authors’ points with points from other books that either serve to strengthen, or weaken, the arguments made.  I also point out how specific examples tie in to specific mental models, which you are encouraged to read, thereby enriching your understanding and accelerating your learning.  Combining two and three, I recommend that you read these notes while the book’s still fresh in your mind – after a few days, perhaps.

Fourth, they will hopefully serve as a “discovery mechanism” for further related reading.

Fifth and finally, they will hopefully serve as an index for you to return to at a future point in time, to identify sections of the book worth rereading to help you better address current challenges and opportunities in your life – or to reinterpret and reimagine elements of the book in a light you didn’t see previously because you weren’t familiar with all the other models or books discussed in the third use case.

Page 2: This book goes a step beyond confirmation bias (which, generally, means seeking out or noticing only evidence that confirms your existing view.)  Tavris and Aronson are focused on a different phenomenon: why do we refuse to acknowledge our own mistakes, even when presented with undeniable proof thereof?

Page 6: Tavris/Aronson are more interested in the underlying mechanism rather than the scope or correctness of self-justification.  In a fantastic metaphor, they note that the area between conscious lying and unconscious self-justification is:

“patrolled by that unreliable, self-serving historian – memory.”  

As with Munger commenting on the Milgram study, I’ll occasionally add some psychological factors that I think the authors don’t focus on (as I do for many other books).  

Here, Tavris/Aronson (hereinafter T/A) discuss how husbands and wives tend to overestimate the percentage of housework they do. T/A attribute this to self-serving bias / memory, but I think that salience bias also plays a role here.  When I spend a few hours making dinner or cleaning, I remember it… when someone else in the family does so, I probably don’t notice at all.

The scary part is that T/A note that over time:

“we may come to believe our own lies, little by little.”  

This is a clear example of contrast bias.  I’ll talk a little bit later about “just-noticeable differences,” as discussed by Richard Thaler in Misbehaving (M review + notes).

Page 8: interesting notes here on culture / status quo bias; see also Feynman on pages 184 – 185 of The Pleasure of Finding Things Out ( PFTO review + notes) and Sunstein/Thaler on pages 58 – 59 of Nudge (review + notes).  T/A allege that self-justification plays a factor here.

Page 9: T/A note that self-justification, like most other cognitive biases, exists because it’s adaptive: considering that we’re prone to mistakes, if we kept track of every little mistake and beat ourselves up over them, we’d be basket cases.

That doesn’t mean we shouldn’t counteract it; T/A note:

“Mindless self-justification, like quicksand, can draw us deeper into disaster.” – Carol Tavris + Elliot Aronson

Pages 13 – 15: T/A believe that cognitive dissonance – i.e. the tension between two psychologically inconsistent cognitions – drives self-justification, by forcing us to discard one in favor of the other.

They note that cognitive dissonance interacts with, and sometimes overpowers, the classical “operant conditioning” incentives model of rewards good, punishment bad.  As we’ll see, in some cases, going through a lot of pain can lead to people being happier with the end result or at least showing commitment bias to it, a la hazing.

Page 17: Commitment bias can be viewed as an example of path dependency interacting with schema: T/A cite an experiment demonstrating that those who went through a severe / more embarrassing “initiation rite” (in a lab) rated a group totally differently, and far less objectively, than those who had not gone through one.

They point out an important nuance here: it’s not that pain is good, but that rather when you voluntarily submit yourself to a lot of discomfort to get to a certain end, you will value that end much more.  If you didn’t, you’d have cognitive dissonance: why did I put myself through all that for… this?

Pages 18 – 20: T/A note confirmation bias and overconfidence here; they cite a study that finds that the logical-processing areas of our brain literally shut down when we’re exposed to conflicting information.  In fact, we can actually grow more confident in our own beliefs after being exposed to conflicting evidence.  (We return to how to get around this unfortunate factor later.)

Page 22: This is a good example of the commitment bias effect discussed on page 16: immediately after making a costly or irrevocable decision, we like that decision a lot better.

Again, I think there are other psychological models that apply here: specifically, I’d cite loss aversion and the endowment effect, both discussed extensively in Thaler’s Misbehaving (M review + notes).  Commitment bias, of course, does explain why we’d like a car better after we’ve purchased it, but it doesn’t explain why we’d place an inexplicably high value on a generic mug or pen that we were handed by an experimenter five minutes ago.

Page 26: Discussing the idea of “catharsis,” T/A note that conventional wisdom is wrong: decades of experimental research find that expressing anger just makes us angrier.

When catharsis takes the form of direct aggression, it creates a feedback loop that leads to more aggression, thanks to the need to justify the first hurtful act.

This seems like a good place to reference the stress / humor / gratitude mental model.  Two good books that touch on this issue are Laurence Gonzales’s Deep Survival (DpSv review + notes) and Shawn Achor’s The Happiness Advantage (THA review + notes).

The former talks about how humor can deactivate stress and reduce amygdala activity; the latter extensively examines how cultivating practices like gratitude and  empathy can make us more emotionally stable.

Pages 28 – 29: This can be inverted: when you do something nice for someone, you think more highly of them.  T/A cite the famous Benjamin Franklin example of asking to borrow a book from someone.

Benjamin Franklin actually discusses that in The Autobiography of Benjamin Franklin (ABF review + notes).

“He that has once done you a kindness will be more ready to do you another, than he whom you yourself have obliged.” – Benjamin Franklin

It’s a worthwhile lesson to remember.

Of course, it works via  inversion too: something to be careful of…

Page 30: The underlying mechanism here is that when we’re faced with a blatant disparity between our identity (ego) and external evidence, it is much less painful to irrationally interpret the external evidence rather than take the ego blow.

T/A also cite Philip Tetlock’s research.  There are lots of useful parallels between Superforecasting (SF review + notes) and Mistakes Were Made.  Specifically, one of Tetlock’s “mantras” is:

“For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded.” – Philip Tetlock

In the context of Mistakes Were Made, obviously cultivating a habit of testing beliefs against reality helps mitigate overconfidence, etc.

Page 31: T/A note that the cognitive dissonance mechanism applies to both people with high and low self-esteem.

Page 33: T/A bring up an interesting “pyramid” model of self-justification, which goes back to the feedback loop referenced earlier.

Their view is that the first action in a direction tends to create strong reinforcement for continuing to move in that direction, thanks to our need to justify the actions.

I would argue there’s also a local vs. global optimization and hyperbolic discounting angle: it’s easier to keep moving down the pyramid than up it.  The top of the pyramid is where you want to be, but at any given moment, it’s easier to go down…

As discussed in other books ( Ordinary Men, for example – OrdM review + notes), this blurs the lines between “good guys” and “bad guys.”

Tavris/Aronson circle around the growth mindset for much of the book and get to Dweck at the end, but I’ve personally found it to be the most effective antidote to self-justifying activity; the aforementioned Tetlock references it on pages 174 – 176 of Superforecasting (SF review + notes) as well, and discusses the identity phenomenon elsewhere with a metaphor of Jenga blocks (you don’t want to pull out the bottom Jenga blocks).  I’d be curious to see if there’s any research linking the two.

Why is it so effective?  Well, when you have a “fixed mindset,” acknowledging “I made a mistake” translates to “I AM a mistake.”  As Brene Brown discusses in I Thought It Was Just Me, that can turn “guilt” into “shame,” which isn’t healthy.

On the other hand, when you have a growth mindset, you take the seemingly-paradoxical approach espoused by philosopher W. V. O. Quine via page 429 of Jordan Ellenberg’s How Not To Be Wrong (HNW review + notes):

“To believe something is to believe that it is true; therefore a reasonable person believes each of his beliefs to be true; yet experience has taught him to expect that some of his beliefs, he knows not which, will turn out to be false.  

A reasonable person believes, in short, that each of his beliefs is true and that some of them are false.”

I also think – now I’m just speculating – that being a low-ideology person who approaches the world as a “fox” and not a “hedgehog” probably helps as well.  If your life is a latticework, replacing one incorrect node on the latticework isn’t terribly difficult; if your life is built on a pillar of one single ideology, it’s hard to replace.

Page 37: contrast bias comes into play here; the Milgram experiment is cited.  T/A’s takeaway (obviously not a comprehensive interpretation) is that while few people would go from 0 to “XXX DANGER,” if you go from 0 to 10, it’s easier to go from 10 to 20, and so on all the way up the chain.  Boiling a frog slowly, basically.  See Thaler on “just noticeable differences” in Misbehaving (M review + notes).

Page 38: T/A believe that “a richer understanding of how and why our minds work as they do” and “being mindful of our behavior and the reasons for our choices” can help us avoid self-justification.

Disaggregation.

This reminds me of the Richard Feynman bit in The Pleasure of Finding Things Out (PFTO review + notes) where he talks about how deconstructing a flower makes it more, rather than less, beautiful…

Page 41: on incentives, and the fact that we all have a biased schema in one way or another.

Page 42: T/A cite social psychologist Lee Ross, who describes a phenomenon he calls “naive realism” – which T/A summarize as “the inescapable conviction that we perceive objects and events clearly, ‘as they really are.’”

Obviously, we don’t.

Page 43: framing – people’s support of any given policy has a lot to do with which party proposed it.  (This should not come as a surprise!)

Page 46: contrast bias and incentives again; see also Groopman in How Doctors Think (HDT review + notes).

Page 47!: T/A discuss the shift in perception of science from a knowledge-driven enterprise to a commercial enterprise that is discussed by others, including Meredith Wadman in the thoughtful The Vaccine Race (TVR review + notes) through the lens of the Hayflick vials.  Note, however, that the famous Salk quote – “could you patent the sun?” – tells only half the story.  David Oshinsky’s Polio: An American Story (PAAS review + notes) tells the other half, including that Salk later did, ahem, take a more self-interested view on the whole patent thing.

Again, see Groopman.

Pages 50 – 51: T/A review the horrible case of Andrew Wakefield’s fraudulent autism vaccine papers (an example of mistaking correlation for causation).  Wakefield, of course, had incentives in the form of a nearly million-dollar payout from lawyers.

I didn’t realize this until recently, but they actually cite Dr. Paul Offit’s Deadly Choices (VAX review + notes) in the endnotes.  Deadly Choices is a great read on salience and a number of other models, not to mention a profoundly important public health book, given that vaccines are extremely safe, essentially costless, and extremely effective at preventing serious illnesses.

Pages 52 – 53: T/A bring up the reciprocity bias here as it relates to gift-giving.  (Groopman again!)

Page 57: Of course, the interesting thing about prejudices is that we see them clearly in other people, but not in ourselves.  (I’m working on it.) We have prejudices because categories – or, in common parlance, “stereotypes” – save effort and generally help us make better decisions.

Pages 58 – 59: Tribalism and us-vs-them thinking: T/A attribute this to our need for belonging and identity.  There’s a trait adaptivity angle here too.

Page 61: I laughed at the bit about lobsters vs. insects.  (Lobsters are, essentially, giant insects, as this guy humorously points out.)

Page 63: willpower depletion comes up here: it makes us more honest – in perhaps a not-so-nice way; our prejudices are more likely to come out.

Page 66: on echo chambers: T/A provide the advice of:

“we need a few trusted naysayers in our lives, critics who are willing to puncture our protective bubble of self-justifications and yank us back to reality if we veer too far off.”

This is, in some senses, the structural problem solving approach to having a narrow schema: keep people around who have a different schema.  They cite Abraham Lincoln as one of the few politicians who was willing to do this.

Pages 72 – 73: Lots of good stuff about memory here, including vividness, salience bias, and so on.  T/A note that metaphors of our memories that draw on computer analogies are:

“popular, reassuring, and wrong.  Memories are not buried somewhere in the brain, as if they were bones at an archaeological site.”

Memory is “reconstructive” – recreated every time we access it – such that it’s easy to confuse one thing with another, and, importantly, as time passes, we face “source confusion” – i.e. we can’t distinguish memory from other information from elsewhere.  Daniel Schacter’s The Seven Sins of Memory (7SOM review + notes) provides great detail on how and why this occurs.

This happened to me just while I was writing these notes: remember the earlier quote from W. V. O. Quine?  I could’ve sworn it was in Tetlock’s Superforecasting… because that seemed like where it should be from.  Nope. Instead, it was from Ellenberg’s How Not To Be Wrong (HNW review + notes).

This, incidentally, is the source of much accidental plagiarism.  As time passes, material we find interesting or that we use a lot becomes familiar, whereas the original source (likely viewed only once, or a few times) fades.  

So we think of a lot of ideas as our own, when, really, they’re someone else’s. I was shocked when I reread The 7 Habits of Highly Effective People ( 7H review + notes) and realized that many core parts of my belief system – that I thought I’d come to somewhat independently – were pretty much straight from Covey, and things that I hadn’t thought about prior to reading that book.

This just reinforces the importance of surrounding ourselves with good influences rather than bad ones.


Pages 76 – 77: Lots of important stuff here.  First, T/A address the phenomenon of how we perceive our parents and their influence on our lives; second, they address our storytelling tendency of creating a coherent narrative that may or may not always be correct, thanks to confirmation bias.

The money quote here is T/A citing some research by Barbara Tversky (wife of the late Amos) and Elizabeth Marsh, which finds – chillingly – that over time, we remember false stuff we add and forget real stuff we don’t note.  There’s a big hindsight bias element here, of course, and it’s why decision journaling has power.

Pages 82 – 84: We can reconstruct astonishingly elaborate false memories that “nonetheless feel vividly, emotionally real.”

Pages 86 – 87: T/A touch on suggestibility and “imagination inflation” – if we’re repeatedly asked to imagine something, and it is suggested that it actually happened, it can lead to memories being formed.

Pages 93 – 94: T/A:

“An appreciation of the distortions of memory, a realization that even deeply felt memories might be wrong, might encourage people to hold their memories more lightly, to drop the certainty that their memories are always accurate, and to let go of the appealing impulse to use the past to justify problems of the present.”

Bad memories can help justify a lack of agency.  (T/A go through a lot of stories about how “repressed memories” are more or less bullshit.)

Pages 99 – 101: Okay, this is fascinating on the culture thing.  The discussion of the general nonsense of recovered memories, the social proof-driven epidemiology of trends, and the suggestibility of children’s memories is interesting.  But even more interesting is the fact that the (flawed!)

“assumptions that ignited [the epidemic] remain embedded in popular culture.”  

For example, I and many of my friends jokingly talk about repressing traumatic experiences all the time.  None of us were even sentient when this epidemic was going on.

Page 108: Another money quote from research psychologist John Kihlstrom, on overconfidence:

“The weakness of the relationship between accuracy and confidence is one of the best-documented phenomena in the 100-year history of eyewitness memory research.”

Page 109: the bit about Freud is funny: “what a terrific theory!”  T/A note that Freud’s theory is unfalsifiable by design.  See also Tetlock, and Ellenberg on conspiracy theories, and Dr. Matthew Walker’s Why We Sleep (Sleep review + notes), which contains a brief but funny anecdote of how he demonstrates to people that Freudian psychoanalytic dream interpretation is bullshit.

Pages 110 – 111: Tavris/Aronson, discussing psychotherapy and “repression,” note that one danger is a “closed loop” – observation and intuition without testing can lead you in the wrong direction.

See  scientific thinking, and the importance of “exposing ideas to reality.”

Thaler discusses a few examples of this in Misbehaving (M review + notes).  For example, it was long assumed that firms were profit-maximizing… turns out they’re actually revenue-maximizing.

Similarly, Thaler and some collaborators set out to determine how cabbies actually respond to higher/lower-wage days.  

Astonishingly, the opposite of classical economics holds true: some cabbies (particularly newer ones) tended to work on any given day until they’d made a targeted amount of money, then quit, paradoxically leading to working more hours on low wage-per-hour days and fewer hours on higher-wage-per-hour days.

More experienced drivers tended to not do this, but Thaler points out there are, of course, behavioral factors at play here: justification to a spouse, as well as an external self-control measure… sort of the same (my analogy) as a calorie counter on an elliptical.

Page 112: Richard McNally, a psychology professor at Harvard and an author (Remembering Trauma), finds that far from being repressed, traumatic memories are often intrusive.  (See also Schacter’s The Seven Sins of Memory – 7SOM review + notes.)

Pages 114 – 117: Discussing clinical evaluation of children who were (and were not) molested, T/A note that many evaluators often fell prey to confirmation bias as well as survivorship bias, much the same way Ellenberg discusses in How Not To Be Wrong ( HNW review + notes) from the “where are the bullet holes” through the Baltimore stockbroker problem.

Many of the “symptoms” believed to be common to molested children (fear of the dark, bedwetting, sexual curiosity) are in fact close to as common in children with no history of abuse; conversely, many children who’ve been abused show no appreciable symptoms whatsoever.  Kids’ memories are often suggestible (as discussed earlier).

Thus, it’s difficult to diagnose with any certainty whether a child has or has not been abused… and yet T/A note that many therapists specialized in this area and were “extremely confident” in their assessments and would, in fact, persist in badgering children until they agreed with the conclusions the evaluators had already come to.  T/A’s conclusion on overconfidence and scientific thinking:

“Science is a form of arrogance control.”

Page 119: When led using suggestions, social proof, and reinforcement, three-year-olds agreed with 80% of false statements, and 4-6 year olds agreed with half.  They even formed memories of completely false events.

Pages 129 – 131: It’s not just therapists; police officers and prosecutors are subject to the same psychological phenomena.  T/A review various statistics (and chilling anecdotes) on false confessions.

Pages 134 – 136: T/A discuss the potential tendency of both detectives and potential jurors to make an initial snap judgment, create a story about what happened, and then (confirmation bias) only seek out evidence that confirms that story.

They don’t (here) get into the antidote, but I think the answer is the “counterfactual thinking” (probabilistic thinking) and alternative-hypothesis idea discussed in Tetlock’s Superforecasting (SF review + notes) as well as Howard Marks’ The Most Important Thing (MIT review + notes): by avoiding committing to a “story” or “belief” for as long as possible, we maintain more objectivity.

There’s also a famous Darwin quote on this, I think.

Pages 140 – 144: T/A discuss the “Reid Technique” for interrogation, which they describe as a “self-fulfilling prophecy” created by the interrogator’s “presumption of guilt.”  And holy shit T/A were not kidding.  

I touched base with a friend of mine who graduated from a top-tier law school and has worked in the court system.  His take:

“The Reid Technique is very questionable.  It can convince emotionally compromised or unintelligent people to crack and confess to things they didn’t do.  It’s creepy to watch.”

He sent along this instructional video, which is terrifying on several levels.  First, confirmation bias is clearly on display; the presenter suggests that completely normal behaviors (such as suspects trying to address a police officer politely rather than just shouting at them) are signs of guilt.

Second, many of the core elements of the “technique” are, basically, badgering – refusing to let people answer when they aren’t saying something you want, and, in the key “alternative question” step, posing one of those lose-lose binary questions a la “did you stop beating your wife?”  (In this case: did you plan to stab this guy or did it just happen?  Suspect: I didn’t plan nothing. Interrogator: great, so it just happened!)

Even if you don’t have time to watch the full video, watch parts of it… again, you’ll see that T/A aren’t kidding when they say that “by the logic of this system, the only error the detective can make is failing to get a confession.”

Page 146: T/A note that most innocent people are not aware that police are allowed to lie to them; this creates cognitive dissonance between the authority bias of the police telling you something, and the truth of not having committed the crime.

(Also, of course, the Reid Technique makes it clear that the interrogated person should be in uncomfortable circumstances, so there are stress-induced factors at play.)

Pages 152 – 155: So what’s the solution to all of this?  To introduce training about cognitive biases and discourage overconfidence.

Again, similar to the “confidence is sexy” points I discuss in relation to Superforecasting, T/A note that probabilistic testimony is often looked down upon: “many judges, jurors, and police officers prefer certainties to science.”


Incentives x overconfidence.  T/A suggest, via law professor Andrew McClurg, an AA-like social proof program where young police officers are paired with highly ethical older officers.

Pages 159 – 161: Honestly, I would’ve attributed “keep your eyes wide open before marriage, and half shut afterward” to Munger.  Apparently it’s Ben Franklin!

T/A start talking about marriages, and how cognitive dissonance works there.  They take an example from Andrew Christensen and Neil Jacobson’s book Reconcilable Differences.

Pages 161 – 165: Have I taken sides yet?  Yeah – Debra’s. Frank sounds like a classic guy without much empathy.  🙂

(Tongue-in-cheek, of course… storytelling and confirmation bias at play.)

Pages 166 – 168: T/A’s punchline here is that self-justification in the context of relationships can come in two forms: “I’m right and you’re wrong” as well as “Even if I’m wrong, too bad; that’s the way I am.”

People also develop “implicit theories” of others’ behavior and only seek out confirming evidence.

Page 169: T/A mention fundamental attribution error here without explicitly using the term: our tendency to attribute our successes to skill and failures to circumstance, and vice versa.

Page 171: T/A get close to fixed mindset here but not quite.

Also, the funny thing about Mistakes were Made (but not by me) is that it’s easy to read the book with “automatic thoughts” of the described phenomena being “other people problems” – i.e. reading the book as an exercise in confirmation bias, explaining why everyone else is so stupid… while overlooking our own flaws.

So, this is one of those pages that strikes a little too close to home for my liking.  Ironically, when I first read this book, I was dealing with a very intense, very emotional “breakup” type situation with someone who I was previously very close to but realized it was a toxic relationship that was dragging me down.  (It was a friend, not a romantic type relationship.)

For months, I’d been bending over backwards to work around this person’s idiosyncrasies and taking all of the blame for anything/everything that went wrong (which was a lot, constantly – although, thanks to cognitive dissonance reduction, I refused to recognize it.)

When I eventually hit the critical threshold of “can’t take any more abuse,” pretty much the exact phenomenon described here by T/A took hold: any further interactions with the person, and to some extent my last interactions with them, were (at least partially) “no longer an effort to solve a problem or even to get the other person to modify his or her behavior; it’s just to wound, to insult, to score.”  This leads to “the most destructive emotion a relationship can evoke: contempt.”

Yup.  If the aforementioned person were to reach out to me with anything other than a groveling, abject apology, my response would start with something along the lines of “you’re a worthless pile of subhuman dog [censored] and you can go [you know what.]”  And go downhill from there.

Contempt perfectly describes how I feel about this person.  Sometimes when I’m not thinking about anything in particular, my mind starts penning a particularly vicious email that I feel like I’d love to send but I know wouldn’t actually make me feel any better.

Is that something I’m proud of?  No, of course not. Do I know what the solution is, other than extricating myself from that sort of toxic relationship far earlier next time?  Not really. I’m still thinking about it.

But I think the first step to getting maximum mileage out of this book is adopting the growth mindset and taking a long, hard look at the parts that apply to ourselves… especially the ones that are really uncomfortable.

Anyway, there are parallels between what T/A discuss here, and the growth mindset, and some of Brene Brown’s stuff, insofar as criticizing people’s identity rather than their behavior is usually counterproductive.

Page 173: Research by John Gottman finds that the “magic ratio” of positive to negative interactions is 5:1.  Similar conclusions are cited in Shawn Achor’s “The Happiness Advantage” (THA review + notes).

Pages 176 – 178: Yay hindsight bias!  T/A here discuss the “revisionist power of memory,” noting that divorced couples often have trouble remembering why they married in the first place… they also discuss the phenomenon of staying in relationships by justifying that “it’s really not that bad.”

See also Thaler’s comments on the Coase theorem with regards to negotiation out of court… humans vs econs.

Pages 180 – 183: Again, here, Tavris/Aronson kinda get to the growth mindset without making it explicit: their “model couple” doesn’t allow self-justification to get in the way of progress.  They have empathy for each other and try to compromise.

In a sense, it’s a utility focused paradigm: it’s not about who is right or wrong, but rather what is going to work or not work?

Page 186: The Diane/Jim bit here is, again, one of those things that strikes too close to home for me: the kind of “genuine, heartfelt apology” I want is more than the other person would ever be willing to give.

Pages 189 – 192: T/A here discuss the vicious-feedback-loop nature of self-justification, and point out an important concept at the end: in other words, the schema bottleneck.

T/A note that “pain felt is always more intense than pain inflicted,” which explains why “the remarkable thing about self-justification is that it allows us to shift from [victim to perpetrator] without applying what we have learned from one role to the other.”  

Salience, and also possibly local vs. global optimization.

Pages 194 – 196: More fundamental attribution error here, as well as schema bottleneck.  T/A note that in the instances when perpetrators can’t deny their actions anymore, they view it as an “isolated incident” and sweep it under the rug.  To them, it’s in the past; to their victims, not so much… they don’t have the capacity to understand how the other side feels.

Often, a lot of this has also been hidden “under the bed.”  The flip side of letting things slide…

Pages 198 – 199: T/A utilize cognitive dissonance theory to explain Abu Ghraib-like behavior, and to point out that self-justification is actually stronger for those with higher self-esteem.

Page 201: Here’s the “Ordinary Men” phenomenon: T/A note that one of the most “thoroughly documented” findings in psychology is that brutality is usually “committed by ordinary individuals” rather than sadists or psychopaths.  It’s hard to accept because we want to think of them as “others.”  On contrast bias and fundamental attribution error.  See “Ordinary Men” (OrdM review + notes).

Pages 209 – 210: Unilateral forgiveness is non-awesome (particularly if it’s on the victim’s side, because it just leads to continued unfair/callous behavior by the perpetrator).  Unsurprisingly, both sides agreeing to focus on moving forward is the best solution… and one that’s nontrivial.

Pages 214 – 215, Page 217: Want to make an impact?  Say “you were right, and I was wrong.”  

This is, by the way, Munger-approved.  And Dale Carnegie-approved: see “How To Win Friends and Influence People” (HWFIP review + notes).

T/A point out (in other words) that this is somewhat of a local vs. global optimization problem: at any given moment, it’s easier to continue on the bad path than bite the bullet and switch over to the good one, but cumulatively, we’d save ourselves a lot of trouble if we just admitted our mistakes…

Page 219: How powerful is an admission of guilt and an apology?  T/A note that studies of hospitals find that admissions of guilt and implementation of preventative measures make patients less likely to sue.  On empathy.

Pages 221 – 222: Again, they don’t explicitly mention the growth mindset, but pretty much back it up.

Pages 225 – 226: The conclusion of their “pyramid” model is that we need to be more mindful of states of dissonance and not take the first step down the pyramid.

Page 227: a nice practical example of the right sort of approach.  See also Tetlock.

Pages 231 – 234: finally they bring up the growth mindset – also, some useful advice on how to talk people down from the ledge if they’ve, for example, been scammed.

Page 242: Footnote 12 regarding people’s perceptions of things they own vs. the things they don’t: see also the endowment effect and loss aversion in Thaler’s “Misbehaving” (M review + notes).

Page 248: More on vaccines.  See Dr. Paul Offit’s “Deadly Choices” (VAX review + notes).

Page 256: A reference to CBT – cognitive behavioral therapy.  See Dr. Judith Beck’s “Cognitive Behavior Therapy” (CBT review + notes).

Page 257: perhaps some interesting book recommendations here.

 

First Read: spring 2018

Last Read: spring 2018

Number of Times Read: 1

Planning to Read Again?: yes

 

Review Date: spring 2018

Notes Date: spring 2018