Jerome Groopman’s “How Doctors Think”: Book Review, Notes + Analysis


"How Doctors Think" by Jerome GroopmanOverall Rating: ★★★★★★★ (7/7) life-changing)

Learning Potential / Utility: ★★★★★★★ (7/7)

Readability: ★★★★★★★ (7/7)

Challenge Level: 2/5 (Easy) | ~280 pages ex-notes (319 official)

Blurb/Description: Dr. Jerome Groopman asks and answers a critical question: how do cognitive biases lead doctors to make the wrong diagnoses, and how do they overcome those biases to make the right ones?

Summary: Atul Gawande’s “The Checklist Manifesto” (TCM review + notes) is an excellent book in its own right, but it’s beyond strange to me that Checklist gets recommended routinely by investors and yet I’ve never heard anyone say a thing about How Doctors Think.

From my perspective, for investors and businesspeople (not to mention anyone who will eventually seek medical care for something – i.e. all of us), How Doctors Think is a much more broadly applicable book, because it addresses errors of judgment (which are harder to get rid of) rather than technical errors (which can be easily dealt with via forcing functions like checklists).

How Doctors Think is not only extremely well-written and engaging, but also offers extensive depth and breadth without wasting pages or diving into trivial/irrelevant details to do so.  It spans the gamut of cognitive biases and provides some simple, concrete recommendations for effectively dealing with them from the perspective of both patient and doctor.

Highlights: Groopman acts more as curator than lecturer, letting the experiences of numerous doctors in numerous fields (along with some of his own) display the range of cognitive biases that can interfere with appropriate diagnoses, and providing a number of thoughtful practitioners in numerous areas of specialty a platform to share their own methods for getting it right.  He touches on a wide array of important topics (such as the role of data, statistics, and empirics in decision-making) that extend well beyond cognitive biases.

Lowlights: None, really.

Mental Model / ART Thinking Points: empathy, multicausality, Bayesian reasoning, inversion, framing, selection bias, decision journaling, cognition vs. intuition, liking bias, desire bias, authority bias, vividness, salience, recency, overconfidence, probabilistic thinking / counterfactuals, priors / conditional probabilities, structural problem solving, sleep / rest, incentives, social proof, man-with-a-hammer, action bias, opportunity cost, feedback, n-order impacts, hyperbolic discounting, loss aversion, disaggregation

You should buy a copy of How Doctors Think if: you want a highly practical view of the real-world consequences of cognitive biases in a novel context (medicine).

Reading Tips: None; this is a highly readable book that requires no special instructions.

Pairs Well With:

“The Checklist Manifesto” by Atul Gawande (TCM review + notes) – an excellent book on how doctors can use a simple structural problem-solving tool (a checklist) to eliminate or greatly reduce technical errors.  Although Groopman attributes more errors to misdiagnosis than to technical error, that doesn’t mean we shouldn’t try to combat both!

“Misbehaving” by Richard Thaler (M review + notes) – a witty and page-turning yet highly educational journey through behavioral economics by one of its founders.  The best theoretical underpinning for the sorts of cognitive biases discussed here.

“The Pleasure of Finding Things Out” by Richard Feynman (PFTO review + notes) – witty anecdotes and discussions from a Nobel Prize-winning physicist, who also extensively discusses the importance of acknowledging doubt and uncertainty.

“The Signal and the Noise” by Nate Silver (SigN review + notes) – a very thoughtful book about how to integrate data with human insights; provides the flip side of Groopman’s general skepticism toward Bayesian reasoning.

David Oshinsky’s “Bellevue” (BV review + notes) – a historical view on how doctors thought… correctly or incorrectly… over three centuries of medicine.

Reread Value: 5/5 (Extreme)

More Detailed Notes (SPOILERS BELOW):

IMPORTANT: the below commentary DOES NOT SUBSTITUTE for READING THE BOOK.  Full stop.  This commentary is NOT a comprehensive summary of the lessons of the book, nor is it intended to be.  It was primarily created for my own personal reference.

Much of the below will be utterly incomprehensible if you have not read the book, or if you do not have the book on hand to reference.  Even if it were comprehensive, you would be depriving yourself of the vast majority of the learning opportunity by only reading the “Cliff Notes.”  Do so at your own peril.

I provide these notes and analysis for five use cases.  First, they may help you decide which books you should put on your shelf, based on a quick review of some of the ideas discussed.  

Second, as I discuss in the memory mental model, time-delayed re-encoding strengthens memory, and notes can also serve as a “cue” to enhance recall.  However, taking notes is a time-consuming process that many busy students and professionals opt out of, so hopefully these notes can serve as a starting point to which you can append your own thoughts, marginalia, insights, etc.

Third, perhaps most importantly of all, I contextualize authors’ points with points from other books that either serve to strengthen, or weaken, the arguments made.  I also point out how specific examples tie in to specific mental models, which you are encouraged to read, thereby enriching your understanding and accelerating your learning.  Combining two and three, I recommend that you read these notes while the book’s still fresh in your mind – after a few days, perhaps.

Fourth, they will hopefully serve as a “discovery mechanism” for further related reading.

Fifth and finally, they will hopefully serve as an index for you to return to at a future point in time, to identify sections of the book worth rereading to help you better address current challenges and opportunities in your life – or to reinterpret and reimagine elements of the book in a light you didn’t see previously because you weren’t familiar with all the other models or books discussed in the third use case.

Page 3: Groopman opens the book with a great example of empathy as it relates to listening.

Page 4: Groopman discusses nostalgia / affect bias.

Pages 5-6: Groopman discusses the pitfalls of decision trees and algorithms: while they have their uses, they:

“quickly fall apart when a doctor needs to think outside their boxes, when symptoms are vague, or multiple and confusing, or when test results are inexact.  In such cases […] algorithms discourage physicians from thinking independently and creatively.  Instead of expanding a doctor’s thinking, they can constrain it.” 

In other words, multicausality and  probabilistic thinking as well as the  dose-dependency of disaggregation and  structural problem solving.

He goes on to discuss evidence-based medicine, and notes that:

“statistics cannot substitute for the human being before you; statistics embody averages, not individuals.” 

Sample size issue; see also Achor on outliers in “The Happiness Advantage” (THA review + notes) or Ellenberg in “How Not To Be Wrong” (HNW review + notes).

Algorithms are also unhelpful when there’s a problem with no precedent.

Page 8: by asking the right questions, patients can help doctors reach better conclusions

Page 9: Groopman believes, on  cognition vs. intuition, that:

“cogent medical judgments meld first impressions – gestalt – with deliberate analysis.”

Pages 11 – 12: Groopman notes that few doctors actually literally use Bayesian reasoning.  Which makes sense, because it’s pretty hard to quantitatively decide on priors (to Silver’s chagrin…)

Pages 12 – 14: empathic listening in action again

Page 15: inversion (ruling out possibilities)

Pages 17 – 18: on the importance of asking open-ended questions.   Framing.

Pages 19 – 21: selection bias with sick patients (see also page 91 of Atul Gawande’s The Checklist Manifesto, or David Oshinsky’s “Bellevue” – BV review + notes – where selection bias was a recurring theme over much of Bellevue’s history).  Reviewing the data in 1900, one city health official noted that other hospitals were

“sending the poor, dying patient to Bellevue in order to lessen their [own] death rates.”  

Good doctors have to be “the whole package” – good at both the analytical/medical side, and at communicating.  He also notes, via the extended story about the misdiagnosed celiac patient, the importance of seeing the whole picture intuitively.

Also an important note about the importance of admitting mistakes to yourself and analyzing them so you can learn from them – i.e. decision journaling.

Page 22: framing again – how a doctor is presented with information meaningfully impacts how they perceive and interpret that information.

Page 24: where this book differs from Gawande’s “The Checklist Manifesto” (TCM review + notes), making the two complementary, is that Groopman is not focused on technical errors like “prescribing the wrong dose of a drug.”  Instead, Groopman is digging into the issue of misdiagnosis; he notes that:

“experts […] have recently concluded that the majority of errors are due to flaws in physician thinking, not technical mistakes.”  

This isn’t a problem of lack of knowledge:

“inadequate medical knowledge was the reason for error in only four [of a hundred] instances […] rather, [doctors] missed diagnoses because they fell into cognitive traps.”

For various reasons, but primarily this one, I found this book to be far more applicable to investing and business than The Checklist Manifesto – none of my investment mistakes, and very few of those of the people I know, have been due to equivalent “technical errors” like entering the wrong number in a spreadsheet or forgetting to account for how leveraged a company is.  

Rather, all or most of the relevant information is usually collected and analyzed, but then the judgment made thereupon is flawed (resulting in a bad outcome).  A checklist couldn’t have prevented any of the mistakes I’ve ever made as an investor, but a better thought process could have (and, subsequently, has.)

Both books are worth reading, but Groopman’s is, for my purposes, far more valuable.

Page 25: a great example of lollapalooza effects: liking bias combined with commitment bias combined with framing

Page 32: on the differences between theory and practice:

“Straight A’s when I was a student, play-acting.  Now, in the real world, I gave myself an F.”

Pages 34 – 35: on heuristics and pattern-recognition: despite the extensive analytical/deductive process taught in school, in the real world, doctors tend to make decisions based on rapid perception.   cognition vs. intuition / emotion

Page 36: as in many other fields, school teaches that doctors are econs, but in the real world they’re humans.  Groopman in conversation with another doctor:

“Most people assume that medical decision-making is an objective and rational process, free from the intrusion of emotion […] yet the opposite is true.”

Yay  humans vs. econs and  cognition vs. emotion.

Page 39: liking bias again, and also the idea of cognitive arousal

Page 40: another useful summary of his thesis about technical errors vs. errors in thinking, as earlier discussed on page 24

Pages 42 – 46: on framing and (dis)liking bias: Groopman discusses extensively how stereotypes of patients can box doctors into a certain diagnosis, or, alternatively, prevent them from reaching the right one.  The solution is to push for alternative hypotheses and, when possible, test them.

Pages 47 – 53: desire bias and liking bias are discussed here using the example of another doctor, as well as Groopman himself, who liked a patient and didn’t want to put him through discomfort, and therefore failed to lift him up for a full-body inspection that would have prevented the development of a dangerous abscess.

Here I will quibble with Groopman.  Although his premise is that misdiagnosis, rather than technical error, is the bigger problem, I think the more realistic assessment is that both are problems.  In this case, Groopman discusses how he failed to notice (with potentially catastrophic results) an abscess on a patient of his because he decided to cut short a normally head-to-toe physical examination.  I would classify this as a technical error in some sense, because he “normally […] had a system” but chose not to follow it: this could have been solved by a checklist, to the extent that the point of one is to not let you skip any steps that could lead to critical errors.

Pages 55 – 57: another framing error (someone looked like a homeless hippie).  Additionally, it’s difficult to eliminate biases one-by-one, because each experience is different – so the idea here is you have to systematically (procedurally) reduce their impact.

Page 62: authority bias: when a specialist says something, you take it seriously, even if maybe you shouldn’t

Pages 64 – 65: availability/representativeness heuristics, as well as confirmation bias and anchoring (I would add recency bias).  Groopman here references Daniel Kahneman and Amos Tversky.  See also, of course, “Misbehaving” by Richard Thaler (M review + notes).

Page 66: overconfidence is a real problem.  The solution?   Probabilistic thinking / counterfactuals.  One of the doctors Groopman was studying recommended:

“even when I think I have the answer, to generate a short list of alternatives.”  

Helpful in a business context, too – see Phil Rosenzweig’s “The Halo Effect” (Halo review + notes), which covers the idea of counterfactuals, or Philip Tetlock’s “Superforecasting” (SF review + notes) – which does too, to a lesser extent.

Pages 68 – 69: while it’s not explicitly mentioned here, I’m gonna do a little synthesis and read between the lines while tying in concepts from elsewhere/later in the book.  Groopman notes that one doctor

“emphasizes to his interns and residents in the ER that they should not order a test unless they know how that test performs in a patient with the condition they assume he has.  That way, they can properly weight the result in their assessment.”  

A good example of priors / conditional probabilities here, despite Groopman’s reticence about Bayesian reasoning.
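
To make that point concrete, here is a minimal sketch of the underlying arithmetic – my own illustration, not Groopman’s, and the prevalence, sensitivity, and specificity figures are purely hypothetical – showing how a prior and a test’s error rates combine into a posterior probability:

```python
# Minimal sketch of the conditional-probability idea above (not from the book;
# prevalence, sensitivity, and specificity numbers are purely illustrative).

def posterior_probability(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    p_pos_given_disease = sensitivity          # true positive rate
    p_pos_given_healthy = 1 - specificity      # false positive rate
    p_pos = prior * p_pos_given_disease + (1 - prior) * p_pos_given_healthy
    return prior * p_pos_given_disease / p_pos

# A test that is "90% accurate" applied to a condition the doctor thinks is
# fairly unlikely (5% prior) still leaves plenty of room for doubt:
print(posterior_probability(prior=0.05, sensitivity=0.90, specificity=0.90))
# ~0.32 -- a positive result raises the odds, but is far from a sure thing.
```

The punchline mirrors the doctor’s advice: the same “positive” result means very different things depending on how the test performs against the prior you started with.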

Groopman goes on to discuss the problem of having to make decisions under time pressure faced with an inordinate amount of information, and elsewhere in the book (in the section about radiologists, for example), he notes that more data isn’t always better: sometimes it can actually decrease your effectiveness because it adds noise.  

So, despite Groopman’s sort of negative attitude toward Bayesian reasoning, I think it’s actually perhaps more useful on a qualitative, meta-thinking level: when deciding whether or not to obtain and evaluate a potential incremental piece of data, ask yourself whether it would actually help you assess the problem – because otherwise you’re not just wasting time, but also risking screwing up the rest of your decision process.

I think this is a problem that a lot of businesspeople and investors commonly run into: trying to solve the problem by throwing more data at it rather than distilling and deeply thinking about the few key things that matter.

Page 74: on the importance of making good decisions vs. making impressive decisions: the previously-referenced doctor, Alter, notes that some doctors: 

“like the image that we can handle whatever comes our way without having to think too hard about it – it’s kind of a cowboy thing.”  

Overconfidence much?  And  cognition vs. intuition. But Groopman notes that slowing down and focusing on whether or not you’re thinking about things the right way is often the most useful way to weed out cognitive errors.  This is pretty common advice in the cognitive bias literature; it comes up again later.

Pages 75 – 76: useful information for anyone seeing a doctor: ask the doctor “what’s the worst thing this can be?”  Also ask what other things could cause this, what other body parts are in the area, etc.  

I applied these when I had to take a friend of mine to a doctor for an unknown stomach ailment; it turned out to be nothing but they were useful in getting the doctor to consider and evaluate a few other possibilities.

Pages 79 – 82: one pediatrician, Victoria McEvoy, notes that she tries to prepare herself mentally for evaluating her patients, but it’s also critical to not overschedule herself.  Groopman shares his own experience:

“McEvoy’s story of relentless work and sleep deprivation reminded me of the worst moments of my own internship and residency […] subconsciously, I found myself minimizing the severity of a symptom or assuming that an aberrant laboratory result was an artifact rather than a sign of a serious problem.”

A culture that worships superhuman effort is the problem; structural problem-solving is the answer.  McEvoy discusses this more on pages 81 – 82.  See also  sleep / rest.

Also cross-reference here Dr. Matthew Walker’s phenomenal “Why We Sleep” (Sleep review + notes).  Walker mentions, on the consequences of medicine’s Batman-has-no-limits cowboy attitude toward sleep:

“Residents working a thirty-hour-straight shift will commit 36 percent more serious medical errors [… and make…] 460 percent more diagnostic mistakes in the intensive care unit than when well rested.

“Through the course of their residency, one in five medical residents will make a sleepless-related medical error that causes significant, liable harm to a patient.  One in twenty residents will kill a patient due to a lack of sleep.”

Pages 84 – 85: sort of tangential, but a nice discussion of the continuum of human behavior

Page 87: a good doctor is one who takes care of you as if you are “the only one in his practice”

Pages 89 – 90: here is an example of what I was referencing earlier with regard to what I’ll call meta-Bayesian thinking: Dr. JudyAnn Bigby provides some helpful thoughts on why she uses that sort of thinking to decide whether or not certain tests will add value.

For example, she doesn’t give a nearly-90-year-old woman with a history of clean mammograms another one because even if she had something show up, it would take so long to develop that something else would kill her first.  Therefore, from a practical perspective, it’s not something worth evaluating.

It reminds me of certain investing/business problems where people worry about all sorts of irrelevant factors.  For example, one that’s beyond idiotic is investors hand-wringing over management’s past history of repurchasing shares at a higher price than they trade at today: um, okay, so what?  If the stock’s undervalued today, and management buys back shares, no problem.  If the stock’s overvalued today, you shouldn’t be buying it anyway.  Past buybacks only become a problem once shares are overvalued, so if you think the stock is cheap today, they’re not a reason not to buy it.

Unless that past decision has some cross-read to other capital allocation (and it usually doesn’t – lots of managers wouldn’t do stupid M&A but, for some reason, do stupid share repurchases), it’s not a relevant analytical factor.

Also, incentives, and structural problem-solving.

Page 92: really interesting story about a nontraditional reason for patient noncompliance… question your assumptions?

Page 95: on schema bottlenecks and empathy: two different doctors emphasized to Groopman that:

“a physician must not lose sight of the fact that what may seem mundane to the doctor can strike the patient as tragic.”

Page 96: in reverse, patients’ preconceived notions and stereotypes/biases can affect doctors as well.

Page 97: on efficiency being useless if you have the wrong framework, i.e. busyness vs. productivity.  As Covey might say in “The 7 Habits of Highly Effective People” (7H review + notes), “wrong jungle!”

Page 98: on man-with-a-hammer syndrome, as well as the difference between technical skill and rationality: Groopman quotes Dr. Eric Cassell’s Doctoring: The Nature of Primary Care Medicine:

“One should not confuse highly technical, even complicated, medical knowledge – […] with the complex, many-sided worldly-wise knowledge we expect of the best physicians.”

And:

“Specialists take care of difficult diseases, so, of course, they will naturally do a good job on simple diseases.  Wrong.  […] people used to doing complicated things usually do complicated things in simple situations – for example, ordering tests or x-rays when waiting a few days might suffice – thus overtreating people with simple illnesses and overlooking the clues about other problems that might have brought the patient to the doctor.”

Also, earlier:

“Knowing when you don’t know requires sophisticated knowledge.”

Page 99: a hint/foreshadowing of the later discussion of radiology: Groopman notes that (seemingly useful and well-intentioned) patient templates can have the unintended consequence of disrupting the traditional doctor-patient engagement:

“Electronic technology can help organize vast clinical information and make it more accessible, but it can also drive a wedge between doctor and patient […] it also risks more cognitive errors, because the doctor’s mind is set on filling in the blanks on the template.  He is less likely to engage in open-ended questioning, and may be deterred from focusing on data that do not fit on the template.”

This is, in some senses, a  selective perception issue as well as dose-dependency of  structural problem solving: as Laurence Gonzales says in “Deep Survival” (DpSv review + notes),

“Gorillas are not helpful in completing the task [of counting the number of passes.]  … Gorillas are irrelevant and would displace the task in working memory. So the brain, efficient system that it is, filters out the gorilla so that you can keep counting.  Seeing the gorilla would be a mistake. You’d lose count.”

I’ve had analogous experiences while interviewing CEOs and CFOs about their businesses: obviously, I always like to go in prepared with a list of specific questions about important issues, but I also find it useful to be able to “spitball” and go off script (sometimes, way off-script) if management says something interesting.  I’ve found that if I stick too closely to “interrogation” and trying to fill in answers to my specific questions (i.e. the doctor filling in the template), I often end up with a less useful total result than if I mix the two.

Page 108, Page 111: while this book is more focused on cognitive stuff than medicine per se, I did think page 108 (and page 111 on ECMO)  was very interesting from a science perspective – explains in simple language some interesting stuff about oxygenation of blood

Page 115: also some interesting science about SCID

Page 117: a practical example of what Groopman discusses in terms of how patients can help doctors: she was calm, collected, and informed, but she made sure the doctors didn’t zero in on a convenient diagnosis

Pages 121 – 122: more of the above, and also a nice use of incentives:

“If Shira is an atypical case,” Rachel said, her tone softening, “then an ambitious scientist might be able to publish a paper on her.”

I feel like Munger and Carnegie – “How To Win Friends and Influence People” (HWFIP review + notes) – have something to say about this.

Pages 126-128: back to stereotyping and confirmation bias: Groopman discusses how the generally logical “if it walks like a duck” and “when you hear hoofbeats, think horses, not zebras” heuristics can also blind doctors to corner cases.  There are other psychological factors at play here, such as the necessity of dealing with uncertainty.

On the second-to-last paragraph of page 127, Groopman mentions that a doctor facing an unusual/arcane case may “lack the courage of his convictions” due to lack of experience – my interpretation here is that this may perhaps provide some of the basis for attempting to fit it to existing stereotypes or being a man with a hammer: at least you know how to deal with those.

In sum, though, the challenge is that once a reasonable diagnosis is in place, a number of factors (ranging from social proof to confirmation bias to framing) can perpetuate that diagnosis forward, even if it’s wrong.

Take this as an example of framing: Groopman notes that the opening statement every morning was “Shira Stein, a Vietnamese infant girl with an immune deficiency disorder consistent with SCID…”  She didn’t actually have SCID, but it’s sort of treated as a foregone conclusion.

Elsewhere, Groopman talks about “search satisficing.”  We’ll come back to it.

Page 133: nothing super topical, but an amusing anecdote about the head cardiologist at Boston’s Children’s Hospital, Dr. James Lock, who was suspended from school in second grade and nearly expelled in the sixth.  That second time, a psychiatrist rescued him by suggesting that he be advanced into the eighth grade.

Lock ended up becoming a National Merit Scholar and started college at 15 (not quite Doogie Howser, but close.)  Groopman and Lock joked, semi-seriously, that today Lock would’ve been dosed up on Ritalin for a bad case of ADHD.

It’s funny because I’m pretty sure the same thing would’ve happened to me in public school…

Pages 134-135: setting aside my liking bias for Lock, objectively this is a pretty sweet quote from him:

“Epistemology, the nature of knowing, is key in my field.  What we know is based on only a modest level of understanding.  If you carry that truth around with you, you are instantaneously ready to challenge what you think you know the minute you see anything that suggests it might not be right.”

Sound familiar?  It should.   Overconfidence vs intellectual humility, and  probabilistic thinking.  Compare that bit to our favorite nuclear physicist, Richard Feynman, on page 24 of The Pleasure of Finding Things Out:

“I can live with doubt and uncertainty and not knowing.  I think it’s much more interesting to live not knowing than to have answers which might be wrong.  I have approximate answers and possible beliefs and different degrees of certainty about different things, but I’m not absolutely sure of anything and there are many things I don’t know anything about, such as whether it means anything to ask why we’re here, and what the question might mean.  […] I don’t feel frightened by not knowing things […] it doesn’t frighten me.”

Later, on page 104:

“The question of doubt and uncertainty is what is necessary to begin; for if you already know the answer, there is no need to gather any evidence about it […] [it is] very vital to put together ideas to try to enforce a logical consistency among the various things that you know.”  

Pages 136 – 138: so, the whole book is obviously A+, but this part particularly fascinated me (as did the radiology part, still to come).  On  status quo bias: I’m not going to block quote two pages’ worth of text, but the short version is that for the better part of a century, common medical practice has been to insert a certain needle to drain fluid from around the heart in a specific spot – a spot which was picked because it was easy to penetrate (reminds me of the anecdote of the guy looking for his keys under the lamppost because “that’s where the light is,” notwithstanding that he dropped his keys in the bushes!)

This was passed down from doctor to doctor because that was the way it had always been done… and nobody ever bothered to question it or put any empirics behind it or whatever.  Lock eventually figured this out, and his trainees (and hopefully the whole field) instead stick the needle where the fluid actually is, as determined by ultrasound.

Again back to Feynman – on pages 184 – 185 of The Pleasure of Finding Things Out, he discusses that the one “disease” of humans’ unique ability to pass along ideas through generations is that sometimes bad ideas can get stuck…

I guess the lesson here is: don’t be afraid to ask why.  There is another similar (but less interesting) anecdote on page 140, which also touches on bioethics and the problem of human studies.  (See Groopman’s article on BPA for more on this topic.)

It really is astonishing how long bad ideas can persist via  culture / status quo bias.  See pages 58 – 59 of Nudge (Ndge review + notes), where Sunstein/Thaler cite some (admittedly very old, 1930s) research that found a few intriguing things.

Individuals made judgments on the distance of a point of light in a dark room; groups led to conformity.  A “plant” by the researchers could meaningfully move the group’s opinion in some arbitrary direction, if the plant was confident enough about it.

So far, not surprising – but S/T note something so insightful that it is worth quoting at length:

“Initial judgments were also found to have effects across “generations.”  Even when enough fresh subjects were introduced and others retired so that all participants were new to the situation, the original group judgment tended to stick…

[… other experiments have shown that] an arbitrary “tradition” […] can become entrenched over time, so that many people follow it notwithstanding its original arbitrariness.

[…] An important problem here is [… that] we may follow a practice or tradition not because we like it, or even think it defensible, but merely because we think that most other people like it.  

Many social practices persist for this reason, and a small shock, or nudge, can dislodge them.”

This is really fascinating – and it’s not the only place I’ve seen this.  Here’s a chilling bit from Christopher Browning’s “Ordinary Men” (OrdM review + notes), discussing a similar persistence of group behavior during the Holocaust:

“Because of the high rate of turnover and reassignment, only a portion of the policemen who had taken part in the first massacre at Jozefow were still with the battalion in November 1943, when its participation in the Final Solution culminated in […] the single largest German killing operation against Jews in the entire war[,] with a victim total of 42,000 Jews.”

Page 142: on whether genius surgeons are born or taught.  The argument comes out to “nurture” in this case.

Page 146: here’s one of the practical punchlines on how to avoid cognitive biases: rather than trying to deal with each one individually, Lock simply starts by ignoring anyone else’s opinions (diagnosis) and “I look at the primary data.”  He also focuses on:

“being able to see the entire picture at once, integrating each component into a coherent whole.  And when one piece does not fit, he seizes on it as the key to unlock the mystery.”  

So, basically, back to Feynman: doubt and look for disconfirming evidence (recall the Feynman discussion of the rat study and the challenges with social-sciences research not having adequate controls).  Another example could be the Hawthorne Effect, referenced briefly by Gawande in The Checklist Manifesto.

Pages 148 – 149: While Groopman seems to take a generally skeptical tone toward “evidence-based” or statistical medicine earlier, here he defends it a bit: more or less, Lock discusses a few instances in which “impeccable logic” failed to reach the right conclusions.  

Peter Thiel would be sad:

“My mistake was that I reasoned from first principles when there was no prior experience.  I turned out to be wrong because there are variables that you can’t factor in until you actually do it.”  

Lock goes on to point out that the complexity of human biology means that you can’t predict everything; he doesn’t explain this explicitly, but here you could dive into n-order impacts, feedback effects, etcetera.

Lock’s ultimate conclusion is, again, to avoid overconfidence and focus on being more open to disconfirming evidence.

Page 150: just a nice emphasis of already-discussed concepts: specialists are overconfident because they have so much knowledge that they think it protects them from making mistakes and enables them to make perfect decisions about inherently unpredictable systems.

It doesn’t.

Page 151: Groopman cites work by Donald Schon at MIT criticizing a Bayesian approach in clinical settings.  

Compare/contrast with Nate Silver’s discussion of Bayesian reasoning throughout The Signal and the Noise (SigN review + notes).

See the  Bayesian reasoning model for my key takeaways here, but I generally find the following quote from Philip Tetlock’s “Superforecasting” (SF review + notes) to best approximate the most useful point of view on this issue:

“[Despite being numerate, superforecasters] rarely crunch the numbers so explicitly. 

What matters far more to the superforecasters than Bayes’ theorem is Bayes’ core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence.”
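
As a purely illustrative aside (my own sketch, not Tetlock’s or Groopman’s, and the likelihood ratios below are hypothetical), “updating in proportion to the weight of the evidence” is easy to see in odds form:

```python
# A minimal sketch of updating in proportion to the weight of the evidence:
# each piece of evidence carries a likelihood ratio, and the running odds are
# multiplied by it -- strong evidence moves the estimate a lot, weak evidence
# only a little.  (Illustrative only; the numbers are made up.)

def update(probability, likelihood_ratio):
    """Convert to odds, apply one piece of evidence, convert back."""
    odds = probability / (1 - probability)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = 0.30                          # initial hunch for some diagnosis
for lr in [2.0, 1.2, 0.8, 5.0]:   # hypothetical evidence, weak and strong
    p = update(p, lr)
    print(round(p, 3))
# 0.462, 0.507, 0.451, 0.804 -- the estimate drifts with weak evidence and
# jumps with strong evidence, without "crunching" a full Bayesian model.
```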

Pages 152 – 154: extensive discussion of decision-making under uncertainty as well as “the culture of conformity and orthodoxy that begins in medical school.” Man-with-a-hammer and authority bias on page 154, as well as social proof.

Pages 160 – 161: this chapter, Surgery and Satisfaction, about Groopman’s own challenges getting his hand fixed, is somewhat less notes-dense but actually very interesting (just at a higher, less detailed level).  Highlights include a doctor literally making up a condition that didn’t exist (!), but also a much more thoughtful doctor, Terry Light, noting “MRI […] can show us way too much.”

Page 167 continues Dr. Light’s thoughts:

“MRIs […] find abnormalities in everybody.  More often than not, I am stuck trying to figure out whether the MRI abnormality is responsible for the pain.  That is the really hard part.”

Multicausality and  probabilistic thinking.  Going back to my previous reference to The Signal and the Noise, this is a perfect example of the challenges of signal and noise.  Later, in the radiology chapter (“The Eye of the Beholder”), Groopman discusses the challenge of more data not always being better data – we’ll get there momentarily.

Here, Light is referencing the challenge of false positives: while he (obviously) thinks MRIs are wonderful and useful, the truth is that natural variation combined with aging means that if you look hard enough, you’ll find something wonky about everyone – but that doesn’t mean it’s necessarily something wrong that needs treatment or is responsible for a problem.

The reason I like this book so much (and view it as more interesting/applicable to what I do than The Checklist Manifesto, which gets all the airtime and praise) is that I feel like I’m in the same position as many of the doctors Groopman describes.  Like every patient, every company is different, and there’s not always easy precedent or a large database of “base rates” (statistical effectiveness of treatments, in the medical analogy) to fall back on.

And if I can do that annoying thing where I dichotomize the world into two kinds of investors, there’s one bucket that more or less tries to optimize for information gathering and analysis: more customer calls, a bigger model, a longer research document… of course, some of this work is necessary/valuable/important and you can’t just go around making snap judgments and calling everything that makes a quack-ish sound a duck (Groopman points out the flaws in this, many times.)  

But at the same time, there comes a point where adding more data (MRIs, channel checks) is not necessarily going to lead to a better outcome for the patient / LP: there are tons of potential data points I could agonize over, but I find the Lock-esque “seeing the big picture” approach to be the one that ultimately results in better outcomes.

Page 168: on a different topic, back to whether great surgeons are made or born: it turns out that, at least in the worldview of Dr. Terry Light, the decision-making and conceptualization about a patient’s problem is more important than the actual technique.  This makes sense, of course, and calls to mind Covey’s quip in 7 Habits about “the wrong jungle” – you can be the most skillful lumberjack in the world, but if you’re cutting down the wrong kinda tree, well…

Pages 169 – 170: Dr. Light’s mentor, Dr. Linda Lewis, offered the following pithy advice: “don’t just do something, stand there!” Action bias.

Also, here’s where Groopman references “search satisficing” – once you’ve found a good answer, your brain kinda shuts off.

Page 171: Dr. Karen Delgado, an endocrinology specialist, uses the following simple question to help herself find the correct diagnosis:

“what else could this be?”  

There’s also, of course, the possibility that there could be more than one diagnosis necessary (particularly applicable in investing, where it’s often not as simple as “good company” or “bad company.”)  Again,  probabilistic thinking and  multicausality.

Pages 173 – 174: there’s a really interesting discussion here (and also, elsewhere) about “the perfect is the enemy of the good” – it’s continued throughout the book and might be a nice segue into Gawande’s Being Mortal (which I never finished reading, but intend to.)

Also, older / more mature doctors seem to get more joy out of the patient getting a good outcome than completing a technically proficient surgery.  (Doctor Strange is probably not in this camp.)

Here and elsewhere, Groopman discusses that good medicine should be patient-centered, just as Don Norman would state that good design should be human-centered.  Medicine is not its own end.

Page 179: finally, the radiology chapter.  My first note is that framing an analysis as part of a routine physical led 60% of trained radiologists to not notice a missing clavicle (?!) in a chest x-ray – similar to the famous gorilla experiment.  The lesson being that you only see what you’re looking for.  Schema / selective perception.

Pages 180 – 181: this is the first part of the “more data, more problems” phenomenon I was talking about: after thirty-eight seconds, “many radiologists begin to see things that are not there.”  See also marginal utility.

Dose-dependency too, of course.  This is a good example of what Groopman discusses about the balance between  cognition and  intuition.

Groopman goes on to discuss variations between radiologists’ own assessments at different times, and of course different radiologists’ assessments of the same film.

Pages 182 – 183, page 185: One radiologist seeks to avoid errors by using, guess what, a checklist!  Whether or not they appear to be relevant, he goes through everything major.  Gawande would be proud.   Structural problem solving.

Page 188: quick example of “fighting the last war” – recency bias or  availability, in other words – a radiologist who missed a breast cancer and ended up being sued subsequently vastly overestimated the need for biopsies in future x-rays.

This is actually a common deal with investors: you can see it pretty easily from a macro level (everyone worrying about the last crisis), but perhaps more importantly, on a micro level.  A couple of my mentors have mentioned the degree to which bad experiences in certain areas can permanently wall off potential investment opportunities because of historical biases.  This is obviously in part a rational response – if you keep doing the same thing and get bad results, either it’s a bad game or you’re bad at playing it – but at the same time, sometimes it takes completely fresh eyes to make the most perceptive and accurate judgment.

So there’s a trade-off ( opportunity cost) to experience.  It’s mostly good, but not perfectly or automatically so.  And sometimes, despite the caution from Lock earlier, you do have to reason from first principles.

Pages 191 – 193: “there is just so much data,” and the problems associated with that.  And remember, this book is now a decade old.  So, a relevant problem for all of us…

Pages 197 – 198: some of the empirics behind “search satisfaction” using eye tracking of radiologists.

Pages 198 – 199: now, keep in mind that this book is at this point a decade old, and perhaps computers have come a long way since then: but it’s worth noting here that while computer-assisted detection did improve detection in some cases, it also led to a lot more false positives and in some cases “shaking the confidence of a specialist in his initial diagnosis.”

 N-order impacts. This is often how I feel with incremental data: while I’m not opposed to the idea of better decisions through data, I also think you have to be careful about it, for reasons more thoroughly discussed in Nate Silver’s The Signal and the Noise.

Pages 206 – 207: a pushy testosterone salesman uses every strategy in the Cialdini book.  (“Influence: The Psychology of Persuasion”)

I thought Groopman’s discussion of  reciprocity bias was particularly strong here.  As for the  overconfidence of doctors who didn’t think the lavish trips affected their judgment, it calls to mind the bit from Peter Thiel’s Zero to One (Z21 review + notes) where he talks about advertising and double deception:

“In Silicon Valley, nerds are skeptical of advertising, marketing, and sales because they seem superficial and irrational.  

But advertising matters because it works. It works on nerds, and it works on you.  You may think that you’re an exception; that your preferences are authentic, and advertising only works on other people.

[…] but advertising doesn’t exist to make you buy a product right away; it exists to embed subtle impressions that will drive sales later.  

Anyone who can’t acknowledge its likely effect on himself is doubly deceived.” 

Pages 208 – 210: continuing with the subthread throughout the book of over-medicalization of non-medical issues, Groopman discusses hormone therapy as well as anxiety and so on.  My gym has ads for testosterone that are mainly aimed at middle-aged men, and you’d be astonished how many of my friends worry about their testosterone levels and go out of their way to boost them (naturally).  I’m in the camp that thinks the problem for most guys is too much testosterone rather than too little, but then I’ve always been the odd one…

Pages 212 – 214: desire bias; a really powerful bit because by this point we all know Groopman is smart and thoughtful, yet not too smart and thoughtful to occasionally be overwhelmed by something he desperately wants to be true

Page 215: on incentives and feedback effects:

“When researchers have rigorous, groundbreaking data to announce, they try to publish in one of the top-tier journals; by the same token, these journals seek out epochal reports to add to their luster.”

See also Jordan Ellenberg in “How Not To Be Wrong” (HNW review + notes) on the replication problem.  Replication ain’t sexy.

Pages 217 – 218: a good example of inversion: does it make sense that every woman in the world should be on the same medication?  That nature could be so wrong about appropriate hormone levels?

Pages 219 – 220: an uncommonly honest pharma executive, but also another example of physicians sticking with what they know (remember the 17-year thing earlier)

Pages 223 – 226, Page 228: on incentives

Pages 231 – 232: n-order impacts: ethics disclosures can actually make things worse

Pages 237 – 238: I really like this bit about classification systems and their pitfalls.  And in some senses it reminds me of “name of a bird” from Feynman.

Page 239: ego

Page 242: framing

Page 246: hyperbolic discounting.  loss aversion too.  Really this whole book is a treasure trove of mental models.

Dr. Stephen Nimer:

“Most of the patients I have encountered who refused treatment do so because they are so focused on the downside […] they are only thinking about what’s happening to them that day.” 

Also, Lock on the human preference for certainty: we “instinctively latch on to certainty” when faced with uncertainty.

Pages 249, 252, 254, 255, 259: on medicine for the sake of medicine vs. quality of life.  Really powerful stuff that’s less about detailed note-taking and more about big-picture thinking.

 

First Read: summer 2017

Last Read: February 2018

Number of Times Read: 2

 

Review Date: February 2018

Notes Date: February 2018