Henry Petroski’s “To Engineer is Human”: Book Review, Notes, and Analysis

Poor Ash’s Almanack > Book Reviews > Science + Engineering + Math > engineering

Overall Rating: ★★★★★★ (6/7) (standout for its category)

Learning Potential / Utility: ★★★★★★ (6/7)

Readability: ★★★★★★ (6/7)

Challenge Level: 1/5 (None) | Pagecount: ~227 ex-notes (272 official)

Blurb/Description: Civil engineering professor Henry Petroski cogently and thoughtfully explains important engineering concepts like critical thresholds, bottlenecks, the safety factor, and learning from failure.

Summary: The epitome of the mental models approach is the fact that all of value investing is, in one way or another, founded on the premise of the “margin of safety” – a concept shamelessly ripped off, mostly without attribution, from the field of engineering, which laid claim to it first.  Despite Munger’s love of engineering, I’ve heard relatively few investors or businesspeople outside the field ever reference a book explicitly about it.

Although I’m sure there are many good ones, Petroski’s To Engineer is Human offers a lot of bang for your buck: in 200 easily readable pages (excluding those that I mark in “reading tips” for skipping), you’ll learn a lot of useful, translatable lessons about the process that engineers use to ensure the safety and stability of skyscrapers and planes alike.

Highlights: The discussion of key engineering concepts and mentalities is very well-done: thorough without being dry or overly technical/detailed, and, in particular, cognizant of the broader realities at work.  Engineers are often accused of being tunnel-visioned; Petroski is anything but. He doesn’t have quite the intellectual breadth of Renaissance man Don Norman, but I found this to be a very useful and enjoyable book that drove home a lot of important concepts and inspired me to think broadly about various important topics.

Lowlights: While the writing is quite good/readable, some of it is stereotype-breaking in a bad way: the book has an unfortunate tendency to go into little philosophical / poetry-related diversions at times, which I found irritating (readers with more appreciation for poetry and whimsy may have a different experience).  That said, these sections are relatively easy to skip/skim and on the whole, the book is an excellent treatment of an important topic.

Mental Model / ART Thinking Points: failure / mistakes, vividness / salience, opportunity costs / tradeoffs, margin of safety, precision vs. accuracy, culture, scientific thinking, feedback, inversion, trait adaptivity, bottlenecks, structural problem solving, ideology, nonlinearity, humans vs. econs, multicausality

You should buy a copy of To Engineer is Human if: you want a quick book on engineering that packs a big, thought-provoking punch.

Reading Tips: Chapter 2, “Falling Down is Part of Growing Up,” is whimsical in a pointless/annoying way and should have been eliminated by the book’s editor, as it’s utterly irrelevant, intellectually insubstantial, and slows the reader from getting into the meat of the book.  After finishing Chapter 1, skip straight to page 21 to start Chapter 3 (“Lessons from Play, Lessons from Life.”) Other poetry and literary references (such as the discussion of Icarus flying too close to the sun) should be heavily skimmed/skipped as well, as they add little value.

Pairs Well With:

Don Norman’s “The Design of Everyday Things” – a luminary book spanning design and engineering to provide a broader mindset for structurally preventing human error from causing failure.

Charlie Munger’s “Poor Charlie’s Almanack” (PCA review + notes) – includes lots of applications of margin of safety in different contexts.

Atul Gawande’s “The Checklist Manifesto” (TCM review + notes) – a thoughtful, cross-disciplinary analysis of how a simple tool (checklists) can obviate failure under a narrow set of specific circumstances.

Jonathan Waldman’s “Rust: The Longest War” (Rust review + notes) – a great look at engineering in various real-world contexts, including how can-makers avert failure and a deeper look at the concept of “nondestructive testing” briefly referenced by Petroski.

Richard Rhodes’ “The Making of the Atomic Bomb” (TMAB review + notes) – a detailed look at one of the most famous (and perhaps most impressive) concerted engineering efforts of all time.

Reread Value: 4/5 (High)

More Detailed Notes + Analysis (SPOILERS BELOW):

IMPORTANT: the below commentary DOES NOT SUBSTITUTE for READING THE BOOK.  Full stop. This commentary is NOT a comprehensive summary of the lessons of the book, or intended to be comprehensive.  It was primarily created for my own personal reference.

Much of the below will be utterly incomprehensible if you have not read the book, or if you do not have the book on hand to reference.  Even if it were comprehensive, you would be depriving yourself of the vast majority of the learning opportunity by only reading the “Cliff Notes.”  Do so at your own peril.

I provide these notes and analysis for five use cases.  First, they may help you decide which books you should put on your shelf, based on a quick review of some of the ideas discussed.  

Second, as I discuss in the memory mental model, time-delayed re-encoding strengthens memory, and notes can also serve as a “cue” to enhance recall.  However, taking notes is a time consuming process that many busy students and professionals opt out of, so hopefully these notes can serve as a starting point to which you can append your own thoughts, marginalia, insights, etc.

Third, perhaps most importantly of all, I contextualize authors’ points with points from other books that either serve to strengthen, or weaken, the arguments made.  I also point out how specific examples tie in to specific mental models, which you are encouraged to read, thereby enriching your understanding and accelerating your learning.  Combining two and three, I recommend that you read these notes while the book’s still fresh in your mind – after a few days, perhaps.

Fourth, they will hopefully serve as a “discovery mechanism” for further related reading.

Fifth and finally, they will hopefully serve as an index for you to return to at a future point in time, to identify sections of the book worth rereading to help you better address current challenges and opportunities in your life – or to reinterpret and reimagine elements of the book in a light you didn’t see previously because you weren’t familiar with all the other models or books discussed in the third use case.

Pages vii – viii: Petroski cleanly lays out what readers should take away from the book: an answer to the question “what is engineering,” with a particular focus on failure and mistakes:

“the concept of failure […] engineering design has as its first and foremost objective the obviation of failure.”  

He notes (as many have in other contexts) that we learn more from failures than from successes.

Pages 4 – 5: Petroski discusses the vividness heuristic / salience bias, in non-explicit terms: structural failures like the Hyatt Regency walkway in Kansas City (or, perhaps if the book had been written today, he would’ve pointed to the I-35 collapse in Minnesota), are only so notable because they don’t happen more often.

Petroski states that structural failures of concrete-reinforced buildings in the first world are probably on the order of one in a million to one in a hundred trillion per year; that equates to 25 deaths per year in the U.S. – far lower than the 50K lost lives in automobile accidents at the time of the book’s writing.

Page 6: on tradeoffs / opportunity costs: Petroski makes explicit the difference between making things as safe as possible and as safe as we’re willing to pay for.  Margin of safety is not a free lunch and there is, in fact, such a thing as too much.

Petroski notes that: 

“all bridges and buildings could be built ten times as strong as they presently are, but at a tremendous increase in cost […]

since so few bridges and buildings collapse now, surely ten times stronger would be structural overkill.”

Reminds me a bit of Richard Thaler discussing the value of human lives in statistical terms… the current number is $7 million.

Page 7: Petroski discusses the crumbling state of American infrastructure, which has apparently been a concern for a very long time.  See Jonathan Waldman’s Rust: The Longest War (Rust review + notes) and this hilarious segment from John Oliver.

Page 10: on the fear of failure: don’t run away from it

Page 17: on blame and reasonable use cases… not particularly notable, but it did spark me thinking about people having unrealistic expectations of humans’ ability to avoid errors in systems (a la Don Norman).

Pages 21 – 22: on fatigue and precision vs. accuracy: it is relatively predictable that after a certain amount of use, certain objects will fail by fatigue, but due to individual variations in each object and how it is used, the distribution of failure times will approximate a bell curve.

Page 24: on the tendency of more-used keys in a children’s language toy to break.  A good design solution (although perhaps one that would’ve cost more than it was worth) would’ve been to reinforce the keys corresponding to the letters most often used.  Reminds me a bit of the discussion of the frequency of letters in language.

Pages 27 – 28: back to the idea of trade-offs: for relatively non-critical products like shoelaces or lightbulbs, we generally tend to optimize for cost and functionality rather than durability (Petroski points out that a shoelace that would never break might be unreasonably thick or costly).  On the flip side, things like tires and brakes need to be engineered much better.

Back to the Norman parallel about design expectations:

“only when we set ourselves such an unrealistic goal as buying a shoelace that will never break, inventing a perpetual motion machine, or building a vehicle that will never break down [do] we appear to be fools and not rational beings.”

So why don’t we apply this concept more broadly?

Page 29: Ex ante (beforehand), it’s only possible to know approximately how long something will last: the actual lifetime is only observable ex post facto (after the fact).  

Pages 30 – 31: Petroski notes that engineers “deal with lifetimes” and that:

“one of the most important calculations of the modern engineer is the one that predicts how long it will take before cracks or simple degradation of its materials threaten the structure’s life.”  

Structures are obviously designed relative to their length of expected use; bridges get much more robustness than, say, a child’s plastic toy.  

Reminds me a bit of Jonathan Waldman, on page 11 of Rust, paraphrasing Alan Weisman’s The World Without Us to discuss the timeframe on which human-created structures would fail without our upkeep (anywhere from 20 to a few hundred years for bridges in NYC, and everything aboveground would be gone in a few thousand years).

Pages 42 – 43: as in science, engineering hypotheses (e.g., “this building is structurally sound”) can never be proven right, since failure is always a possibility in the future.  They can, however, be proven wrong (if a bridge falls down!).

Page 45: a “beam” spans space and resists forces that act transverse to its length… for example, a floor beam

Page 46: in the modern world, following standard practices is usually reasonable (see culture).  But if we are on a desert island and have to reason from first principles, a few things come up.  First of all, beams are sturdier with their deep side rather than flat side bearing the floor’s weight.
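To make the deep-side intuition concrete, here’s a quick sketch of my own (not from the book), using the elastic section modulus of a rectangular beam, S = w·d²/6, which bending strength is proportional to:

```python
# My own illustration (not Petroski's): bending strength of a rectangular
# beam is proportional to its elastic section modulus, S = w * d^2 / 6 --
# linear in width, but proportional to the SQUARE of depth.
def section_modulus(width, depth):
    return width * depth ** 2 / 6

# A nominal 2x8 beam laid flat (8 wide, 2 deep) vs. on edge (2 wide, 8 deep):
flat = section_modulus(width=8, depth=2)
on_edge = section_modulus(width=2, depth=8)

# Same material, same cross-sectional area -- 4x the bending strength on edge.
print(on_edge / flat)  # → 4.0
```

This is why floor joists are set on edge rather than laid flat, and it anticipates Galileo’s square-of-depth result that Petroski discusses on pages 50 – 51.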

Pages 47 – 48: Additionally, beams face the potential for buckling (from crossward forces), which could be prevented with bracing.

Also, you can’t necessarily just scale up a design and assume it will work.

Pages 50 – 51: Galileo discovered that a beam’s strength is proportional to the square of the depth of the beam.  Interestingly, trees are circular because circular trunks have equal resistance to the wind regardless of direction.  Skyscrapers aren’t circular because wind resistance is: 

“seldom as dominant as architectural or functional factors in determining the shape of a tall building.”

Again, tradeoffs as well as salience: as one expert quipped in David Oshinsky’s Bellevue (BV review + notes), nobody ever picked a hospital for its generator!

Petroski goes on to note that “apparently right answers can be gotten through wrong reasoning,” citing the example that since both 2 x 2 and 2 + 2 equal 4, we could hypothesize that n^2 = 2n.  Only by testing additional examples will we find out we’re wrong.  An example of scientific thinking.
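Petroski’s point about wrong reasoning is easy to demonstrate; here’s a quick sketch (mine, not the book’s):

```python
# Petroski's example: since 2 x 2 = 4 and 2 + 2 = 4, one might
# hypothesize that n^2 = 2n holds for all n.
def hypothesis_holds(n):
    return n ** 2 == 2 * n

assert hypothesis_holds(2)  # the single case that suggested the "rule"

# Testing additional examples immediately falsifies the hypothesis:
counterexamples = [n for n in range(1, 6) if not hypothesis_holds(n)]
print(counterexamples)  # → [1, 3, 4, 5]
```

One confirming case proves nothing; one counterexample disproves the rule – the same asymmetry Petroski notes for bridges on pages 42 – 43.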

Page 55: interesting anecdote about why the Bent Pyramid might be bent – because it started to fail…

Pages 57 – 58: railroads catalyzed the need for substantially stronger bridges and created an autocatalytic  feedback loop: better bridges enabled more economically profitable train routes, which led to bigger and faster trains and created the need for more bridges…

Pages 62 – 63: Petroski proposes that contrary to popular opinion of the conservative engineer, engineers are actually always pushing the limit of what’s possible… when doing so, you can’t learn through trial and error but have to make calculations ahead of time.

The takeaway is that engineers have to not only learn how to obviate failure of materials, but also failure of the mind.

Pages 64 – 66: on decision-making via optimization for various factors, and inversion (i.e. exclusion of unwanted factors).  A good practical example, using a family planning a vacation.

Page 68: when shifting from one material to another (such as from wood or stone to iron), different rules start to apply and you can’t simply use new technology in old ways and expect it to work.  This is mentioned mostly in passing and Petroski doesn’t make the broader concept explicit, but I think it’s actually super interesting / useful in the context of the modern world.  Trait adaptivity.

Page 69: lots of iron bridges collapsed at first, but they eventually came to be seen as superior to wooden bridges

Page 77: on failure as the path to success: Petroski analogizes to the process of writing a manuscript:

“what other authors tend to learn from the manuscripts and drafts of the masters that cannot be learned from the final published version of a work is that creating a book can be seen as a succession of choices and real or imagined improvements.”  

Schema bottlenecks, as well as feedback / iteration.

Pages 83 – 84: in discussing learning from failures, Petroski notes that: 

“small cracks in reinforced concrete do not necessarily pose any danger of structural collapse, for the steel will resist any further opening of the cracks.

 But cracks do signify a failure [… and] incontrovertibly disprove the implicit hypothesis that stresses high enough to cause cracks to develop would not exist anywhere in the structure.”  

This is not necessarily true for other materials: for example, the O-ring failures on the Challenger, which Richard Feynman famously demonstrated with the ice-water experiment.  

It’s also worth noting when margin of safety does and doesn’t apply: on pages 155 – 157 of The Pleasure of Finding Things Out (PFTO review + notes), Feynman discusses NASA misapplying margin of safety by pointing out that parts that weren’t supposed to crack at all cracked only a third of the way to failure.

Petroski also cites (on the next page) a quote from T. H. Huxley’s On Medical Education that sounds very Mungerish:

“There is the greatest practical benefit in making a few mistakes early in life.”

Pages 87 – 88: The failure of the Hyatt walkway in Kansas City was caused by a design change from two walkways supported by one hanging rod to a rod hanging off a beam supported by another rod, mostly because the original design (requiring the threading and installation of a 45-foot rod) was unwieldy.

This change caused different forces to become relevant: Petroski makes the analogy to, rather than two people hanging onto one rope, one person hanging onto a rope tied to another person hanging onto another rope, such that the strength of the upper person’s grip becomes important to the lower person (no matter how strong their rope, or their grip on it).  

The skywalks would apparently have had a very low margin of safety even if not for this, so this change led to failure (he officially defines the safety factor soon.)

Pages 90 – 91: The Engineering News-Record (or ENR – a still-published trade resource that I used to subscribe to at my old shop; I actually attended and enjoyed one of their old conferences) received a lot of reader suggestions on how the issue could have been avoided, but Petroski notes that it’s easier to solve a “puzzle” (why the walkways failed) in hindsight than it is to notice, beforehand, all the possible ways that it could have failed.  See  hindsight bias.

Pages 92 – 95: starting to approach a formal definition of safety factor, Petroski notes that:

“had the structure not been so marginally designed, the other rods might have redistributed the unsupported weight among them, and the walkway might only have sagged a bit at the broken connection.”  

He goes on to note an example of redundancy: i.e.,

“designers often try to build into their structures what are known as ‘alternate load paths’ to accommodate the rerouted […] stress and strain when any one load path becomes unavailable for whatever reason.  

When alternate load paths cannot take the extra traffic or do not even exist, catastrophic failures can occur.”

Petroski goes on to provide some other useful, thought-provoking examples.

Pages 96 – 97: identifying and strengthening weak links (bottlenecks) leads to structures that are stronger / less likely to fail.  (You can substitute “systems” for “structures” with no change in meaning.)  In this way, weak links in engineering can be thought of as analogous to bottlenecks: if they don’t go, the bigger system doesn’t go either.

Pages 98 – 101: At the start of chapter 9, Petroski finally formally brings up and defines the “factor of safety” – which can alternatively be referred to as a “factor of ignorance.”  The factor of safety is: 

“calculated by dividing the load required to cause failure by the maximum load expected to act on a structure.  Thus if a rope with a capacity of 6,000 pounds is used in a hoist to lift no more than 1,000 pounds at a time, then the factor of safety is 6,000 / 1,000 = 6.”

Petroski goes on to explain why the factor of safety is important: the rope might be weaker than specified, a heavier load might be lifted (in a jerky manner that would increase the forces on the rope), and so on.  The factor of safety is a catch-all factor that mitigates the risk of both the “known unknowns” (situations that are reasonably likely to come up) and “unknown unknowns” (situations that cannot be predicted ahead of time).

There is no universal factor of safety that is appropriate in all circumstances, as Petroski alluded to on pages 27 – 28.  See Jordan Ellenberg in How Not To Be Wrong (HNW review + notes) on marginal utility, and why we spend too much time sitting in airports.

The factor of safety appropriate for an airplane flying at high speed at high altitude (which must continue functioning even under extremely adverse and unlikely conditions) will necessarily be much higher than the factor of safety appropriate for a sneaker shoelace or plastic child’s toy (where the cost of making it as durable and resilient as an airplane or a bridge would likely make it price-prohibitive for its intended use).

Finally, quite briefly, Petroski returns to the idea of the weakest link or bottleneck – don’t miss the bottom of page 101, where he notes that:

“it will be the smallest factor that is spoken of as the factor of safety of the structure”

(since a structure is only as strong as its weakest critical part).
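Putting the rope example and the weakest-link point together in a quick sketch of my own (the component loads below are made up for illustration):

```python
# Petroski's definition: factor of safety = load required to cause failure
# divided by the maximum load expected to act on the structure.
def factor_of_safety(failure_load, max_expected_load):
    return failure_load / max_expected_load

# The book's rope example: rated for 6,000 lbs, lifting at most 1,000 lbs.
assert factor_of_safety(6000, 1000) == 6.0

# Hypothetical structure: per-component (failure load, max expected load).
components = {
    "deck beam": (120_000, 40_000),   # factor 3.0
    "hanger rod": (30_000, 20_000),   # factor 1.5  <- weakest link
    "connection": (50_000, 25_000),   # factor 2.0
}
factors = {name: factor_of_safety(f, m) for name, (f, m) in components.items()}

# The structure's factor of safety is the SMALLEST among its critical parts.
structure_factor = min(factors.values())
print(structure_factor)  # → 1.5
```

The `min()` at the end is the whole point of page 101: no amount of overbuilding elsewhere raises the number that matters.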

Pages 110 – 111: Petroski discusses nucleation sites, which I’ve heard in the context of bubbles but not metal.  

More importantly for those of us who aren’t science nerds like me, he discusses the concept of critical thresholds – an example of nonlinearity – in the context of fatigue: he notes that “there is a threshold of stress below which failure is never observed no matter how many cycles of loading are applied.”  

In practical everyday terms, an example might be a reasonably fit person walking at a slow pace on a soft surface such as lush grass while carrying no weight: if that is the level of stress you’re placing on your muscles and joints, it is extremely unlikely that you will sustain any injuries no matter how long you walk.  This would be below the critical threshold.

On the other hand, if you’re above the critical threshold – doing some moderately intense activity – and you do it enough times (say, running every day for months without a break), you might eventually experience failure.  If you’re way above the critical threshold – for example, doing some intense weightlifting, or running really fast – merely a few cycles may cause damage.
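A toy model of my own (with made-up constants, not from the book) captures this nonlinearity – infinite life below the threshold, sharply decreasing cycles-to-failure above it:

```python
import math

# Hypothetical endurance limit, in arbitrary stress units (illustrative only).
ENDURANCE_LIMIT = 100.0

def cycles_to_failure(stress):
    """Toy fatigue model: infinite life at or below the endurance limit,
    power-law decay above it (all constants are made up)."""
    if stress <= ENDURANCE_LIMIT:
        return math.inf  # below the critical threshold: never fails by fatigue
    return 1e6 * (stress - ENDURANCE_LIMIT) ** -2

print(cycles_to_failure(90))   # → inf (the leisurely walk on soft grass)
print(cycles_to_failure(110))  # roughly 10,000 cycles (daily running)
print(cycles_to_failure(200))  # roughly 100 cycles (heavy weightlifting)
```

Small increases in stress above the threshold cut the lifespan dramatically – which is exactly why Petroski’s engineers care so much about where that threshold sits.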

Petroski notes that in most cases, it’s not reasonable (from a cost/practicality perspective) to:

“overdesign structures so that peak stresses never exceed the threshold level.”  

Thus, inspection and maintenance become important.  See margin of safety, and of course, “Rust: The Longest War.”

Stepping outside the book for a moment, other times, critical thresholds work in the opposite direction – i.e. everything below a certain threshold represents “failure” and everything above a certain threshold represents “success.”  One example is escape velocity for a rocket being launched into space; for a startup venture, it could be the point at which it is self-funding and thus sustainable.

Another example of a critical threshold is a nuclear chain reaction, as discussed in many places in “The Making of the Atomic Bomb” (TMAB review + notes) – for example, on page 436.

Pages 112 – 113: Petroski discusses “nondestructive testing,” or NDT – i.e. using various techniques to identify potential points of failure within a structure.  Jonathan Waldman discusses the example of “pigging” pipelines in the aforementioned Rust: The Longest War (Rust review + notes).  Publicly-traded companies such as Team (TISI) and Mistras (MG) provide these services; their websites and securities filings provide additional color for anyone interested.

Pages 114 – 116: Another example of critical thresholds: some metals can become brittle and crack below a certain temperature, which has applications for building ships or nuclear power plants.  The fuel icing discussed in Atul Gawande’s The Checklist Manifesto (TCM review + notes) is another interesting example.

Page 119: an example of structural problem solving at work is “leak-before-break,” or designed-in failure.  Petroski notes in the context of building a nuclear power plant that:

“if a certain type of ductile steel is used for the pipe wall, any crack that develops will grow faster through the wall of the pipe than in any other direction.  This ensures that a crack will cause a relatively small but detectable leak well before a dangerously long crack can develop.”

This is a fascinating concept.  Margin of safety x salience x structural problem solving.  I’m trying to think of other applications – perhaps intentionally spreading false information in the context of politics to figure out whether or not there are leaks/moles…

Page 122: Petroski here, very briefly, does something that I had wondered if anyone else had ever done: i.e., applied the concept of margin of safety to athletics.  Specifically, he quips that marathon runners pace themselves to finish:

“without an ounce of strength to spare, to run with virtually no factor of safety, to […] finish just as they are about to drop from fatigue.”

There’s obviously a tension between pushing limits, as athletes do, and putting yourself at risk of injury… I take the approach of preferring to be injury-free, almost always keeping myself below that “critical threshold” described earlier.  Many of my friends take the opposite approach.

Another reason why willpower is dose-dependent.

Pages 127 – 128: interesting discussion of flexure and knife blades; weak links in another context

Page 131: on making steel.  Reminds me a bit of the section on the history of steel in Waldman’s Rust.

Page 137: the ribbed structure of some water lilies means you can stand on them; the Crystal Palace for the Great Exhibition was conceived partially from an architectural design based on the lily’s pattern of ribs and crossribs.

Page 145: file away in the “science doesn’t have to be fancy” folder: to test the safety of the newfangled design of the galleries of the Crystal Palace, a section of the gallery was constructed just off the floor (so that nobody would get hurt), and then 300 workmen stood on top of it in various configurations, including jumping up and down… it didn’t move more than a quarter of an inch.

Page 151: on innovation, Petroski describes Paxton as being a little bit like the fictional Howard Roark or Henry Cameron:

“Because Paxton was not steeped in the traditions of either engineering or architecture, he approached design problems without any academically ingrained propensity for a particular structural or aesthetic style [… he] struck out in brilliant new directions that produced models for the architects and engineers of the next century,”

including the use of metal/glass, using outer walls that didn’t provide structural strength, and using modular/prefabricated units.

Culture x status quo bias.

Page 174: in discussing the failure of a semisubmersible (semisub) offshore oil rig, Petroski notes (again in an example of critical thresholds) that many structures can survive with fatigue cracks until the cracks reach a certain size, at which point the structure will suddenly just “let go.”

In this case, interestingly, the crack was known to have existed before the failure because it had been painted over.

Pages 178 – 180: again an example of bottlenecks – the de Havilland Comet suffered several instances of in-air disintegration due to unanticipated “high stresses associated with rivet holes near the window openings in the fuselage.”  

The plane was redesigned with reinforced window panels and thereafter flew safely.

Pages 180 – 181: a good quote on ideology and  status quo bias that applies to people who aren’t scientists, too:

“Technologists, like scientists, tend to hold on to their theories until incontrovertible evidence, usually in the form of failures, convinces them to accept new paradigms.”

Page 184: another good quote on the mindset of avoiding failure and not being too sure of a single cause – i.e., multicausality and inversion and scientific thinking and probabilistic thinking:

“finding the true causes of failure often take[s] as much of a leap of the analytical imagination as original design concepts.  And collective assent to a plausible but not incontrovertible explanation for a structural failure can allow further generic accidents as readily as can the collective but unsubstantiated belief by a design team that they have anticipated all possible means of failure.”  

Page 190: discussing slide rules and significant digits (in a way that is much more thoughtful/interesting than the usual academic treatment), Petroski notes that: 

“answers are approximations and should only be reported as accurately as the input is known, and, second, magnitudes come from a feel for the problem and do not come automatically from machines or calculating contrivances.”

See precision vs. accuracy. This reminds me of Feynman figuring out a solution qualitatively before figuring it out quantitatively – or value investors eschewing precise(ly wrong) academic valuation approaches in favor of a more thoughtful, practical real-world approach.

Pages 193 – 195: Petroski forecasts, correctly, that “the trend is clearly that eventually no engineer will own or use a traditional slide rule, but that practicing engineers of all generations will use – and misuse – computers.”

Petroski notes both the garbage-in, garbage-out phenomenon of modeling, as well as the problem with relying on black-box quantitative answers you don’t qualitatively understand:

“should there be an oversimplification or an outright error in translating the designer’s structural concept to the numerical model that will be analyzed through the automatic and unthinking calculations of the computer, then the results of the computer analysis might have very little relation to reality.  

And since the engineer himself presumably has no feel for the structure he is designing, he is not likely to notice anything suspicious about any numbers the computer produces for the design.”

In terms of cross-application, there is some similar discussion in Jerome Groopman’s “How Doctors Think” (HDT review + notes) regarding Bayesian reasoning and other topics; there was also a really clear example from a PM who is a friend of mine.  He once told me that one of the differences between him and the analysts who work, and have worked, for him is that he realizes that the output of his model is just a number – it’s related to the value of the company, sure, but it’s not the value.  Again, precision vs. accuracy.

Petroski also brings up the concept of optimization, which y’all know I love.

Pages 205 – 207: Petroski provides an interesting and useful list of many of the potential causes of structural failures (though, as he’d be careful to point out, certainly not the only types of possible failures.)

Page 214: in concluding on the causes of failures, Petroski mentions incentives – i.e., the human element – as well as Don Norman / Richard Thaler’s arguments around margin of safety x “Humans vs. Econs,” as factors worth considering, in a fashion not unlike Charlie Munger’s two-track analysis.  Petroski:

“books of case studies and lists of causes of failures do not easily incorporate this synergistic [attempt to deal explicitly with the human] element, yet the motives and weaknesses of individuals must ultimately be taken into account in any realistic attempt to protect society from the possibilities of major structural collapses.”

Page 218: back to tradeoffs

Page 224: on the Munger paradigm of rubbing your nose in your failures to learn from them


First Read: early 2018

Last Read: early 2018

Number of Times Read: 1


Review Date: early 2018

Notes Date: early 2018