Learning Potential / Utility: ★★★ (3/7)
Readability: ★★★★★★ (6/7)
Challenge Level: 1/5 (None) | ~180 pages ex-notes (240 pages official)
Blurb/Description: Donella Meadows is probably pretty smart, but this book is little more than an extended glossary and won’t teach you much about how to practically think about systems unless you’ve literally read nothing else on the topic.
Summary: It’s important to differentiate between someone being smart and good at what they do, and someone’s book being interesting and useful to readers. For example, Daniel Kahneman has done some top-notch research on cognitive biases, but I die a little inside every time I hear someone recommend Thinking, Fast and Slow, which is a uniformly terrible and tedious book relative to the plethora of engaging reading options on the topic of cognitive biases, like How Doctors Think or Misbehaving, to start with.
In this case, Thinking in Systems leaves me very disappointed: it’s clear that Donella Meadows is thoughtful and knows her stuff when it comes to systems thinking, but the book suffers from a number of major flaws (detailed in “Lowlights” below) that rendered it unhelpful to me. Summarily, it felt like little more than an extended glossary or Wikipedia page: it hit a lot of the general ideas in systems thinking, but didn’t provide sufficient depth or concrete examples to drive the points home, leaving me feeling no more educated about the topic than I did when I started reading. Readers should instead turn to a book like Geoffrey West’s Scale, which – despite being substantially longer – will teach you a ton.
Also, because it doesn’t fit neatly into “Lowlights” or “Highlights,” it’s worth mentioning that the book actually has far less of an enviro-hippie tone than you might be led to believe from the publisher’s suggested reading and so on. Donella Meadows is clearly an environmentalist, and vaguely Malthusian and Marxist lines of thinking do pervade the book, but there are two caveats.
First, it’s almost always implicit rather than in-your-face, with a couple of laughable exceptions: Meadows advocates for negative GDP growth as a good thing, and claims that the invention of interest and discount rates incentivized short-termism. Far from it; financial discount rates are far lower than the hyperbolic rates we humans use implicitly anyway.
Second, she’s welcome to have her own beliefs so long as they don’t interfere with the topic at hand; in fact, exposing myself to “disconfirming” beliefs is something I strive to do from time to time to keep my eyes open to possible flaws in my own understanding of the world. Meadows’ politics, while I may disagree with them, do not translate into bias in the book. Unlike, for example, Siddhartha Mukherjee allowing his social-justice views to slant The Gene’s presentation of topics like IQ in a confusing and even misleading way, Meadows’ politics can be easily separated from her process: she and I can completely see eye to eye on how systems work and the best way to deal with certain systemic problems, while completely disagreeing on what the goals of those interventions should be. And that’s perfectly okay. So while there are many reasons not to read this book, the author’s politics are not one of them, and shouldn’t put you off if you’re on the fence.
Highlights: If you’re looking for a handy glossary, this might be it.
Lowlights: The major flaws here can be summarized in three categories.
First, Meadows doesn’t do a very good job of concretizing either the systems concepts or her suggested interventions; other than a few brief examples on predictable topics (like deforestation and overfishing, or the war on drugs), she prefers to paint in very broad strokes rather than providing specific, concrete examples of this kind of thinking in action.
Second, there’s not a lot of depth to the book: while she clearly knows what she’s talking about, the book feels like an extended glossary, or a Wikipedia summary, rather than a thorough, thought-provoking book about how to practically think about systems and make decisions accordingly. It’s rare that I think books should be longer rather than shorter, but I feel like Thinking in Systems is an unhappy medium between “too long to be a survey course” and “too short to be intellectually substantial.” While the book may have value to first-time readers who’ve never encountered the idea of a feedback loop before, anyone who’s reasonably well-read probably won’t come across a ton of new or fresh insights in the book.
Third and finally, I think – ironically – that some of the assumptions or techniques are a little reductionistic for the real world. For example, it’s not clear to me that every type of system can be modeled so simply based on “stocks and flows” – much of the “stocks” bit, particularly in an economic sense, smacks of last-century thinking from before intangible assets became, in many instances, the most valuable ones driving our economy. Even in a less economic context, it’s difficult for me to translate this kind of thinking to, for example, how social media works: while items like “self-esteem” or “reputation/status” could conceivably be the stocks that flow, it’s not immediately clear that they’re that quantifiable or one-to-one; in the vein of Covey’s win-win, many human traits are abundant rather than scarce. It’s certainly possible that Meadows’ approach is applicable more broadly than I immediately see potential for, but the book’s age (and lack of depth) made it hard to take away much of use.
Mental Model / ART Thinking Points [link to models and ARTs]: ethics of war, scale effects,
Instead, you should read: The most comparable book would be Geoffrey West’s Scale, a fantastic book that analyzes the scale properties of various systems.
However, Scale requires some previous reading, I think; newer readers might start with other books like Howard Marks’ The Most Important Thing, Eli Goldratt’s The Goal, Siddhartha Mukherjee’s The Gene, Don Norman’s The Design of Everyday Things, and Richard Thaler’s Misbehaving and Nudge.
If you’ve spent much time on this site, you are probably aware of my quasi-vendetta against Mukherjee’s pervasive bias in The Gene, but it still represents a great discussion of how feedback works in biological systems. Meanwhile, The Goal covers topics like local vs. global optimization and bottlenecks in far more depth and concreteness than Thinking in Systems, and The Most Important Thing provides a lot of implicit systems thinking and discussion of feedback loops as well. Finally, DoET and Misbehaving/Nudge are great systems books in their own right.
Reading Tips: don’t read the book to begin with. Besides that, if you do choose to read it, it’s short enough that you don’t need to read it in any specific way.
Reread Value: 1/5 (None)
More Detailed Notes + Analysis (SPOILERS BELOW):
Please remember: these notes were created primarily for my own personal reference and are not intended to be an abstract or summary of the book; in other words, they don’t substitute for reading the book, and most of their content will not make sense without the broader context of the book. These simply represent some of the points that I found interesting / thought-provoking / related to other material that I’ve learned from.
I share them for a few reasons: first, and primarily for those who’ve read the book, hopefully these will serve as some thought-provoking marginalia as well as a “refresher course” on some of the concepts if it’s been a while since you’ve read the book. Second, in more limited circumstances, if you haven’t read the book but have seen it referenced in one of the mental models or other pages in Poor Ash’s Almanack, you may find the notes to be a useful “information bridge” (albeit a very temporary/rickety one) until you’re able to read the book yourself.
Pages 4 – 5: Donella Meadows generally has the same viewpoint as Don Norman; she’s just not as deep or insightful in the way she presents it. Her view of the world is that many large-scale problems “persist in spite of the analytical ability and technical brilliance that have been directed toward eradicating them. No one deliberately creates those problems, no one wants them to persist, but they persist nonetheless. That is because they are intrinsically systems problems.” Unfortunately, this is almost the high point of the book, and not a ton of useful information is provided thereafter…
Pages 11 – 14, 16: Meadows defines a system as “an interconnected set of elements that is coherently organized in a way that achieves something.” She answers the Feynman question about what we are if our atoms are recycled rapidly with a discussion similar to that of the analogy about the boat and the planks in Sam Kean’s The Violinist’s Thumb.
In her “stocks and flows” model, Meadows notes that stocks don’t have to be physical, but doesn’t do a great job of providing concrete examples of such.
Page 18: Meadows focuses her discussion on what seems like a fairly narrow “stock and flow” model – a stock represents some quantity of something; flows can be inflows or outflows. Like water into and out of a lake or bathtub.
Page 21: dynamic equilibrium is defined as when inflows equal outflows and the system’s level is stable.
Page 22: She makes the fairly obvious point that the stock will grow if inflows exceed outflows and vice versa; more interesting is the example of inversion discussed at the bottom of the page – reducing outflows is thus as good as increasing the stock; for example, via energy efficiency.
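The bathtub model and the inversion point above are simple enough to express directly. Here’s a minimal sketch (my own illustration, not code from the book; all numbers are hypothetical) showing that a stock changes only through its inflow and outflow, that equal flows produce dynamic equilibrium, and that cutting the outflow is equivalent to raising the inflow:

```python
# Toy sketch of Meadows' bathtub/stock-and-flow model (my own illustration).

def simulate_stock(initial, inflow, outflow, steps):
    """Evolve a stock given constant per-step inflow and outflow rates."""
    stock = initial
    history = [stock]
    for _ in range(steps):
        stock += inflow - outflow   # the only way a stock can change
        history.append(stock)
    return history

# Dynamic equilibrium: inflow equals outflow, so the level is stable.
assert simulate_stock(100, 5, 5, 10)[-1] == 100

# Inversion: reducing the outflow by 2 has exactly the same effect on the
# stock as increasing the inflow by 2 (energy efficiency vs. new supply).
assert simulate_stock(100, 5, 3, 10) == simulate_stock(100, 7, 5, 10)
```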
Page 23: She notes that given that flows take time, changes also take time.
Page 24: Stocks, like inventory, act as a buffer.
Page 25: Feedback occurs “when changes in a stock affect the flows into or out of that same stock.”
Page 29/31: feedback loops can be “balancing” or “reinforcing” – a balancing feedback loop is one that inhibits itself (for example, most biological processes). Reinforcing feedback loops are like arms races.
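The balancing/reinforcing distinction can be made concrete in a few lines. A sketch (my own, with hypothetical numbers): a balancing loop’s flow is proportional to the gap from a target, so it damps its own deviation, while a reinforcing loop’s flow is proportional to the stock itself, so growth feeds more growth:

```python
# Toy illustration (mine, not from the book) of the two loop types.

def balancing_step(stock, target, gain=0.5):
    # Flow proportional to the gap: the loop inhibits its own deviation,
    # like a thermostat or most biological homeostasis.
    return stock + gain * (target - stock)

def reinforcing_step(stock, rate=0.10):
    # Flow proportional to the stock itself: growth begets growth,
    # like compound interest or an arms race.
    return stock * (1 + rate)

temp = 60.0
for _ in range(20):
    temp = balancing_step(temp, target=70.0)
assert abs(temp - 70.0) < 0.01   # converges on the target

arms = 100.0
for _ in range(20):
    arms = reinforcing_step(arms)
assert arms > 600                # 100 * 1.1**20, growing without bound
```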
Page 32: Meadows shows how exponential growth works.
Page 33: Meadows discusses the “rule of 72” (she uses 70 instead).
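For the curious, the rule works because the exact doubling time for growth rate r is ln(2)/ln(1+r) ≈ 0.693/r, i.e., roughly 70 divided by the rate in percent. A quick sketch (my own check, not from the book):

```python
import math

# Why the "rule of 70" works: doubling time = ln(2)/ln(1+r) ~ 70 / (r in %).

def exact_doubling_time(rate):
    """Periods needed for a quantity growing at `rate` per period to double."""
    return math.log(2) / math.log(1 + rate)

def rule_of_70(rate_percent):
    return 70 / rate_percent

for pct in (1, 2, 5, 7):
    exact = exact_doubling_time(pct / 100)
    approx = rule_of_70(pct)
    assert abs(exact - approx) / exact < 0.03   # within ~3% at modest rates
```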
Pages 38B – 39T: Don Norman has discussed the bad design of things like thermostats…
Page 39: Meadows points out that delays mean that “a person in the system who makes a decision based on the feedback can’t change the behavior of the system that drove the current feedback; the decisions he or she makes will only affect future behavior.” Thus, it’s important to know how long it will take for changes to show up.
Pages 44 – 45: Meadows discusses the idea of “dominance” – essentially, when you don’t have dynamic equilibrium, one loop is dominating another.
Pages 53 – 57: In one of the few useful concrete examples in the book, Meadows discusses the example of a car dealer trying to balance inventory. The sort of statistical fluctuations discussed by Eli Goldratt’s Jonah in The Goal lead to big fluctuations; these are counterintuitively solved by using longer rather than shorter average periods.
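The counterintuitive fix above is easy to demonstrate. A toy model (mine, not Goldratt’s or Meadows’ actual example; demand distribution and window sizes are hypothetical): a dealer who perceives demand as a moving average of recent sales sees far smaller swings when the averaging window is longer:

```python
import random

# Toy illustration (mine): longer averaging periods damp the perceived
# fluctuations in noisy demand that whipsaw a dealer's ordering.

def order_swings(window, days=500, seed=42):
    random.seed(seed)
    sales = [random.randint(15, 25) for _ in range(days)]   # noisy demand
    perceived = []
    for day in range(window, days):
        avg = sum(sales[day - window:day]) / window         # perceived demand
        perceived.append(avg)
    return max(perceived) - min(perceived)                  # spread of signal

# A 20-day average swings far less than a 2-day average of the same sales.
assert order_swings(20) < order_swings(2)
```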
Page 59: nothing goes on forever; as some investors say, “trees don’t grow to the sky.” Meadows points out that any physical, growing system will “run into some kind of constraint, sooner or later.” She differentiates renewable from nonrenewable constraints, stocks, etc.
Page 64: another example of inversion: in discussing resource extraction curves, Meadows notes that technology bringing recovery costs down results in the same behavior as prices going up.
Pages 66 – 71: Meadows discusses overfishing; cod is a good example of this.
Page 76: The concept of “resilience” is discussed – a “measure of a system’s ability to survive and persist within a variable environment”
Pages 79 – 81: self-organization is defined as “the capacity of a system to make its own structure more complex,” but Meadows doesn’t deign to provide any useful explanation other than citing obvious examples like DNA.
Pages 82 – 84: “hierarchy” is what you think it is… systems composed of smaller systems.
Page 85: Meadows discusses “suboptimization” – a subsystem’s goals overriding the total system’s goals. A better way of phrasing it is local vs. global optimization, discussed much better in The Goal.
Pages 86 – 87: Meadows uses the term “mental model” somewhat differently than we would use it, but it’s still a conception of the world… “all models are wrong, some models are useful,” etc. Still nothing uniquely insightful here.
Page 89: Howard Marks discusses the concept of second-level thinking in The Most Important Thing far better than Meadows does here.
Pages 91 – 94: Meadows very briefly discusses nonlinearity, in no particular depth.
Page 95!: Here is one bit of the book that is genuinely insightful: stepping away from her narrow and not-well-explained models, Meadows points out that those models’ boundaries “rarely mark a real boundary, because systems rarely have real boundaries. Everything, as they say, is connected to everything else, and not neatly.” Again, though, she doesn’t take this point to any particularly helpful concrete conclusions… if you want a good read on this topic, Richard Thaler’s Misbehaving provides a lot of examples of economists artificially drawing boundaries around their own field of study. Thomas Kuhn’s The Structure of Scientific Revolutions might be worth a look here too.
Pages 97! – 99!: Again, the one insightful/useful takeaway here is that it can be a problem to do too little (i.e., overly narrowly define your problem) or to try to do too much (i.e., try to solve life, the universe, and everything).
Page 101: back to mediocrity: Meadows mentions the concept of a “limiting factor” (i.e. a bottleneck or limiting reagent), but provides little to no useful discussion thereof.
Page 103: Meadows, via Jay Forrester – however long you think the delay in a system is, multiply by three. (A bit like my rule of thumb about turnarounds: they take twice as long and cost twice as much as you expect…)
Pages 106 – 107: she brings up, again in not much depth, the concept of bounded rationality and satisficing. Again, Misbehaving is the book to read here.
Page 109: One useful anecdote: information availability can change behavior. There’s actually a whole industry called “demand response” that applies behavioral economics to energy efficiency.
Page 111: again, a nugget of insight that isn’t expanded on: “the world is nonlinear. Trying to make it linear for our mathematical or administrative convenience is not usually a good idea even when feasible, and it is rarely feasible.”
Pages 113B – 114T: arms races exist…
Page 114B: not sure her solution to arms races is that helpful
Pages 116 – 117: the tragedy of the commons, an incentives problem, is discussed, though not in a particularly unique way.
Pages 124 – 126: arms races exist
Page 128: This is one of the few places where “vaguely” Malthusian/Marxist thinking becomes explicit… oddly enough, though, Peter Thiel and Meadows are on the same page (partially) here: they both agree that “market competition systematically eliminates market competition.” They just disagree whether or not that’s a good thing.
Pages 131 – 133: Local vs. global optimization problems sort of discussed here. Just not well.
Pages 136 – 137: “gaming” the system is discussed here. Again, way better discussions available elsewhere… Munger, for example.
Page 138: this I can agree with: systems produce what their goals specify; if the goals are wrong, productivity toward them is irrelevant. But, again, this is non-unique and non-insightful; The Goal is literally about this.
Pages 145 – 146: a “leverage point” is somewhere where low effort can lead to high impact
Pages 149 – 150: buffers can stabilize systems, but can also cause slow reactions and inflexibility
Page 156: one of the examples of Meadows and I agreeing on how a system works, but disagreeing on the proper ramifications. Cutting successful people off at the knees is not appropriate behavior.
Pages 156 – 157: information is useful
Pages 162 – 164: paradigms lead to goals. She references Kuhn.
Pages 171 – 172: she advocates studying systems longitudinally and exposing assumptions to “the light of day”
Page 176: “pretending something doesn’t exist if it’s hard to quantify leads to faulty models” – again, see Misbehaving
Page 182: The “negative GDP growth is good!” comment is not the dumbest thing in this book. The dumbest thing is this:
“One of the worst ideas humanity ever had was the interest rate, which led to the further ideas of payback periods and discount rates, all of which provide a rational, quantitative excuse for ignoring the long term.”
Clearly Meadows has never heard of hyperbolic discounting. Interest rates and typical investment discount rates actually wildly incentivize thinking long-term relative to normative human behavior in the absence of financial thinking. >_<
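The comparison is stark when you put numbers on it. A sketch (my own, with hypothetical but plausible parameters: an 8% financial hurdle rate versus a hyperbolic discount curve of the form 1/(1+kt) with k = 1/year, the standard functional form in the behavioral literature):

```python
import math

# Toy comparison (mine): financial vs. hyperbolic discounting. The
# hyperbolic curve devalues the near future far more steeply, so financial
# discount rates are the *less* short-termist of the two.

def exponential_discount(t, rate=0.08):   # hypothetical 8% hurdle rate
    """Present value of $1 received t years out, financially discounted."""
    return math.exp(-rate * t)

def hyperbolic_discount(t, k=1.0):        # hypothetical k = 1/year
    """Present value of $1 under the standard hyperbolic form 1/(1+kt)."""
    return 1 / (1 + k * t)

# One year out, the hyperbolic discounter values $1 at 50 cents, while
# the financial discounter still values it at ~92 cents.
assert hyperbolic_discount(1) == 0.5
assert exponential_discount(1) > 0.9
assert hyperbolic_discount(1) < exponential_discount(1)
```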
First Read: early 2018
Last Read: early 2018
Number of Times Read: 1
Review Date: early 2018
Notes Date: early 2018