Systems Thinking and Impact Analysis

Here's a confession: the first time you try to draw a causal loop diagram, it will probably look like spaghetti thrown at a wall. That's normal. Everyone's first CLD is a mess.

But here's the promise: with a little practice, these seemingly tangled diagrams become your best friends when tackling complex "wicked" problems. They reveal something profound—that you don't need brute force to solve problems. You can be clever. You can find leverage points where small changes create big impacts.

After collecting reliable data and learning to measure harm, we face the real challenge: understanding how everything connects. Industries don't cause harm in isolation. They operate within systems—webs of cause and effect, feedback loops, and delays that can amplify small problems into catastrophes or, if we're clever, transform small interventions into system-wide change.

This chapter introduces systems thinking—a way of seeing the world that reveals hidden connections, explains why problems persist despite good intentions, and shows where to push for maximum impact. At the heart of this approach are causal loop diagrams (CLDs)—visual tools that map how variables influence each other in circular, dynamic patterns.

By the end of this chapter, you'll be able to read CLDs, create your own, and use them to find those precious leverage points where minimal effort yields maximal change.

Why Systems Thinking Matters

The Limits of Linear Thinking

Most of us were trained to think linearly: A causes B, B causes C, problem solved. But real-world problems don't work that way.

Consider obesity. Linear thinking says: people eat too much, so they gain weight. Solution: tell people to eat less. But after decades of "eat less, move more" campaigns, obesity rates keep rising. Why?

Because obesity isn't a simple cause-and-effect chain—it's a system:

  • Food industry profits drive marketing of ultra-processed foods
  • Stress from economic insecurity triggers comfort eating
  • Neighborhood design discourages physical activity
  • Sleep deprivation (from overwork) disrupts metabolism
  • Social norms around portion sizes shift over generations
  • Healthcare costs from obesity reduce resources for prevention

Each factor influences the others in loops that reinforce the problem. Linear solutions fail because they address one piece while ignoring how the system pushes back.

What Systems Thinking Offers

Systems thinking provides a fundamentally different approach:

  • See interconnections rather than isolated causes
  • Understand feedback loops that amplify or dampen change
  • Recognize delays between actions and consequences
  • Identify leverage points where small changes matter most
  • Anticipate unintended consequences before they occur

The Systems Thinking Mindset

Instead of asking "What's the cause?" ask "What are the causes, and how do they connect?" Instead of "What's the solution?" ask "What interventions might shift the whole system?"

Complex Systems: The Arena We're Playing In

Before diving into tools, let's understand the terrain. Complex systems are collections of interconnected parts that behave in ways that can't be predicted by looking at the parts individually.

Characteristics of Complex Systems

Complex systems share several features:

  • Many components: Numerous interacting elements
  • Nonlinear dynamics: Small changes can have large effects (and vice versa)
  • Feedback loops: Effects loop back to influence their causes
  • Emergence: System behaviors arise that no individual part "contains"
  • Adaptation: Systems change in response to interventions
  • History dependence: Where you are depends on how you got there

Industries causing harm are embedded in complex systems. The tobacco industry isn't just companies selling cigarettes—it's farmers, advertisers, retailers, regulators, healthcare systems, social norms, and addiction pathways, all interacting dynamically.

System Boundaries

Every analysis requires drawing system boundaries—deciding what's "inside" the system you're studying and what's "outside." This choice matters enormously.

Draw boundaries too narrowly, and you miss crucial connections. Draw them too widely, and analysis becomes impossible.

For example, analyzing tobacco industry harm:

| Boundary | What's Included | What's Missing |
|----------|-----------------|----------------|
| Too narrow: Company only | Production, marketing, sales | Health effects, regulation, social norms |
| Appropriate: Industry system | Companies, regulators, healthcare, consumers, farmers | Global trade, other addictive industries |
| Too wide: Everything | All social, economic, political factors | Focus, actionability |

Boundary Choices Are Value Choices

Where you draw boundaries affects what you see as problems and solutions. Industry lobbyists draw narrow boundaries ("we just sell a legal product"). Public health advocates draw wider ones ("this system produces preventable death"). Be conscious of your boundary choices.

System Components and Interconnections

Within any system, we find system components—the individual elements—and interconnections—the relationships between them.

Components might include:

  • Organizations (companies, regulators, NGOs)
  • People (consumers, workers, executives, politicians)
  • Physical elements (factories, products, infrastructure)
  • Intangible elements (norms, beliefs, information, money)

Interconnections include:

  • Material flows (products, waste, resources)
  • Information flows (advertising, research, regulations)
  • Financial flows (payments, investments, taxes)
  • Influence relationships (lobbying, social pressure, authority)

The structure of interconnections—not just what's connected but how—determines system behavior.

Diagram: System Components Map

Run the Tobacco System MicroSim Fullscreen

<summary>Tobacco Industry System Components</summary>
Type: diagram

Purpose: Visualize the components and interconnections of the tobacco industry as a complex system

Bloom Taxonomy: Understand (L2)

Learning Objective: Students will understand how to identify system components and map their interconnections

Layout: Network diagram with categorized nodes and labeled connections

Node categories (color-coded):

1. INDUSTRY (Red)
   - Tobacco companies
   - Tobacco products
   - Advertising agencies
   - Retail outlets
   - Tobacco farmers

2. GOVERNMENT (Blue)
   - Regulatory agencies (FDA, etc.)
   - Tax authorities
   - Public health departments
   - Legislators

3. HEALTH (Green)
   - Healthcare systems
   - Insurance companies
   - Medical researchers
   - Treatment providers

4. SOCIETY (Yellow)
   - Consumers/smokers
   - Youth (potential new users)
   - Advocacy groups
   - Media

Connection types (line styles):
- Solid: Money flows
- Dashed: Information/influence
- Dotted: Products/materials
- Bold: Regulatory relationships

Key connections to show:
- Industry → Consumers (products)
- Industry → Legislators (lobbying money)
- Consumers → Healthcare (illness)
- Regulators → Industry (regulations)
- Researchers → Regulators (evidence)
- Industry → Researchers (funding - problematic)

Annotations:
- "Regulatory capture" near industry-regulator connection
- "Information asymmetry" near industry-consumer connection
- "Externalized costs" near consumer-healthcare connection

Interactive features:
- Click node to highlight all its connections
- Hover to see description of each component
- Toggle to show money flows only, influence only, etc.
- Highlight "problematic" vs "appropriate" connections

Implementation: vis-network JavaScript library

Emergence and Nonlinear Dynamics

Two concepts explain why complex systems surprise us: emergence and nonlinear dynamics.

Emergence

Emergence refers to system-level properties that aren't present in any of the individual parts. A single neuron can't think, but billions of connected neurons produce consciousness. A single person can't create traffic jams, but many drivers together produce them.

In harmful industries, emergent properties include:

  • Market dynamics: No single company "decides" market prices, but together they emerge
  • Social norms: No one person creates cultural attitudes toward smoking, but collectively they form
  • Systemic risk: Individual company failures can cascade into industry-wide crises
  • Resistance to change: Even when individuals want reform, the system resists

Understanding emergence means accepting that you can't always predict system behavior from parts, and that changing parts may not change the emergent behavior.

Nonlinear Dynamics

Nonlinear dynamics means that effects aren't proportional to causes. In linear systems, double the input gives double the output. In nonlinear systems, double the input might give ten times the output—or half, or nothing.

Examples in industry harm:

  • A small increase in nicotine levels might dramatically increase addiction rates
  • Slow pollution accumulation suddenly triggers ecosystem collapse
  • Gradual norm changes reach a tipping point and accelerate rapidly
  • Modest regulatory pressure causes industry to suddenly shift strategies

Nonlinearity is why prediction is hard but also why leverage points exist. If the system were linear, you'd need proportional effort for proportional results. Nonlinearity means small, well-placed interventions can achieve outsized effects.
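
To make "effects aren't proportional to causes" concrete, here is a minimal Python sketch. It uses a hypothetical Hill-type curve for addiction probability as a function of dose; the curve shape and every parameter are invented for illustration, not drawn from pharmacology.

```python
# Illustrative only: a steep Hill-type curve stands in for any nonlinear
# dose-response relationship. All parameter values are made up.

def addiction_probability(dose, half_max=1.0, steepness=6):
    """Hypothetical probability of addiction at a given relative dose."""
    return dose**steepness / (half_max**steepness + dose**steepness)

for dose in (0.3, 0.4, 0.9, 1.0, 1.1):
    print(f"dose {dose:.1f} -> probability {addiction_probability(dose):.2f}")

# The same +0.1 increase in dose adds almost nothing at low doses but adds
# roughly 0.15 near the threshold: the response is not proportional to the input.
```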

Feedback Loops: The Engine of System Behavior

Now we reach the heart of systems thinking: feedback loops. These circular cause-and-effect chains determine whether systems amplify change or resist it.

Understanding Feedback

A feedback loop occurs when the output of a process eventually influences its own input. You experience feedback constantly:

  • Thermostat: Room gets cold → heater turns on → room warms → heater turns off
  • Savings: Money earns interest → more money → earns more interest
  • Rumors: Story spreads → more people tell it → spreads faster

There are two fundamental types: reinforcing loops (also called positive feedback) and balancing loops (also called negative feedback).

Reinforcing Loops (Positive Feedback)

Reinforcing loops amplify change. Whatever direction the system is moving, reinforcing loops push it further in that direction.

Positive feedback doesn't mean "good"—it means "same direction." If something is growing, positive feedback makes it grow faster. If something is shrinking, positive feedback makes it shrink faster.

Notation: Reinforcing loops are marked with (R) and often spiral outward in diagrams.

Reinforcing Loop: Addiction

Nicotine Use → Tolerance → Need More Nicotine → Increased Use
      ↑                                              │
      └──────────────────────(R)─────────────────────┘
This is a vicious cycle: use leads to tolerance, which leads to more use. The "positive" feedback isn't positive for health—it amplifies the problem.

Balancing Loops (Negative Feedback)

Balancing loops resist change. They push systems toward equilibrium, counteracting disturbances.

Negative feedback doesn't mean "bad"—it means "opposite direction." If something rises, negative feedback pushes it back down. If something falls, negative feedback pushes it back up.

Notation: Balancing loops are marked with (B) and often appear as circles with a goal.

Balancing Loop: Market Correction

High Prices → Reduced Demand → Lower Sales → Price Cuts → Increased Demand
      ↑                                                          │
      └──────────────────────────(B)─────────────────────────────┘
This balancing loop keeps prices from rising indefinitely. It's "negative" feedback because rising prices trigger forces that bring them back down.
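
The contrasting behavior of the two loop types is easy to see in a few lines of Python. This is a minimal sketch with arbitrary growth and adjustment rates, not a model of any real market or addiction process.

```python
def reinforcing(x0=1.0, gain=0.10, steps=10):
    """Each step adds a fixed fraction of the current level (reinforcing loop)."""
    levels = [x0]
    for _ in range(steps):
        levels.append(levels[-1] * (1 + gain))
    return levels

def balancing(x0=10.0, goal=50.0, adjust=0.30, steps=10):
    """Each step closes a fraction of the remaining gap to the goal (balancing loop)."""
    levels = [x0]
    for _ in range(steps):
        gap = goal - levels[-1]
        levels.append(levels[-1] + adjust * gap)
    return levels

print([round(v, 1) for v in reinforcing()])  # accelerates away from the start
print([round(v, 1) for v in balancing()])    # settles toward the goal of 50
```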

Why This Matters for Harm

Industries that cause persistent harm typically have:

  • Reinforcing loops that amplify harm (addiction, marketing spending, political influence)
  • Weak or broken balancing loops that should limit harm but don't (captured regulators, information asymmetry, externalized costs)

Understanding these loops reveals why problems persist and where interventions might help.

Diagram: Feedback Loop Types Comparison

<summary>Reinforcing vs Balancing Loops</summary>
Type: infographic

Purpose: Clearly distinguish between reinforcing and balancing feedback loops with visual examples

Bloom Taxonomy: Understand (L2)

Learning Objective: Students will understand the difference between reinforcing and balancing loops and recognize each in system diagrams

Layout: Side-by-side comparison with animated examples

Left side: REINFORCING LOOPS (R)
- Visual: Spiral expanding outward (or inward for decline)
- Behavior: Amplifies change in same direction
- Keywords: "snowball effect," "vicious cycle," "virtuous cycle"
- Graph below: Exponential growth or decay curve
- Examples:
  * Bank interest compounding
  * Viral spread of information
  * Addiction cycles
  * Market bubbles
- Color: Red/Orange gradient

Right side: BALANCING LOOPS (B)
- Visual: Circle with arrows pointing toward center (goal)
- Behavior: Resists change, seeks equilibrium
- Keywords: "thermostat," "correction," "stabilization"
- Graph below: Oscillation toward steady state
- Examples:
  * Body temperature regulation
  * Supply and demand
  * Population limits
  * Budget constraints
- Color: Blue/Green gradient

Center comparison:
- Table showing key differences
- "Both can be beneficial or harmful depending on context"

Interactive features:
- Click examples to see detailed CLD
- Animation showing loop behavior over time
- Toggle to see real-world industry examples

Implementation: HTML/CSS/JavaScript with SVG animations

Delays in Systems

Delays in systems are time gaps between cause and effect. They're invisible in static diagrams but crucial for understanding why systems behave as they do.

Why Delays Matter

Delays cause several problems:

  • Overshoot: We keep pushing after we've achieved the goal because we don't see results yet
  • Oscillation: Systems swing back and forth as delayed feedback arrives too late
  • Invisibility: Long delays hide cause-and-effect relationships from perception
  • Discounting: People ignore delayed consequences because they feel abstract
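
The overshoot and oscillation problems listed above can be reproduced with a toy goal-seeking loop in which the decision maker reacts to the stock level as it was several steps ago. All numbers below are illustrative.

```python
# Toy model of a balancing loop with a perception delay. With no delay the
# stock settles smoothly at the goal; with a delay the controller keeps
# pushing after the goal is reached, so the stock overshoots and oscillates.

def goal_seeking(delay=0, goal=50.0, adjust=0.5, steps=30):
    history = [0.0]
    for _ in range(steps):
        # The controller only sees the stock as it was `delay` steps ago.
        perceived = history[max(0, len(history) - 1 - delay)]
        history.append(history[-1] + adjust * (goal - perceived))
    return history

print("no delay :", [round(x) for x in goal_seeking(delay=0)[:12]])
print("delay = 4:", [round(x) for x in goal_seeking(delay=4)[:12]])
```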

Delays in Harmful Industries

Consider the tobacco industry's delays:

| Delay | Duration | Consequence |
|-------|----------|-------------|
| Addiction onset | Days to weeks | By the time users realize they're addicted, it's hard to quit |
| Health effects | 10-30 years | Cancer appears decades after smoking starts |
| Epidemiological evidence | 20-50 years | Population-level patterns take generations to confirm |
| Regulatory response | 5-20 years | Policy lags evidence due to political process |
| Cultural change | 10-50 years | Social norms around smoking shift slowly |

These delays explain why tobacco caused harm for so long before society responded. The system had enormous delays between cause (marketing to youth) and visible effect (lung cancer deaths).

Delay Exploitation

Industries can exploit delays strategically. If harm takes 20 years to appear, companies can profit for decades before consequences materialize. Fossil fuel companies knew about climate change in the 1970s but funded doubt campaigns precisely because the delay bought time.

Stocks and Flows: The Structure of Accumulation

To understand how systems change over time, we need the concepts of stocks and flows.

Stock Variables

Stock variables are accumulations—quantities that build up or deplete over time. They're the "bathtub" in a bathtub analogy: you can see how much water is there at any moment.

Examples of stocks:

  • Population of smokers
  • Atmospheric CO₂ concentration
  • Company cash reserves
  • Public trust in an institution
  • Knowledge about health effects

Stocks change only through flows—they can't teleport from one level to another.

Flow Variables

Flow variables are rates of change—how fast stocks are increasing or decreasing. They're the "faucet" and "drain" in the bathtub analogy.

Examples of flows:

  • Rate of new smokers starting (inflow)
  • Rate of smokers quitting or dying (outflow)
  • Emissions per year (inflow to atmospheric CO₂)
  • Spending rate (outflow from cash reserves)

Accumulation and Depletion

Accumulation occurs when inflows exceed outflows—the stock grows. Depletion occurs when outflows exceed inflows—the stock shrinks.

This seems simple, but people consistently misjudge stock-and-flow dynamics. In studies, most people can't correctly predict how a bathtub's water level changes when inflow and outflow rates vary—even when shown the exact numbers.

The key insight: stocks create inertia. Even if you stop all inflows, depleting a large stock takes time. Even if you start positive flows, building a stock takes time.

Stock-Flow Thinking: Atmospheric Carbon

  • Stock: Total CO₂ in atmosphere (~420 ppm)
  • Inflow: Annual emissions (~40 billion tons)
  • Outflow: Annual absorption by oceans and plants (~20 billion tons)
  • Net: Accumulating ~20 billion tons per year

Even if we cut emissions by a quarter tomorrow, the stock keeps growing (just slower). To stabilize the stock, inflows must equal outflows. To reduce the stock, outflows must exceed inflows for many years.
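
The callout's arithmetic can be checked with a few lines of Python. The sketch below treats natural absorption as a constant 20 billion tons per year, which is a deliberate simplification of how carbon sinks actually behave.

```python
# Minimal stock-and-flow sketch using the chapter's rounded numbers.
# Stock is tracked in billions of tons (Gt) of accumulated CO2.

def simulate_stock(inflow_per_year, outflow_per_year=20.0, years=10, stock=0.0):
    levels = []
    for _ in range(years):
        stock += inflow_per_year - outflow_per_year  # accumulation = inflow - outflow
        levels.append(round(stock, 1))
    return levels

print("business as usual (40 Gt/yr):", simulate_stock(40.0))
print("cut by a quarter (30 Gt/yr) :", simulate_stock(30.0))
print("cut in half (20 Gt/yr)      :", simulate_stock(20.0))

# With inflow equal to outflow the stock stops growing but does not shrink;
# only an inflow below the outflow would draw the stock down.
```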

Diagram: Stocks and Flows MicroSim

<summary>Interactive Stock and Flow Simulator</summary>
Type: microsim

Purpose: Help students develop intuition for stock-and-flow dynamics through interactive simulation

Bloom Taxonomy: Apply (L3)

Learning Objective: Students will apply stock-and-flow concepts to predict and control system behavior

Canvas layout (800x500px):
- Top section (800x150): Title and scenario selector
- Center section (800x250): Visual stock representation with flow controls
- Bottom section (800x100): Time series graph of stock level

Visual elements:
- Bathtub/container graphic representing stock
- Water level showing current stock amount
- Left faucet with adjustable flow (inflow)
- Right drain with adjustable flow (outflow)
- Numerical displays showing current values
- Graph tracking stock level over time

Scenarios (selectable):

1. SIMPLE BATHTUB
   - Stock: Water in tub (0-100 gallons)
   - Inflow: Faucet (0-10 gal/min)
   - Outflow: Drain (0-10 gal/min)
   - Goal: Maintain level at 50 gallons

2. SMOKER POPULATION
   - Stock: Number of smokers (0-50 million)
   - Inflow: New smokers per year (0-5 million)
   - Outflow: Quitters + deaths per year (0-5 million)
   - Goal: Reduce to 10 million over 20 years

3. ATMOSPHERIC CARBON
   - Stock: CO₂ concentration (280-600 ppm)
   - Inflow: Emissions (0-50 GT/year)
   - Outflow: Natural absorption (fixed ~20 GT/year)
   - Goal: Stabilize at 450 ppm

4. CORPORATE TRUST
   - Stock: Public trust level (0-100%)
   - Inflow: Positive actions, transparency
   - Outflow: Scandals, broken promises
   - Goal: Maintain above 60%

Interactive controls:
- Sliders for inflow rate and outflow rate
- Play/Pause button for simulation
- Speed control (1x, 2x, 5x)
- Reset button
- Scenario dropdown

Challenges (progressive):
- Level 1: Match inflow and outflow to stabilize
- Level 2: Reach target level within time limit
- Level 3: Deal with random disturbances
- Level 4: Work with delays (outflow responds slowly)

Feedback:
- Real-time stock level display
- "Accumulating" or "Depleting" indicator
- Time to target estimate
- Success/failure message when goals met or missed

Learning messages:
- "Stocks create inertia—change takes time"
- "To stabilize, inflow must equal outflow"
- "Delays make control difficult—don't overcorrect"

Implementation: p5.js with animated fluid graphics

Dynamic Equilibrium and Tipping Points

Systems don't just change—they can exist in stable states, shift between states, or collapse entirely.

Dynamic Equilibrium

Dynamic equilibrium occurs when a system maintains a steady state through active balancing. It's not static—flows are still happening—but stocks remain constant because inflows and outflows match.

Examples:

  • Body temperature (constant despite heat gain and loss)
  • Market prices (stable when supply equals demand)
  • Ecosystem populations (stable when births equal deaths)
  • Social norms (stable when reinforcement equals erosion)

Equilibrium can be:

  • Stable: System returns to equilibrium after disturbance
  • Unstable: Small disturbance pushes system away from equilibrium
  • Metastable: Stable within a range, but large disturbance triggers shift

Tipping Points and Thresholds

A tipping point is a critical value where system behavior suddenly changes. Before the tipping point, the system might gradually change or resist change. After it, change accelerates or becomes irreversible.

Thresholds are the specific values that trigger tipping points.

Examples of tipping points:

  • Arctic ice: Below certain temperatures, ice reflects sunlight and stays cold. Above that threshold, melting creates open water that absorbs heat, accelerating warming.
  • Social norms: Smoking was normal until enough people changed that it became socially unacceptable.
  • Financial systems: Banks survive individual defaults, but above a threshold, failures cascade.
  • Ecosystems: Gradual pollution is absorbed, but past a threshold, the system collapses.
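
A tipping point can be sketched with a tiny bistable model. The update rule and the 0.5 threshold below are invented purely for illustration (loosely inspired by the ice-albedo example), not taken from any climate model.

```python
THRESHOLD = 0.5  # invented threshold between the two stable states

def step(x, pull=0.2):
    """Move the state a bit toward whichever equilibrium it currently 'belongs' to."""
    target = 1.0 if x > THRESHOLD else 0.0
    return x + pull * (target - x)

def settle(x0, steps=15):
    x = x0
    for _ in range(steps):
        x = step(x)
    return round(x, 2)

for x0 in (0.30, 0.45, 0.49, 0.51, 0.55):
    print(f"start {x0:.2f} -> settles near {settle(x0)}")

# Starts just below the threshold fall back toward 0; starts just above it
# run away toward 1. A tiny push across the threshold changes the long-run
# outcome completely.
```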

Finding Tipping Points = Finding Leverage

If you can identify where thresholds are, you know where small efforts might trigger large changes. Advocates work to push systems toward tipping points; industries try to prevent reaching them.

Resilience and System Collapse

Resilience is a system's ability to absorb disturbance and maintain function. Resilient systems:

  • Have multiple feedback loops (redundancy)
  • Maintain diversity (options for response)
  • Keep stocks at healthy levels (buffers)
  • Have moderate, not extreme, connectivity

System collapse occurs when disturbance exceeds resilience—the system can't recover and shifts to a degraded state.

Industries can reduce system resilience:

  • Monocultures in agriculture reduce pest resistance
  • Consolidated media reduces diversity of information
  • "Just-in-time" supply chains eliminate buffers
  • Deregulation removes feedback loops

Diagram: Tipping Points and Resilience

<summary>System States and Tipping Points</summary>
Type: diagram

Purpose: Visualize how systems exist in stable states and how they can tip into new states

Bloom Taxonomy: Analyze (L4)

Learning Objective: Students will analyze system states and identify conditions that could trigger tipping points

Visual metaphor: Ball-and-landscape diagram

Layout: 3D-style landscape with valleys (stable states) and hills (unstable points)

Elements:
- Landscape with multiple valleys (potential stable states)
- Ball representing current system state
- Arrows showing forces pushing ball
- Threshold lines at hilltops between valleys

States shown:
1. CURRENT STATE (left valley)
   - Label: "Status quo - harmful industry persists"
   - Depth indicates stability (deep = very stable)

2. THRESHOLD (hill between valleys)
   - Label: "Tipping point"
   - Shows critical threshold value

3. NEW STATE (right valley)
   - Label: "Reformed system - harm reduced"
   - Shows alternative stable state

4. COLLAPSE STATE (far right, low valley)
   - Label: "System breakdown"
   - Shows what happens if reform overshoots

Forces shown:
- Advocacy efforts (pushing ball toward threshold)
- Industry resistance (pushing ball back)
- Random disturbances (shaking the landscape)

Annotations:
- "Small push here makes no difference" (in deep valley)
- "Small push here triggers change" (near threshold)
- "Resilience = depth of valley"
- "Once past tipping point, change accelerates"

Interactive features:
- Drag ball to see how far it goes before returning
- Adjust "advocacy pressure" slider
- Adjust "industry resistance" slider
- Watch ball dynamics over time
- Click different scenarios (tobacco, climate, social media)

Color coding:
- Green valleys: Healthy stable states
- Red valleys: Harmful stable states
- Yellow hills: Thresholds

Implementation: HTML/CSS/JavaScript with physics simulation

Causal Loop Diagrams: Your New Best Friends

Now we arrive at the practical tool that puts all these concepts together: causal loop diagrams (CLDs).

What Are CLDs?

CLDs are visual maps of cause-and-effect relationships in a system. They show:

  • Variables: Things that can increase or decrease
  • Arrows: Causal relationships ("A influences B")
  • Polarity signs: Direction of influence (+ or -)
  • Loop labels: Whether loops are reinforcing (R) or balancing (B)
  • Delay marks: Where significant delays exist

A simple CLD might look like:

    Advertising ──(+)──> Smoking Rate ──(+)──> Health Costs
         ↑                  │      ↑                 │
     (+) │              (+) │      │ (-)         (+) │
         │                  ↓      │                 ↓
    Industry Revenue <──────┘      └──────────── Regulation
             (R)                         (B)

This diagram shows a reinforcing loop (R) where advertising increases smoking, which increases revenue, which funds more advertising. It also shows a balancing loop (B) where smoking creates health costs that eventually trigger regulation, which pushes smoking back down.

Reading CLDs: The Basics

Polarity signs indicate direction of influence:

  • (+) means "same direction": If A increases, B increases. If A decreases, B decreases.
  • (-) means "opposite direction": If A increases, B decreases. If A decreases, B increases.

Determining loop type: Trace around a loop and count the negative signs:

  • Even number of negatives (including zero) = Reinforcing loop (R)
  • Odd number of negatives = Balancing loop (B)
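
This counting rule is mechanical enough to put in a few lines of Python: multiplying the polarities around a loop leaves a positive product when the number of minus signs is even.

```python
def loop_type(polarities):
    """Classify a loop from the signs of its links: +1 = same direction, -1 = opposite."""
    product = 1
    for p in polarities:
        product *= p
    return "reinforcing (R)" if product > 0 else "balancing (B)"

# Advertising ->(+) Smoking ->(+) Revenue ->(+) Advertising
print(loop_type([+1, +1, +1]))   # reinforcing (R)

# Smoking ->(+) Health Costs ->(+) Regulation ->(-) Smoking
print(loop_type([+1, +1, -1]))   # balancing (B)
```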

The Polarity Trick

Don't think of (+) as "good" or (-) as "bad." Think of (+) as "same direction" and (-) as "opposite direction." This avoids confusion when dealing with harmful variables.

Your First CLD: A Practice Example

Let's build a simple CLD together for the fast fashion industry.

Step 1: Identify key variables

  • Consumer demand for new clothes
  • Production volume
  • Clothing prices
  • Worker wages
  • Environmental pollution
  • Consumer awareness of harm

Step 2: Draw causal relationships

Ask for each pair: "Does A influence B? In what direction?"

  • Consumer demand (+) → Production volume (more demand = more production)
  • Production volume (+) → Environmental pollution (more production = more pollution)
  • Production volume (-) → Clothing prices (more production = economies of scale = lower prices)
  • Clothing prices (-) → Consumer demand (lower prices = more demand)
  • Environmental pollution (+) → Consumer awareness (more pollution = eventually more awareness)
  • Consumer awareness (-) → Consumer demand (more awareness = less demand for fast fashion)

Step 3: Identify loops

  • Loop 1: Demand → Production → Lower Prices → More Demand — two negative links (even), so this is a reinforcing loop (R) driving growth
  • Loop 2: Demand → Production → Pollution → Awareness → Less Demand — one negative link (odd), so this is a balancing loop (B) limiting growth

Step 4: Add delays

  • Delay between pollution and awareness (years to decades)
  • Delay between awareness and behavior change

Step 5: Draw it out

Diagram: Fast Fashion CLD

<summary>Fast Fashion Industry Causal Loop Diagram</summary>
Type: causal-loop-diagram

Purpose: Demonstrate CLD construction with a real industry example

Bloom Taxonomy: Apply (L3)

Learning Objective: Students will apply CLD notation to map the fast fashion industry's feedback dynamics

Variables (nodes):
1. Consumer Demand
2. Production Volume
3. Clothing Prices
4. Worker Wages
5. Environmental Pollution
6. Consumer Awareness
7. Industry Profits
8. Marketing Spending

Causal links (arrows with polarity):
- Consumer Demand → (+) → Production Volume
- Production Volume → (+) → Industry Profits
- Production Volume → (-) → Clothing Prices
- Production Volume → (-) → Worker Wages (cost pressure)
- Production Volume → (+) → Environmental Pollution
- Clothing Prices → (-) → Consumer Demand
- Industry Profits → (+) → Marketing Spending
- Marketing Spending → (+) → Consumer Demand
- Environmental Pollution → (+) → Consumer Awareness (DELAY)
- Consumer Awareness → (-) → Consumer Demand

Loops to label:
- R1: "Growth Engine" (Demand → Production → Profits → Marketing → Demand)
- R2: "Price Competition" (Demand → Production → Lower Prices → More Demand)
- B1: "Awareness Brake" (Demand → Production → Pollution → Awareness → Less Demand)

Visual layout:
- Central cluster of core business variables
- Environmental/social variables on periphery
- Clear loop paths with R/B labels
- Delay marks (||) on slow connections

Color coding:
- Business variables: Blue
- Environmental variables: Green
- Social variables: Orange
- Arrows in reinforcing loops: Red
- Arrows in balancing loops: Blue

Interactive features:
- Hover over variable to highlight all connections
- Click loop label to animate the loop
- Toggle to show "with intervention" version
- Click to add new variables/connections

Implementation: D3.js or custom SVG with JavaScript

Building Your Own CLDs

Here's a step-by-step process:

  1. Define the problem you're trying to understand
  2. List key variables (aim for 5-15 to start)
  3. Draw initial connections asking "Does X influence Y?"
  4. Assign polarities (+/-) to each arrow
  5. Identify loops and label them R or B
  6. Mark delays where they're significant
  7. Test your logic by tracing through scenarios
  8. Revise and refine based on feedback
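
Once the links and polarities exist (steps 3-4), the loop identification in step 5 can be supported by a small script. The sketch below encodes the fast fashion links from earlier in this chapter as a directed graph and classifies each loop it finds; it assumes the third-party networkx library is installed.

```python
import networkx as nx  # third-party: pip install networkx

links = [  # (cause, effect, polarity) -- the fast fashion example from this chapter
    ("Demand", "Production", +1),
    ("Production", "Prices", -1),
    ("Prices", "Demand", -1),
    ("Production", "Pollution", +1),
    ("Pollution", "Awareness", +1),   # long delay in the real system
    ("Awareness", "Demand", -1),
    ("Production", "Profits", +1),
    ("Profits", "Marketing", +1),
    ("Marketing", "Demand", +1),
]

G = nx.DiGraph()
for cause, effect, polarity in links:
    G.add_edge(cause, effect, polarity=polarity)

for cycle in nx.simple_cycles(G):
    product = 1
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):  # walk the loop, closing it
        product *= G[a][b]["polarity"]
    label = "R" if product > 0 else "B"
    print(f"({label}) " + " -> ".join(cycle + [cycle[0]]))
```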

Common CLD Mistakes

  • Too many variables: Start simple, add complexity gradually
  • Vague variables: Use specific, measurable quantities
  • Missing loops: Every variable should connect to at least one loop
  • Wrong polarity: Double-check by asking "If A increases, what happens to B?"
  • Forgetting delays: Mark them—they explain system behavior

Diagram: CLD Builder MicroSim

<summary>Interactive CLD Builder</summary>
Type: microsim

Purpose: Provide hands-on practice building causal loop diagrams

Bloom Taxonomy: Create (L6)

Learning Objective: Students will create causal loop diagrams for industry systems, correctly identifying loop types and polarities

Canvas layout (900x600px):
- Left panel (200px): Variable palette and tools
- Center canvas (500px): Drawing area
- Right panel (200px): Loop analysis and feedback

Left panel elements:
- "Add Variable" button
- Variable name input field
- Category selector (Business, Environment, Social, Government)
- Premade variable chips for common factors
- Tool selector (Select, Connect, Delete)

Center canvas:
- Draggable variable nodes
- Click-and-drag to create connections
- Click connection to set polarity (+/-)
- Click connection to add delay marker
- Zoom and pan controls
- Grid background for alignment

Right panel elements:
- List of detected loops
- Loop type indicator (R/B) with explanation
- "Check My Work" button
- Feedback messages
- "Scenario Test" button
- Save/Load diagram options

Guided exercises:

1. TUTORIAL: Thermostat
   - Pre-placed: Room Temperature, Heater, Target Temperature
   - Guide student to complete the balancing loop

2. PRACTICE: Addiction Cycle
   - Pre-placed: Substance Use, Tolerance
   - Student adds: Withdrawal, Craving
   - Should identify reinforcing loop

3. CHALLENGE: Social Media Harm
   - Variables provided as chips
   - Student builds complete diagram
   - Multiple valid solutions accepted

4. FREE BUILD: Your Industry
   - Blank canvas
   - Student chooses industry to model
   - Peer review option

Feedback system:
- Green checkmark for correct polarity
- Yellow warning for potential issues
- Red X for definite errors
- Explanations for each feedback item
- "This loop is reinforcing because..."

Scoring (for exercises):
- Points for correct connections
- Points for correct polarities
- Points for loop identification
- Bonus for noting delays

Implementation: p5.js with graph data structure, or vis-network

Mental Models: The Maps in Our Heads

Behind every CLD is a mental model—an internal representation of how we think the world works. Mental models shape what we notice, how we interpret it, and what solutions we imagine.

Why Mental Models Matter

Everyone has mental models, but they're often:

  • Incomplete: Missing important variables or connections
  • Outdated: Based on past experience that no longer applies
  • Biased: Shaped by interests, ideology, or limited perspective
  • Invisible: We're often unaware of our own assumptions

CLDs make mental models visible. When you draw a CLD, you're externalizing your mental model so it can be examined, questioned, and improved.

Conflicting Mental Models

Different stakeholders often have different mental models of the same system:

| Stakeholder | Mental Model of Obesity |
|-------------|-------------------------|
| Individual | "I lack willpower" → Personal failure |
| Food industry | "People choose freely" → No industry responsibility |
| Public health | "Environment shapes behavior" → Policy intervention needed |
| Healthcare | "It's a disease" → Medical treatment required |
| Economics | "Market incentives are misaligned" → Change incentives |

None of these is completely right or wrong—each captures part of the truth. But they lead to very different interventions. CLDs can help integrate these perspectives into a more complete picture.

Improving Mental Models

To develop better mental models:

  • Draw them out: Externalize with CLDs so you can examine them
  • Seek disconfirming evidence: What would prove your model wrong?
  • Incorporate multiple perspectives: Whose model differs from yours?
  • Test against data: Does your model predict what actually happens?
  • Update continuously: Revise as you learn more

System Dynamics Models: CLDs Come Alive

System dynamics models take CLDs further by adding numbers. They transform qualitative loops into quantitative simulations that can be run over time.

From Diagram to Model

A system dynamics model specifies:

  • Initial stock values: Starting quantities
  • Flow equations: How flows depend on stocks and other variables
  • Parameter values: Constants and coefficients
  • Time horizon: How long to simulate

With these specifications, you can run "what if" scenarios: What happens if we increase the tobacco tax by 10%? What if we ban advertising? What if climate regulations are delayed by 20 years?
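
A rough version of such a "what if" run takes only a few lines, even without a dedicated modeling package. In the sketch below, the starting smoker population, the flow rates, and the assumed effect of a tax are all invented placeholders meant only to show the shape of a scenario run.

```python
def run_scenario(start_rate, quit_fraction, years=20, smokers=40.0):
    """Smoker stock (millions): inflow = new starts, outflow = fraction quitting or dying."""
    trajectory = []
    for _ in range(years):
        smokers += start_rate - quit_fraction * smokers
        trajectory.append(round(smokers, 1))
    return trajectory

baseline = run_scenario(start_rate=2.0, quit_fraction=0.05)
with_tax = run_scenario(start_rate=1.6, quit_fraction=0.06)  # assumed tax effect on both flows

print("baseline (every 5th year):", baseline[::5])
print("with tax (every 5th year):", with_tax[::5])
```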

Tools for System Dynamics

Popular tools include:

  • Vensim: Industry standard, free version available
  • Stella: User-friendly, widely used in education
  • AnyLogic: Multi-method modeling platform
  • Python (PySD): Open-source alternative
  • InsightMaker: Web-based, free, good for learning

System dynamics modeling is beyond the scope of this course, but understanding that CLDs can become quantitative simulations is valuable. It shows that systems thinking isn't just qualitative philosophy—it can make precise, testable predictions.

Finding Leverage Points: The Payoff

Everything in this chapter leads here: using systems understanding to find leverage points—places where small changes can produce large effects.

Why Leverage Points Matter

Most interventions fail because they push against the system's grain. They target symptoms rather than causes, or they strengthen balancing loops that resist change.

But systems have places where they're more susceptible to change. Find those places, and you can be clever instead of just working harder.

Donella Meadows, a pioneer of systems thinking, identified a hierarchy of leverage points (we'll explore this in depth in Chapter 7). For now, the key insight is:

The deeper you intervene in a system, the more leverage you have—but the harder the intervention is to achieve.

Leverage Points in Industry Harm

Looking at our CLDs, potential leverage points include:

Low leverage (but easier):

  • Adjusting numbers (prices, taxes, limits)
  • Slowing flows (consumption rates, emission rates)
  • Changing stock levels (regulations, reserves)

Medium leverage:

  • Adding or changing feedback loops (new regulations, transparency requirements)
  • Changing information flows (disclosure rules, public awareness)
  • Modifying delays (faster testing, quicker regulatory response)

High leverage (but harder):

  • Changing who has power (industry influence vs. public interest)
  • Changing goals (from profit maximization to stakeholder value)
  • Changing paradigms (how we think about the industry's role)

Start with the Diagram

Before designing interventions, map the system with a CLD. The diagram often reveals leverage points that weren't obvious. Look for:

  • Key reinforcing loops: Can you weaken harmful ones or strengthen beneficial ones?
  • Broken balancing loops: Can you repair or strengthen them?
  • Long delays: Can you shorten them or create early warning signals?
  • Missing feedback: Can you create feedback that's currently absent?

Case Study: Applying Systems Thinking to Tobacco

Let's see how systems thinking illuminates the tobacco industry and suggests leverage points.

The Core Loops

Reinforcing loops driving harm:

  • R1: Addiction loop: Nicotine use → tolerance → increased use
  • R2: Profit-marketing loop: Sales → profits → marketing → more sales
  • R3: Political influence loop: Profits → lobbying → weak regulation → continued profits

Balancing loops (often weakened):

  • B1: Health feedback loop: Smoking → illness → reduced smoking (delay: decades)
  • B2: Regulatory loop: Harm evidence → regulation → reduced harm (delay: years, weakened by lobbying)
  • B3: Social norm loop: Visible illness → social disapproval → reduced uptake (delay: generations)

Where Interventions Worked

Successful tobacco control targeted several leverage points:

| Intervention | System Effect | Leverage Level |
|--------------|---------------|----------------|
| Tobacco taxes | Increased prices, slowed sales flow | Low |
| Warning labels | Created information feedback | Medium |
| Advertising bans | Broke profit-marketing loop | Medium-High |
| Smoking bans | Changed social norms, created feedback | Medium-High |
| Litigation | Revealed hidden information, changed power | High |
| Denormalization | Shifted cultural paradigm | Very High |

The most effective approach combined multiple interventions hitting different leverage points simultaneously.

Lessons for Other Industries

This analysis suggests a template for tackling other harmful industries:

  1. Map the system with CLDs
  2. Identify reinforcing loops driving harm
  3. Find weakened balancing loops that should limit harm
  4. Look for delays that hide consequences
  5. Design interventions at multiple leverage points
  6. Anticipate system response (industries will push back)
  7. Build coalitions across different stakeholder mental models

Key Takeaways

Let's consolidate the wisdom of this chapter:

  1. Think in systems: Linear cause-and-effect thinking misses crucial feedback dynamics. Problems persist because of loops, not just chains.

  2. Draw it out: CLDs externalize mental models, making them visible, testable, and shareable. Your first CLD will be messy—that's fine. Keep practicing.

  3. Understand loop types: Reinforcing loops amplify change; balancing loops resist it. Harmful industries have strong reinforcing loops and weakened balancing loops.

  4. Respect delays: Time gaps between cause and effect explain why problems persist and why prediction is hard. Mark delays in your diagrams.

  5. Recognize stocks: Accumulations create inertia. Even good policies take time because you must change flows to change stocks.

  6. Find leverage points: Small changes in the right places matter more than large changes in the wrong places. CLDs help you find those places.

  7. Embrace wicked problems: Systems thinking doesn't make complex problems simple—but it makes them tractable. CLDs are tools for tackling problems that seem impossible.

Chapter Summary

Systems thinking transforms how we understand and address industry harm. Instead of looking for single causes and simple solutions, we see interconnected loops that amplify harm or resist change. Causal loop diagrams make these invisible structures visible, revealing leverage points where clever interventions can achieve more than brute force.

Yes, CLDs are tricky at first. Your early attempts will look like spaghetti. But with practice, these diagrams become trusted allies in tackling wicked problems. They reveal why problems persist despite good intentions and show where to push for maximum impact.

The key insight: you don't need to overpower complex systems—you need to understand them. Understanding reveals leverage points. Leverage points reveal opportunities. And opportunities, well-chosen and well-executed, can transform harmful industries into forces for good.

In the next chapter, we'll explore system archetypes—recurring patterns that appear across many different systems. Once you learn to recognize these patterns, you'll see them everywhere—and you'll know the typical leverage points for each.

Reflection Questions

1. Think of a problem you've tried to solve that keeps coming back. Can you identify any feedback loops that might explain its persistence?

Consider personal, organizational, or social problems. What reinforcing loops amplify the problem? What balancing loops should limit it but don't? Where are the delays?

2. Why might different stakeholders in the same system have very different mental models of how it works?

Consider how position, interests, training, and access to information shape understanding. How would a tobacco executive's mental model differ from a lung cancer patient's?

3. If you could strengthen one balancing loop in a harmful industry you care about, which would you choose and why?

Think about missing feedback, broken regulation, hidden information. What feedback should exist but doesn't?

4. What makes some leverage points more powerful but harder to achieve than others?

Consider the difference between changing parameters versus changing goals versus changing paradigms. Why do deeper interventions face more resistance?

Learning Outcomes

By the end of this chapter, you should be able to:

  • Explain the difference between linear and systems thinking
  • Identify system components, boundaries, and interconnections
  • Distinguish between reinforcing and balancing feedback loops
  • Recognize the role of delays in system behavior
  • Apply stock-and-flow concepts to understand accumulation
  • Create basic causal loop diagrams for industry systems
  • Identify potential leverage points from system diagrams
  • Recognize how mental models shape understanding and action

Next Steps

In the next chapter, we'll explore system archetypes and root cause analysis. Archetypes are recurring patterns—like "Tragedy of the Commons" or "Shifting the Burden"—that appear across many different systems. Once you learn these patterns, you'll recognize them in industry after industry, and you'll know the typical leverage points for each.

Your CLDs are about to get even more powerful.


Concepts Covered in This Chapter

This chapter covers the following 26 concepts from the learning graph:

  1. Systems Thinking
  2. Complex Systems
  3. System Boundaries
  4. System Components
  5. Interconnections
  6. Emergence
  7. Nonlinear Dynamics
  8. Feedback Loops
  9. Positive Feedback
  10. Negative Feedback
  11. Reinforcing Loops
  12. Balancing Loops
  13. Delays in Systems
  14. Stocks and Flows
  15. Stock Variables
  16. Flow Variables
  17. Accumulation
  18. Depletion
  19. Dynamic Equilibrium
  20. Tipping Points
  21. Thresholds
  22. Resilience
  23. System Collapse
  24. Causal Loop Diagrams
  25. System Dynamics Models
  26. Mental Models

Prerequisites

This chapter builds on concepts from: