
Data-Driven Ethics and Systems Change FAQ

Frequently asked questions about the Data-Driven Ethics and Systems Change course, organized by topic to help students navigate the material and deepen their understanding.


Getting Started

What is this course about?

This course transforms traditional ethics education by integrating data science, systems thinking, and advocacy strategies to address the most harmful industries and behaviors in modern society. Rather than focusing on abstract philosophical debates, you'll learn to measure harm, analyze systemic causes, identify leverage points, and design evidence-based interventions. The course examines industries like tobacco (7-8 million deaths annually), fossil fuels (8 million deaths from air pollution), and ultra-processed foods (11 million diet-related deaths) to understand how concentrated harm can be systematically reduced.

Who is this course designed for?

The course is designed for college students with some background in data science who want to apply quantitative methods to ethical challenges. Prerequisites include STATS 201 or DATA 101 (or instructor permission). You should have basic skills in statistics and data visualization, critical thinking and analytical writing, comfort with ambiguity and complex systems, and a commitment to evidence-based problem solving. The material is accessible to anyone with foundational data literacy who wants to create measurable positive change.

What will I learn by the end of this course?

By completing this course, you'll be able to: identify and quantify harm using frameworks like DALYs, social cost accounting, and life-cycle analysis; gather unbiased data from government, academic, and NGO sources; analyze root causes using systems thinking tools including causal loop diagrams and system archetypes; identify high-leverage intervention points using Donella Meadows' framework; design advocacy campaigns incorporating behavioral economics and policy design principles; and produce an evidence-based capstone project proposing systemic reform for a chosen industry.

How does data-driven ethics differ from traditional ethics education?

Traditional moral philosophy asks "What should we do?" and relies on philosophical arguments about principles. Data-driven ethics adds an empirical dimension by asking "What works to reduce harm?" It combines philosophical reasoning with quantitative measurement to enable evidence-based prioritization. While traditional ethics debates individual cases, data-driven ethics analyzes systemic patterns. While traditional ethics produces theoretical knowledge, data-driven ethics generates actionable insights backed by data. Both approaches are valuable—data-driven ethics doesn't replace moral philosophy but gives it better information to work with.

What are the prerequisites for this course?

The formal prerequisites are STATS 201 or DATA 101 (or instructor permission). More practically, you need: basic statistics knowledge (means, distributions, correlations, confidence intervals), familiarity with data visualization, critical thinking and analytical writing skills, and comfort working with ambiguous, complex problems. If you can read a research paper, interpret a chart, and construct a logical argument, you have the foundation needed for this course.

How is the course structured?

The course runs 14 weeks with the final 4 weeks dedicated to the capstone project. Content is organized into three major parts: Part I covers data-driven ethics methodology and measuring harm; Part II examines systems thinking and root cause analysis; Part III focuses on leverage points and advocacy for change. Assessment includes industry harm scorecards (20%), systems mapping assignments (20%), leverage point analysis (20%), advocacy strategy design (15%), and capstone project (25%).

What career paths does this course prepare me for?

This course prepares you for careers in corporate social responsibility and sustainability, public policy and regulatory analysis, non-profit advocacy and social impact work, data journalism and investigative reporting, and management consulting and strategic planning. The combination of quantitative analysis skills with ethical reasoning and advocacy strategies is increasingly valued across sectors as organizations face pressure to address their social and environmental impacts.


Core Concepts

What is a DALY and why does it matter?

A DALY (Disability-Adjusted Life Year) measures the total burden of disease by combining years of life lost to premature death (YLL) with years lived with disability (YLD). One DALY represents one lost year of healthy life. The formula is simple: DALY = YLL + YLD. DALYs revolutionized global health by allowing comparison across very different problems—is malaria worse than road traffic injuries? Is tobacco worse than alcohol? DALYs provide a common currency for comparison, enabling smarter resource allocation by identifying which interventions save the most healthy life-years per dollar spent.

What is the difference between mortality rate and morbidity rate?

Mortality rate measures deaths in a population over a specific time period—it's the bedrock of harm measurement because death is clear, countable, and unambiguous. Morbidity rate measures the incidence or prevalence of disease and disability. Incidence refers to new cases arising during a time period; prevalence refers to total cases existing at a point in time. Both metrics are essential because many harms don't kill quickly—they reduce quality of life for years or decades. Chronic conditions like diabetes, respiratory disease, and mental health disorders represent enormous suffering that mortality statistics alone would miss.

What is social cost accounting?

Social cost accounting expands traditional financial accounting to include all costs that industries impose on society—whether or not those costs show up in market prices. Standard accounting tracks revenues, costs, and profits from the company's perspective but ignores costs imposed on others. Social cost accounting includes: direct costs (out-of-pocket expenses borne by affected parties), indirect costs (lost productivity, reduced quality of life), healthcare system costs, environmental remediation, regulatory costs, and opportunity costs. For example, the tobacco industry generates roughly $35 billion in annual profits but causes over $330 billion in social costs—a 10:1 ratio of harm to profit.

What are economic externalities?

Economic externalities are costs (or benefits) that affect parties who didn't choose to incur them. Negative externalities are costs imposed on third parties without their consent—air pollution affecting downwind communities, antibiotic resistance spreading due to agricultural overuse, or climate change from carbon emissions. Positive externalities are benefits received by third parties who didn't pay for them—vaccination protecting the unvaccinated through herd immunity, or research creating knowledge others can use. Externalities represent market failures: when producers don't pay full costs, they produce too much of the harmful good because the price is artificially low.

What is life-cycle analysis?

Life-cycle analysis (LCA), also called cradle-to-grave analysis, examines the environmental impacts of a product throughout its entire existence—from raw material extraction through disposal. A complete LCA covers: raw material extraction (mining, drilling, harvesting), material processing (refining, chemical treatment), manufacturing (assembly, production), transportation (shipping between stages), use phase (consumer use, energy consumption), and end of life (disposal, recycling, decomposition). LCA often reveals harm hidden at stages consumers never see—growing the cotton for a single t-shirt requires roughly 2,700 liters of water, dyeing it involves toxic chemicals, and in a landfill it can take over 200 years to decompose.

What is systems thinking and why is it important for ethics?

Systems thinking is a holistic approach that analyzes interconnections and patterns within complex wholes rather than reducing problems to isolated components. It's essential for ethics because harmful industries don't exist in isolation—they're embedded in feedback loops, power structures, and cultural systems that keep them entrenched. Systems thinking helps us understand why problems persist despite good intentions, identify root causes rather than just symptoms, find leverage points where small changes can produce large effects, and anticipate unintended consequences of interventions. Without systems thinking, we treat symptoms while underlying problems continue.

What are feedback loops and why do they matter?

Feedback loops are circular causal chains where effects eventually influence their original causes. Reinforcing loops (positive feedback) amplify change—success breeds success, or problems compound. Balancing loops (negative feedback) counteract change and push systems toward stability. Feedback loops explain why some problems spiral out of control (reinforcing) while others resist change (balancing). Understanding these dynamics is essential for effective intervention: you might need to interrupt a harmful reinforcing loop or strengthen a beneficial balancing loop. Many interventions fail because they don't account for the feedback dynamics that will amplify or dampen their effects.
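
The two loop types can be sketched in a few lines of code. This is a minimal illustration, not a course-provided model—the starting stock, rates, and step counts are arbitrary placeholders:

```python
def reinforcing(stock: float, growth_rate: float, steps: int) -> list[float]:
    """Reinforcing loop: each step's change is proportional to the stock itself."""
    history = [stock]
    for _ in range(steps):
        stock += growth_rate * stock   # more stock -> more growth -> more stock
        history.append(stock)
    return history

def balancing(stock: float, goal: float, adjust_rate: float, steps: int) -> list[float]:
    """Balancing loop: each step closes part of the gap between stock and goal."""
    history = [stock]
    for _ in range(steps):
        stock += adjust_rate * (goal - stock)  # gap shrinks -> change shrinks
        history.append(stock)
    return history

r = reinforcing(100, 0.10, 10)    # compounds: 100, 110, 121, ...
b = balancing(100, 50, 0.30, 10)  # decays steadily toward the goal of 50
```

Running both shows the signature behaviors: the reinforcing series grows faster each step, while the balancing series approaches its goal and levels off—exactly the dynamics an intervention must either interrupt or strengthen.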

What is a causal loop diagram?

A causal loop diagram (CLD) is a visual representation showing circular cause-and-effect relationships, feedback loops, delays, and polarity within systems. CLDs use arrows to show causal connections, with + signs indicating same-direction relationships (more A leads to more B) and - signs indicating opposite-direction relationships (more A leads to less B). Loops are labeled R for reinforcing and B for balancing. CLDs are fundamental tools for understanding why problems persist and identifying where to break harmful cycles. For example, a tobacco addiction loop might show: Nicotine use → Withdrawal symptoms → Craving → Nicotine use.

What are system archetypes?

System archetypes are recurring patterns of behavior in systems that produce predictable problems. Learning archetype names is like learning the vocabulary of a new language—once you can recognize "Tragedy of the Commons" or "Shifting the Burden," you can diagnose complex situations quickly. Key archetypes include: Tragedy of the Commons (individual rational behavior depletes shared resources), Shifting the Burden (quick fixes erode capability for fundamental solutions), Success to the Successful (initial advantages compound over time), Fixes that Fail (solutions create unintended consequences that worsen the original problem), and Limits to Growth (growth processes encounter constraints). Recognizing archetypes helps you see past surface symptoms to underlying structural dynamics.

What is the Tragedy of the Commons?

The Tragedy of the Commons describes situations where individual rational behavior leads to collective irrational outcomes when people share a common resource. Each user has an incentive to use more (capturing individual benefit) while sharing the cost of depletion with everyone else. Without coordination or regulation, the resource is overexploited and eventually collapses. Examples include overfishing (each fleet maximizes catch; global stocks collapse), groundwater depletion (each farm pumps more; aquifers exhaust), antibiotic overuse (each doctor prescribes liberally; resistance develops), and carbon emissions (each nation emits freely; climate changes for all). Solutions require regulation, privatization, or community-based management.

What is "Shifting the Burden" as a system archetype?

Shifting the Burden describes using quick fixes to address symptoms while the underlying problem-solving capability erodes. The quick fix works—temporarily—but that success masks slow erosion of fundamental solutions, creating dangerous dependency. Examples include: using pesticides (quick fix) instead of ecosystem-based pest management (fundamental solution), leading to loss of natural predator populations; or prescribing opioids (quick fix) instead of physical therapy and lifestyle changes (fundamental solution), leading to lost patient coping skills. The most dangerous aspect is that it feels like success—the symptom goes away—but hidden costs emerge later when capability for real solutions has severely degraded.

What are leverage points?

Leverage points are places within complex systems where small changes can produce large effects. Donella Meadows identified twelve leverage points ranked from least to most effective. Lower leverage points include constants, numbers, and subsidies (easy to change but limited impact). Medium leverage points include information flows and rules (changing how the game is played). High leverage points include system goals and paradigms (changing who plays and why). The highest leverage is transcending paradigms—recognizing that all paradigms are mental constructs, not absolute reality. Finding leverage points is essential for efficient intervention—working smarter rather than using brute force against entrenched systems.

Who is Donella Meadows and why is her work important?

Donella Meadows was an environmental scientist and systems thinker who developed the influential framework of twelve leverage points for system intervention. Her 1999 article "Leverage Points: Places to Intervene in a System" provides the theoretical foundation for strategic thinking about where to intervene in complex systems. Meadows was also a lead author of "The Limits to Growth" (1972), an early systems dynamics study of global sustainability. Her work helps us understand why some interventions transform systems while others barely make a dent—and how to design strategies that target high-leverage points rather than wasting resources on low-impact changes.

What is the difference between high and low leverage points?

Low leverage points are intervention points where changes have limited system-wide effects—like adjusting tax rates by small percentages or tweaking regulations. They're easy to implement but don't address structural dynamics. High leverage points are places where small changes can produce large, transformative effects—like changing system goals, information flows, or the rules of the game. The highest leverage involves changing paradigms (the mental models that shape how people see the system) or even transcending paradigms (recognizing that all frameworks are constructs). Strategic change-makers learn to use the whole "ladder" of leverage points, targeting deeper interventions when possible while using shallower ones to build momentum.

What is behavioral economics and how does it apply to ethics?

Behavioral economics studies psychological, cognitive, and emotional factors that influence economic decisions, revealing systematic deviations from rational choice. It's essential for designing interventions that work with human nature rather than against it. Key concepts include: loss aversion (people are more motivated to avoid losses than gain equivalent amounts), present bias (overvaluing immediate rewards versus future benefits), cognitive load (decision quality deteriorates when overwhelmed), and social norms (behavior is heavily influenced by what others do). Understanding these patterns helps design ethical "nudges" that make good choices easier without restricting freedom—like making sustainable options the default or providing real-time feedback on energy use.

What is nudge theory?

Nudge theory proposes that indirect suggestions and default options can influence behavior without restricting choices. A nudge is any aspect of choice architecture that alters behavior predictably without forbidding options or significantly changing economic incentives. Examples include: automatic enrollment in retirement savings (opt-out rather than opt-in increases participation dramatically), placing healthy foods at eye level while keeping unhealthy options less accessible, or using social comparison on utility bills ("You used more energy than your neighbors"). Nudges offer ethical ways to promote better decisions while preserving autonomy—they don't force choices but make good choices easier.

What is choice architecture?

Choice architecture is the deliberate design of environments in which people make decisions to influence choices without restricting options. Every choice is presented in some context—the order of options, which is the default, how information is framed—and these contextual factors powerfully influence decisions. Choice architects can use this power for good (making healthy, sustainable, ethical choices easier) or for ill (manipulating people toward harmful choices for profit). In data-driven ethics, we study choice architecture both to recognize when it's being used against public interest and to design interventions that help people act on their values.


Technical Details

How do you calculate DALYs?

DALYs are calculated as the sum of Years of Life Lost (YLL) and Years Lived with Disability (YLD). YLL = (Life Expectancy - Age at Death), summed across all deaths. YLD = Number of Cases × Duration × Disability Weight in the incidence-based form (prevalence-based studies instead use Prevalence × Disability Weight, since prevalence already reflects duration), where the disability weight is a number between 0 (perfect health) and 1 (equivalent to death). For example, if someone develops a chronic condition at age 40 with a disability weight of 0.4 and dies at age 60 when life expectancy is 80: YLL = 80 - 60 = 20 years; YLD = 20 years × 0.4 = 8 years; Total DALYs = 28. Higher DALYs indicate greater harm.
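
The worked example above translates directly into code. A minimal sketch, using the single-case form of each term (function names are illustrative, not from any standard library):

```python
def yll(life_expectancy: float, age_at_death: float) -> float:
    """Years of Life Lost for one premature death."""
    return max(life_expectancy - age_at_death, 0.0)

def yld(duration_years: float, disability_weight: float) -> float:
    """Years Lived with Disability for one case (incidence-based form)."""
    return duration_years * disability_weight

# The example: condition onset at 40 (weight 0.4), death at 60, life expectancy 80.
total_dalys = yll(80, 60) + yld(60 - 40, 0.4)
print(total_dalys)  # 28.0
```

Summing these per-case values across a whole affected population yields the aggregate burden figures used to compare industries.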

What are disability weights and how are they determined?

Disability weights are numerical values between 0 and 1 representing the severity of health states relative to full health. They're determined through surveys asking people to compare conditions—essentially asking "How much worse is living with condition X than perfect health?" Examples from the Global Burden of Disease study: mild hearing loss = 0.010, moderate low back pain = 0.054, moderate depression = 0.396, complete paralysis below neck = 0.589, severe dementia = 0.778. These weights are controversial—who decides how bad blindness is?—but they provide a structured way to compare very different health states in DALY calculations.

What is the difference between DALYs and QALYs?

DALYs (Disability-Adjusted Life Years) and QALYs (Quality-Adjusted Life Years) are mirror images. DALYs measure disease burden—higher numbers mean more harm. QALYs measure health—higher numbers mean more quality life. The formulas are: DALY = Years of Life Lost + Years Lived with Disability; QALY = Years of Life × Quality Weight (0 to 1). DALYs are commonly used in global health to compare harm across diseases and industries. QALYs are commonly used in health economics to evaluate whether treatments are cost-effective. Both embed value judgments in their weights and can be manipulated to serve particular interests.

How do you normalize harm metrics for fair comparison?

Normalization adjusts raw numbers to common scales enabling fair comparison. Key normalizations include: Harm per revenue (Total DALYs ÷ Revenue in Billions)—answers "How much damage per dollar generated?" enabling comparison of different-sized industries. Per capita impact (Total DALYs ÷ Affected Population)—answers "How harmful per person affected?" helping identify concentrated harm. Harm per employee or per unit produced enables labor safety and product comparisons. Without normalization, a large industry might appear more harmful simply because it's larger, not because it's more harmful per unit of activity.
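
A short sketch of the first two normalizations. The industry names and figures are made-up placeholders chosen to show why normalization matters:

```python
industries = {
    # name: (total_dalys, revenue_billions, affected_population)
    "industry_a": (2_000_000, 50, 10_000_000),
    "industry_b": (3_000_000, 500, 100_000_000),
}

results = {}
for name, (dalys, revenue_bn, population) in industries.items():
    results[name] = {
        "dalys_per_billion": dalys / revenue_bn,  # harm per $1B of revenue
        "dalys_per_capita": dalys / population,   # harm per affected person
    }
    print(f"{name}: {results[name]['dalys_per_billion']:,.0f} DALYs/$B, "
          f"{results[name]['dalys_per_capita']:.3f} DALYs/person")
```

Note the reversal: industry_b causes more total harm (3M vs. 2M DALYs), but industry_a is far worse per dollar of revenue (40,000 vs. 6,000 DALYs/$B)—the kind of insight raw totals hide.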

What is carbon footprint and how is it measured?

Carbon footprint measures total greenhouse gas emissions associated with a product, activity, or entity, expressed in CO₂ equivalents (CO₂e). It's calculated by inventorying all emission sources across the life cycle—raw materials, manufacturing, transportation, use, disposal—and converting different greenhouse gases to CO₂ equivalents based on their global warming potential. Limitations include: varying calculation methods make comparisons difficult, scope definitions significantly affect results, and carbon footprint ignores other impacts (water, land, biodiversity). Despite limitations, it's become the dominant metric for discussing climate impact.
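
The CO₂e conversion is a weighted sum. A sketch, using approximate 100-year global warming potentials (IPCC assessments report methane around 28-30 and nitrous oxide around 265-273; the values below are placeholders from that range, not authoritative factors):

```python
# Approximate 100-year global warming potentials (tonnes CO2e per tonne of gas).
GWP_100 = {"co2": 1, "ch4": 28, "n2o": 265}

def co2e(inventory_tonnes: dict[str, float]) -> float:
    """Convert a per-gas emissions inventory to total tonnes of CO2-equivalent."""
    return sum(GWP_100[gas] * tonnes for gas, tonnes in inventory_tonnes.items())

footprint = co2e({"co2": 1000.0, "ch4": 10.0, "n2o": 1.0})
print(footprint)  # 1000 + 280 + 265 = 1545.0 tonnes CO2e
```

This also makes the scope problem concrete: the result depends entirely on which emission sources you include in the inventory and which GWP vintage you pick.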

What is water footprint?

Water footprint measures total freshwater consumption, including: blue water (surface and groundwater consumed), green water (rainwater stored in soil and consumed by plants), and gray water (water needed to dilute pollutants to acceptable levels). Water footprints reveal hidden consumption—a kilogram of beef requires approximately 15,000 liters of water; a kilogram of vegetables only 300 liters. This metric is increasingly important as freshwater scarcity intensifies globally and helps compare the environmental impact of different products and diets.

What sources of data are most credible for ethical analysis?

Data credibility varies by source type. Most credible: meta-analyses and systematic reviews (multiple studies synthesized using rigorous methods), peer-reviewed academic research (expert-evaluated before publication). Highly credible: government agencies and international bodies (WHO, UN, EPA—comprehensive but may lag or face political interference), university research centers. Moderate credibility: major news outlets, NGO reports, think tanks (valuable perspectives but may have advocacy agendas). Requires most scrutiny: industry self-reporting (obvious incentives for favorable presentation), social media, blogs. Always apply source triangulation—using multiple independent sources to verify claims increases confidence.

How do you perform source triangulation?

Source triangulation means using multiple independent sources to verify claims. If three different sources using different methods reach similar conclusions, you can be more confident than relying on a single source. Steps: (1) Identify the claim to verify; (2) Find sources from different categories (academic, government, NGO, journalism); (3) Check that sources used different methodologies; (4) Look for areas of agreement and disagreement; (5) Weight conclusions based on convergence. For tobacco harm, you might consult peer-reviewed epidemiological studies, CDC mortality statistics, WHO global estimates, and internal company documents revealed through litigation. When all point to similar conclusions, the case becomes compelling.
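
Step (5) above—weighting conclusions by convergence—can be sketched as a simple spread check. The source labels, figures, and the 25% agreement threshold are all illustrative assumptions, not a standard:

```python
# Independent estimates of the same quantity (e.g., millions of deaths/year).
estimates = {
    "peer_reviewed_study": 7.7,
    "government_agency": 8.0,
    "international_body": 8.1,
}

low, high = min(estimates.values()), max(estimates.values())
spread = (high - low) / low  # relative disagreement across sources

print(f"range: {low}-{high}, spread: {spread:.1%}")
if spread < 0.25:  # illustrative convergence threshold
    print("sources converge; report the range with increased confidence")
else:
    print("sources diverge; examine methodologies before concluding")
```

Reporting the full range (here 7.7-8.1) rather than a single number preserves the honest uncertainty while still supporting a strong conclusion.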


Common Challenges

How do I avoid confirmation bias in ethical analysis?

Confirmation bias—the tendency to seek, interpret, and remember information that confirms existing beliefs—is a major threat to objective ethical analysis. Strategies to combat it: (1) Actively seek disconfirming evidence—look for studies that challenge your hypothesis; (2) Use structured methods that reduce bias, like systematic literature reviews; (3) Invite diverse perspectives to challenge your thinking; (4) Acknowledge your biases openly in your analysis; (5) Consider what would change your mind before you begin; (6) Practice intellectual humility. Remember that industries with something to hide often exploit confirmation bias by funding research designed to support their preferred conclusions.

How do I identify manipulation in harm statistics?

Industries routinely manipulate harm measurement. Watch for: Narrow definitions (counting only direct deaths, excluding indirect harms); Cherry-picked baselines (comparing against unusual years); Misleading comparisons (comparing against worst alternatives rather than better ones); Ignoring uncertainty (presenting point estimates without ranges); High discounting (minimizing future harms); Burden shifting (externalizing costs to powerless communities). Always ask: Who funded this study? How is harm defined? What's included/excluded? What time period is considered? Who bears costs versus who receives benefits? Are uncertainty ranges provided?

What do I do when data sources conflict?

Conflicting data is common and not necessarily a problem—it may reflect legitimate uncertainty or different methodologies. Steps: (1) Examine methodologies—different approaches may be measuring slightly different things; (2) Check funding sources for conflicts of interest; (3) Look at sample sizes and statistical power; (4) Consider time periods and geographic scope; (5) Assess whether differences are substantively meaningful or within uncertainty ranges; (6) Weight sources by credibility and rigor; (7) Report the range of estimates rather than picking one. Transparency about uncertainty is more valuable than false precision.

How do I deal with missing data?

Missing data is unavoidable in harm analysis—some outcomes aren't tracked, some populations aren't studied, some industries don't report. Strategies: (1) Document what's missing and why—missing data often reflects power imbalances (marginalized communities are understudied); (2) Use imputation methods cautiously, with transparency about assumptions; (3) Report ranges that account for uncertainty from missing data; (4) Look for proxy measures that might capture similar outcomes; (5) Consider qualitative evidence when quantitative data is unavailable; (6) Be explicit that your analysis may underestimate harm in areas with poor data coverage.
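
Strategies (2) and (3) can be combined: impute cautiously, but report a range rather than a single filled-in number. A sketch with illustrative regional data:

```python
# Harm estimates by region; region_c was never studied (None = missing).
harm_by_region = {"region_a": 120.0, "region_b": 95.0, "region_c": None, "region_d": 110.0}

observed = [v for v in harm_by_region.values() if v is not None]
n_missing = sum(1 for v in harm_by_region.values() if v is None)

# Central estimate: impute missing regions at the observed mean.
# Bounds: assume missing values lie between the observed min and max.
mean_est = sum(observed) / len(observed)
total_mid = sum(observed) + n_missing * mean_est
total_low = sum(observed) + n_missing * min(observed)
total_high = sum(observed) + n_missing * max(observed)

print(f"estimated total: {total_mid:.1f} "
      f"(range {total_low:.1f}-{total_high:.1f}, "
      f"{n_missing} of {len(harm_by_region)} regions imputed)")
```

Note that even the upper bound may understate harm if, as is common, the unstudied regions are the most affected—which is why documenting *why* data is missing matters as much as the imputation itself.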

Why do some interventions fail despite good intentions?

Interventions often fail because they don't account for system dynamics. Common failure modes include: Treating symptoms rather than root causes (the problem returns, often worse); Triggering balancing loops that counteract the intervention; Creating unintended consequences that undermine the goal (the "Fixes that Fail" archetype); Ignoring time delays that cause oscillation or overshoot; Underestimating resistance from vested interests; Failing to achieve sufficient scale or duration; Poor timing relative to system state. Systems thinking helps anticipate these failure modes and design more robust interventions that work with system dynamics rather than against them.

How do I balance quantitative rigor with empathy for affected communities?

Numbers can create false precision and dehumanize suffering—"8 million deaths" is easier to ignore than one grieving family. Strategies for balance: (1) Include affected voices—let communities define what harms matter to them; (2) Disaggregate data to show who specifically bears the burden, not just averages; (3) Contextualize statistics with human stories; (4) Acknowledge limitations and what numbers miss; (5) Consider power—ask whose interests are served by particular measurement choices; (6) Remember that behind every DALY is a person who suffered. The goal is rigorous measurement that informs action, not statistics that distance us from human reality.


Best Practices

What makes a good harm scorecard?

A good harm scorecard captures multi-dimensional harm without collapsing everything into a misleading single number. Key elements: (1) Multiple dimensions—environmental impact, human health impact, social justice impact, economic externalities; (2) Standardized scales (0-100) enabling comparison; (3) Transparent methodology explaining how scores are assigned; (4) Clear data sources for each dimension; (5) Acknowledgment of uncertainty and limitations; (6) Appropriate normalization (per revenue, per capita) for fair comparison; (7) Weighting transparency—if dimensions are combined, weights should be explicit and justifiable. The best scorecards enable comparison while respecting that different dimensions may matter differently to different stakeholders.
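
Element (7)—explicit, justifiable weights—can be made concrete in a few lines. The dimensions, scores, and weights below are illustrative placeholders, not course-assigned values:

```python
DIMENSIONS = ["environmental", "health", "social_justice", "externalities"]

# Disclosed weights (must sum to 1) rather than a hidden aggregation.
WEIGHTS = {"environmental": 0.30, "health": 0.40,
           "social_justice": 0.15, "externalities": 0.15}

def composite(scores: dict[str, float]) -> float:
    """Weighted 0-100 composite from per-dimension scores."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(WEIGHTS[d] * scores[d] for d in DIMENSIONS)

# Hypothetical per-dimension scores for one industry.
tobacco = {"environmental": 60, "health": 95, "social_justice": 70, "externalities": 85}
print(f"composite: {composite(tobacco):.1f}/100")
```

Publishing the per-dimension scores alongside the composite lets stakeholders who weight dimensions differently re-compute their own ranking, which is exactly the transparency the scorecard elements above demand.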

How do I identify root causes effectively?

Root cause analysis requires going deeper than surface symptoms. Techniques: (1) The Five Whys—repeatedly ask "why?" to trace symptoms back to fundamental causes (Why child labor? Low wages. Why low wages? Low cocoa prices. Why low prices? Market power concentration); (2) The Iceberg Model—analyze problems at four levels: events (visible symptoms), patterns (trends over time), structures (systems creating patterns), and mental models (beliefs creating structures); (3) Causal loop diagrams—map the feedback dynamics perpetuating the problem; (4) Stakeholder analysis—identify who benefits from the status quo. Root cause analysis prevents wasting effort on superficial fixes that don't address underlying dynamics.

What makes an effective leverage point analysis?

Effective leverage point analysis: (1) Maps the system thoroughly first—you can't find leverage without understanding dynamics; (2) Identifies multiple potential intervention points across Meadows' hierarchy; (3) Assesses each point for accessibility (can we actually reach it?) and impact (how much change would it produce?); (4) Considers timing—some points are only accessible during windows of opportunity; (5) Evaluates resistance—high-leverage points often face strong opposition from vested interests; (6) Plans for unintended consequences; (7) Sequences interventions strategically—sometimes lower-leverage points build momentum for higher-leverage change.

How do I design advocacy strategies that actually work?

Effective advocacy strategies: (1) Speak the language of your audience—economists respond to cost-benefit analysis, policymakers to feasibility, activists to urgency; (2) Use data to counter industry talking points and reveal true social costs; (3) Build coalitions across constituencies—strange bedfellows can be powerful; (4) Time interventions to windows of opportunity (crises, scandals, elections); (5) Sequence from awareness to policy to structural change; (6) Apply behavioral economics—make ethical choices easy, visible, and socially rewarded; (7) Sustain pressure over time—transformation requires patience; (8) Measure and communicate progress to maintain momentum.

What should a capstone project include?

A strong capstone project includes: (1) Clear problem definition—specific industry and harm category; (2) Comprehensive harm quantification using DALYs, social costs, and life-cycle analysis; (3) Systems analysis—causal loop diagrams, root cause identification, relevant archetypes; (4) Leverage point identification—where can small changes produce large effects?; (5) Intervention design—specific proposals with behavioral and policy mechanisms; (6) Implementation strategy—who does what, when, with what resources; (7) Success metrics—how will you know if it worked?; (8) Stakeholder analysis—who supports, who opposes, how to build coalitions; (9) Honest acknowledgment of limitations and uncertainties.


Advanced Topics

What is regulatory capture and why does it matter?

Regulatory capture occurs when regulatory agencies serve industry interests over public interests—the "fox guarding the henhouse." It explains why regulation often fails to protect the public from harmful industries. Capture happens through: revolving door employment (regulators taking industry jobs and vice versa), information asymmetry (regulators depending on industry for data), lobbying and political pressure, and cultural identification with the regulated industry. Recognizing capture helps explain why strong laws produce weak enforcement and why seemingly good regulations can be counterproductive. Addressing capture requires structural reforms like independent funding, strict cooling-off periods, and public participation in regulatory processes.

What is greenwashing and how do I identify it?

Greenwashing refers to marketing practices that create misleading impressions of environmental responsibility. Identification strategies: (1) Look for specificity—vague claims ("eco-friendly") are red flags; (2) Check for third-party verification—credible certifications from independent bodies; (3) Compare marketing to actual practices—is the company's core business changing or just its advertising?; (4) Watch for hidden trade-offs—highlighting one green attribute while ignoring larger harms; (5) Beware of irrelevant claims—advertising things that are legally required anyway; (6) Research the company's lobbying—are they fighting the regulations they claim to support? Greenwashing is a form of "Shifting the Burden"—symbolic action substituting for fundamental change.

What is ESG and how reliable are ESG metrics?

ESG (Environmental, Social, and Governance) metrics evaluate corporate sustainability and ethical performance across three dimensions: Environmental (emissions, resource use, pollution), Social (labor practices, community impact, human rights), and Governance (board diversity, executive compensation, shareholder rights). Reliability concerns: (1) No standard methodology—different rating agencies produce different scores for the same company; (2) Self-reported data—companies control what information they disclose; (3) Aggregation problems—combining diverse issues into single scores obscures important variations; (4) Gaming potential—companies can optimize metrics without substantive change. ESG provides useful signals but requires critical evaluation rather than blind reliance.

What is the difference between shareholder primacy and stakeholder capitalism?

Shareholder primacy is the corporate governance principle that prioritizing shareholder returns is management's primary duty—other considerations are secondary. This creates structural pressure toward externalities: if costs can be shifted to society while profits flow to shareholders, shareholder primacy incentivizes that shift. Stakeholder capitalism proposes optimizing value for all stakeholders—workers, communities, environment, suppliers—not just shareholders. This represents a fundamental paradigm shift (high leverage!) that could transform harmful industries. The transition is underway through mechanisms like B Corp certification, benefit corporation legal structures, and evolving fiduciary duty interpretations, but shareholder primacy remains dominant in practice.

What is intergenerational harm and how should we address it?

Intergenerational harm affects future generations who cannot participate in current decisions. Climate change is the paradigm case: emissions today will cause harm for centuries, primarily affecting people not yet born. The ethical challenge is acute because future generations can't vote, protest, or sue. Addressing it requires: (1) Low or zero discount rates for long-term harms—standard discounting makes future suffering nearly disappear from calculations; (2) Precautionary principles—when facing irreversible harm, uncertainty favors caution; (3) Institutional structures that represent future interests; (4) Reframing—many cultures historically made decisions considering seven generations forward. Intergenerational ethics is one of the most important and challenging areas in data-driven ethics.

How do systems change actually happen?

Systems change happens through multiple, reinforcing mechanisms: (1) Information shifts—new evidence makes harm undeniable (tobacco research, climate science); (2) Social norm changes—what was acceptable becomes stigmatized (smoking in public, drunk driving); (3) Policy ratchets—each regulation creates constituency for further regulation; (4) Technology alternatives—new options make harmful systems obsolete (renewable energy, plant-based proteins); (5) Economic pressure—costs of harm exceed profits (litigation, divestment); (6) Crisis and scandal—acute events create windows for structural change; (7) Generational turnover—new cohorts with different values gain power. Successful change-makers work multiple pathways simultaneously, understanding that transformation rarely comes from a single intervention but from reinforcing effects across the system.