
Fairness in Ethics

Chapter Overview

In this chapter we explore the concept of fairness as a fundamental ethical principle that has shaped both biological and social evolution. We begin with the remarkable observation that fairness detection appears to be a genetically advantageous characteristic across social animals—not just humans. From there, we examine how conceptions of fairness vary across cultures, change over time, and inform contemporary discussions of equal opportunity and human rights. The chapter culminates with a case study comparing how different AI models identify historical champions of fairness, revealing both consensus and divergence in how we evaluate moral leadership.

Learning Objectives

By the end of this chapter, students will be able to:

  • Explain the evolutionary basis for fairness detection in social animals
  • Analyze how cultural context shapes fairness perceptions across societies
  • Compare different philosophical frameworks for understanding fairness
  • Evaluate the relationship between fairness and human rights frameworks
  • Assess how AI systems reflect and reproduce human biases about fairness

Concepts Covered

  1. Fairness in social animals
  2. Fairness studies in animals
  3. Fairness studies in primates
  4. Fairness as a human value
  5. Fairness as a cultural norm
  6. Fairness and equal opportunity
  7. Fairness changes over time
  8. Fairness in different parts of the world
  9. Gender Fairness
  10. Race Fairness and Skin Color
  11. Fairness in Religions
  12. Leadership and Fairness
  13. Historical Perspectives
  14. Case Study - Equal Rights Amendment in the US
  15. Fairness as a human right
  16. Fairness in AI
  17. Case Study: Ranking Fairness Leaders and Architects of Unfairness

Part I: The Evolutionary Roots of Fairness

Fairness as a Biological Adaptation

The human sense of fairness is not merely a cultural invention—it appears to be deeply embedded in our evolutionary heritage. When we feel wronged by an unfair division of resources or cheated in a social exchange, we experience genuine emotional distress. This visceral response suggests that fairness detection provided significant survival advantages for our ancestors living in cooperative social groups.

Evolutionary biologists propose that fairness sensitivity evolved through several complementary mechanisms:

  • Kin selection: Helping genetic relatives increased the propagation of shared genes
  • Reciprocal altruism: Fair exchanges with non-relatives built coalitions that enhanced survival
  • Reputation monitoring: Tracking who cooperates fairly and who cheats enabled optimal partner selection
  • Punishment of cheaters: Groups that sanctioned free-riders maintained more stable cooperation

The emergence of these mechanisms required the cognitive ability to track social exchanges, remember past interactions, and compare outcomes—capabilities that appear to have evolved in multiple social species independently.

Fairness Studies in Non-Human Animals

Research over the past three decades has documented fairness-related behaviors across numerous species, challenging the assumption that moral sentiments are uniquely human.

| Species | Fairness Behavior Observed | Key Study |
| --- | --- | --- |
| Capuchin monkeys | Rejection of unequal pay | Brosnan & de Waal, 2003 |
| Chimpanzees | Inequity aversion in food sharing | Brosnan et al., 2010 |
| Dogs | Refusal to perform for unequal rewards | Range et al., 2009 |
| Corvids (crows, ravens) | Third-party punishment of cheaters | Massen et al., 2015 |
| Rats | Reciprocal food sharing | Rutte & Taborsky, 2007 |

These findings suggest that the cognitive architecture supporting fairness judgments predates the human lineage and may be a convergent adaptation in species that rely on cooperation.

Diagram: Evolution of Fairness Detection


Evolution of Fairness Detection Timeline

Type: timeline

Purpose: Illustrate the evolutionary emergence of fairness-related cognitive capacities across species

Bloom Level: Understand (L2) Bloom Verb: explain, compare

Learning Objective: Students will be able to explain how fairness detection evolved as an adaptation in social species

Time period: 300 million years ago to present

Orientation: Horizontal with branching phylogenetic structure

Events:

  • 300 MYA: Social insects emerge with kin-based cooperation
  • 65 MYA: Early mammals develop reciprocal grooming behaviors
  • 35 MYA: Primate ancestors show coalition formation
  • 25 MYA: Old World monkeys demonstrate inequity aversion
  • 7 MYA: Hominid ancestors develop complex social tracking
  • 2 MYA: Early humans show evidence of punishment behaviors
  • 200,000 years ago: Homo sapiens develops explicit fairness norms
  • Present: Cross-cultural fairness universals documented

Visual style: Phylogenetic tree with timeline overlay

Color coding:

  • Blue: Invertebrate/early vertebrate cooperation
  • Green: Mammalian reciprocity
  • Orange: Primate social cognition
  • Red: Human moral systems

Interactive features: - Hover over each node to see species examples - Click to expand research citations

Implementation: vis-timeline with custom styling

The Famous Capuchin Monkey Experiments

Perhaps the most compelling evidence for fairness sensitivity in non-human animals comes from Sarah Brosnan and Frans de Waal's research with capuchin monkeys at Emory University. In these experiments, two monkeys were placed in adjacent cages and trained to exchange tokens for food rewards.

When both monkeys received cucumber slices for their tokens, they happily completed the exchange. However, when researchers gave one monkey grapes (a preferred treat) while the other continued receiving cucumber, the cucumber-receiving monkey exhibited dramatic protest behaviors:

  • Refusing to eat the cucumber
  • Throwing the cucumber back at researchers
  • Shaking the cage in apparent frustration
  • Refusing to continue the exchange task

The Inequity Aversion Response

The monkey's reaction wasn't simply about wanting grapes—it was about the perceived unfairness of the unequal treatment. Monkeys who couldn't see their partner's reward showed no distress at receiving cucumber. The emotional response required awareness of the inequitable distribution.

This "inequity aversion" demonstrates that fairness detection operates at an emotional, not just cognitive, level. The monkey experiences genuine distress at unfair treatment, much as humans do when we perceive injustice.

Implications for Human Ethics

The discovery of fairness sensitivity in other species has profound implications for how we understand human morality:

  1. Fairness is not arbitrary: The consistency of fairness intuitions across species and cultures suggests these judgments track something real about social cooperation requirements

  2. Emotions are essential: Fairness operates through emotional systems, not pure reason—we feel unfairness before we can articulate why something is wrong

  3. Context matters: The same division might feel fair or unfair depending on social relationships, history, and expectations

  4. Fairness serves cooperation: These mechanisms evolved because they enable stable cooperative relationships essential for survival


Part II: Fairness Across Human Cultures

Fairness as a Human Value

While fairness sensitivity appears universal, the specific norms and rules that govern fair behavior vary significantly across cultures. Anthropological research reveals both striking commonalities and important differences in how human societies conceptualize and enforce fairness.

Cross-cultural research using economic games like the Ultimatum Game has documented that every society studied shows some form of fairness concern—no culture treats purely self-interested behavior as acceptable. However, what counts as a "fair" offer varies considerably:

| Cultural Context | Typical "Fair" Offer | Rejection Threshold |
| --- | --- | --- |
| Western industrial | 40-50% | Below 20% |
| Small-scale horticultural | 25-40% | Below 10% |
| Hunter-gatherer | 30-50% | Varies widely |
| Market-integrated | 40-50% | Below 25% |

These differences correlate with economic systems, social organization, and religious beliefs—suggesting that while the capacity for fairness is innate, its expression is shaped by cultural learning.
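
The mechanics of the Ultimatum Game are simple enough to sketch in code. The following Python simulation is illustrative only: the responder thresholds and candidate offers are hypothetical stand-ins loosely echoing the ranges in the table above, not the actual study data. It shows why, when responders reject low offers, a roughly "fair" offer can maximize the proposer's expected payoff.

```python
import random

random.seed(1)

def ultimatum_round(offer_fraction, responder_threshold, pot=100):
    """One round: the responder accepts only if the offered share meets their minimum."""
    if offer_fraction >= responder_threshold:
        return pot * (1 - offer_fraction), pot * offer_fraction  # (proposer, responder)
    return 0.0, 0.0  # rejection leaves both players with nothing

def expected_proposer_payoff(offer_fraction, thresholds, trials=10_000):
    """Average proposer payoff against responders drawn from a population of thresholds."""
    total = 0.0
    for _ in range(trials):
        threshold = random.choice(thresholds)
        proposer_payoff, _ = ultimatum_round(offer_fraction, threshold)
        total += proposer_payoff
    return total / trials

# Hypothetical population of responder rejection thresholds, loosely echoing the
# "Western industrial" row of the table above (many reject offers below ~20-30%).
thresholds = [0.10, 0.20, 0.25, 0.30, 0.30, 0.40]

for offer in (0.10, 0.25, 0.40, 0.50):
    print(f"Offering {offer:.0%}: expected proposer payoff "
          f"{expected_proposer_payoff(offer, thresholds):.1f}")
```

With these invented thresholds, the stingy 10% offer earns the proposer the least on average and the 40% offer earns the most, which is the basic logic behind the rejection behavior documented across cultures.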

Cultural Dimensions of Fairness

Researchers have identified several dimensions along which fairness conceptions vary across cultures:

Individualism vs. Collectivism

In individualistic cultures (common in Western Europe and North America), fairness often emphasizes equal treatment of individuals regardless of group membership. In collectivist cultures (common in East Asia, Africa, and Latin America), fairness may prioritize group harmony, social hierarchy, or family obligations.

Equity vs. Equality

  • Equity principle: Resources should be distributed proportional to contribution or merit
  • Equality principle: Resources should be distributed equally regardless of contribution
  • Need principle: Resources should go to those who need them most

Different cultures weight these principles differently, and even within cultures, the appropriate principle may depend on the domain (workplace vs. family vs. civic life).

Diagram: Cultural Fairness Frameworks

Cultural Fairness Frameworks Comparison

Type: infographic

Purpose: Compare how different cultural traditions conceptualize fairness along key dimensions

Bloom Level: Analyze (L4) Bloom Verb: compare, contrast, differentiate

Learning Objective: Students will be able to compare different cultural frameworks for understanding fairness

Layout: Interactive matrix with cultural traditions as rows and fairness dimensions as columns

Cultural Traditions:

  • Western Liberal (Rawls, individual rights)
  • Confucian (hierarchical harmony, role-based duties)
  • Ubuntu (African communal interdependence)
  • Indigenous American (reciprocity with nature, seven generations)
  • Islamic (divine justice, zakat)
  • Utilitarian (aggregate welfare)

Fairness Dimensions:

  • Individual vs. Collective focus
  • Process vs. Outcome emphasis
  • Equality vs. Equity vs. Need
  • Temporal scope (present vs. future generations)
  • Scope of moral community (humans only vs. broader)

Interactive elements:

  • Click a cell to see a detailed explanation and examples
  • Hover to see key philosophers/thinkers
  • Toggle to highlight similarities vs. differences

Color coding: Gradient showing emphasis level for each dimension

Implementation: HTML/CSS/JavaScript interactive matrix

Fairness Changes Over Time

One of the most important insights from historical analysis is that fairness conceptions are not static—they evolve, often dramatically, over time. Practices once considered perfectly fair are now recognized as profound injustices.

Consider these historical shifts in Western societies:

  • Slavery: Once legally sanctioned and morally defended, now universally condemned
  • Women's suffrage: Denying women the vote was once "natural," now unthinkable
  • Child labor: Common practice in 19th century, now prohibited
  • LGBTQ+ rights: Criminalization giving way to marriage equality in many nations
  • Animal welfare: Gradual expansion of moral consideration to non-human animals

The Humility Lesson

If our ancestors were so wrong about fundamental questions of fairness, we should be humble about our own certainties. Future generations may judge some of our current practices as obviously unfair.

This pattern suggests that moral progress is possible but not automatic—it requires advocacy, evidence, and the expansion of moral imagination to include previously excluded groups.

Global Variations in Fairness Today

Contemporary societies continue to differ significantly in their fairness institutions and outcomes:

Economic Inequality (Gini Coefficient)

  • Most equal: Nordic countries (0.25-0.28)
  • Moderately equal: Western Europe, Canada (0.30-0.35)
  • Moderately unequal: United States, China (0.38-0.45)
  • Most unequal: South Africa, Brazil (0.50-0.65)
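
The Gini coefficient itself is straightforward to compute from a list of incomes. Below is a minimal Python sketch using the standard closed form over sorted values; the two income samples are invented purely to contrast low and high inequality and do not correspond to any real country.

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality, 1 = one person holds everything.

    Uses the standard closed form over sorted values,
        G = (2 * sum_i i * x_i) / (n * sum_i x_i) - (n + 1) / n,
    which is equivalent to half the relative mean absolute difference.
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))  # 1-indexed ranks
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical income distributions (illustrative only, arbitrary units).
low_inequality_sample = [30, 35, 40, 45, 50, 55, 60, 65, 70, 80]
high_inequality_sample = [5, 8, 10, 12, 15, 20, 30, 60, 150, 400]

print(f"Low-inequality sample:  {gini(low_inequality_sample):.2f}")
print(f"High-inequality sample: {gini(high_inequality_sample):.2f}")
```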

Gender Equality (Global Gender Gap Index)

  • Top performers: Iceland, Finland, Norway
  • Mid-range: United States, France, China
  • Largest gaps: Afghanistan, Pakistan, Iraq

These variations reflect different cultural values, historical trajectories, political systems, and economic structures. They also demonstrate that fairness outcomes are not predetermined—policy choices matter.


Part III: Fairness in Specific Domains

Gender Fairness

Gender fairness remains one of the most contested domains of contemporary ethics. Across virtually every society, gender has served as a basis for differential treatment—but opinions differ sharply on which differences are fair and which constitute discrimination.

Key dimensions of gender fairness include:

  • Economic opportunity: Equal pay, promotion, occupational access
  • Political representation: Voting rights, elected office, policy influence
  • Reproductive rights: Bodily autonomy, healthcare access
  • Domestic labor: Division of unpaid caregiving and household work
  • Freedom from violence: Protection from gender-based violence and harassment

Progress has been uneven across these dimensions. Many countries have achieved formal legal equality while significant gaps persist in outcomes. The United States, for example, guarantees equal voting rights but has never ratified the Equal Rights Amendment.

Race, Ethnicity, and Skin Color

Racial and ethnic fairness represents another domain where conceptions of justice have shifted dramatically over time—and where significant disagreements persist.

Historical practices now recognized as profoundly unfair:

  • Slavery and human trafficking based on race
  • Colonialism and exploitation of indigenous peoples
  • Apartheid and legal segregation
  • Immigration restrictions based on national origin

Contemporary debates center on:

  • Affirmative action: Does fairness require identical treatment or compensatory measures?
  • Systemic racism: Can institutions be unfair even without individual discriminatory intent?
  • Reparations: Do historical injustices create present-day fairness obligations?
  • Cultural appropriation vs. appreciation: Where are the boundaries?

These questions illustrate how fairness involves not just individual actions but also institutional structures and historical legacies.

Fairness in World Religions

Major religious traditions have developed sophisticated frameworks for understanding fairness, often grounding moral obligations in divine command or cosmic order.

| Religion | Core Fairness Concept | Key Practice |
| --- | --- | --- |
| Judaism | Tzedakah (righteous giving) | Tithing, social justice advocacy |
| Christianity | Imago Dei (image of God) | Charity, preferential option for the poor |
| Islam | Zakat (mandatory almsgiving) | 2.5% wealth redistribution |
| Buddhism | Karma (moral causation) | Dana (generosity), non-harm |
| Hinduism | Dharma (cosmic order/duty) | Caste-appropriate conduct, charity |
| Confucianism | Ren (benevolence) | Reciprocity, role-based duties |

While these traditions share concern for the vulnerable and emphasis on reciprocity, they differ in their:

  • Scope of moral community (believers only vs. universal)
  • Basis for moral obligation (divine command vs. natural law)
  • Emphasis on equality vs. hierarchy
  • This-worldly vs. other-worldly orientation

Part IV: Fairness and Human Rights

The Universal Declaration of Human Rights

The 1948 Universal Declaration of Human Rights (UDHR) represents humanity's most ambitious attempt to codify universal fairness principles. Drafted in the aftermath of World War II, the UDHR articulates rights that should belong to every person regardless of nationality, culture, or circumstance.

Key fairness-related articles include:

  • Article 1: All human beings are born free and equal in dignity and rights
  • Article 2: Everyone entitled to rights without discrimination
  • Article 7: All are equal before the law
  • Article 23: Right to work, equal pay for equal work
  • Article 25: Right to adequate standard of living

The UDHR has influenced countless national constitutions and international treaties, establishing a global vocabulary for fairness claims.

Equal Opportunity Framework

Contemporary discussions of fairness often center on the concept of "equal opportunity"—the idea that individuals should have similar chances to succeed regardless of circumstances of birth.

Different interpretations of equal opportunity include:

  1. Formal equality: No legal barriers based on protected characteristics
  2. Fair equality of opportunity: Active removal of structural barriers
  3. Luck egalitarianism: Compensation for undeserved disadvantages
  4. Capabilities approach: Ensuring everyone has the means to pursue valued functionings

The Meritocracy Debate

Is a "meritocracy" fair if people's abilities themselves result from unjust advantages? If wealthy parents can buy better education, tutoring, and networks for their children, can we say outcomes reflect "merit"?

Case Study: The Equal Rights Amendment in the United States

The Equal Rights Amendment (ERA) provides a fascinating case study in how fairness principles become (or fail to become) law.

Timeline:

  • 1923: ERA first introduced in Congress
  • 1972: Passed by Congress, sent to states for ratification
  • 1982: Extended ratification deadline expired with 35 of the required 38 states having ratified
  • 2020: Virginia becomes 38th state to ratify
  • Present: Legal status remains contested

The Proposed Text: "Equality of rights under the law shall not be denied or abridged by the United States or by any State on account of sex."

Arguments For:

  • Guarantees explicit constitutional protection against sex discrimination
  • Provides clear legal standard for courts
  • Symbolic importance of constitutional commitment to gender equality

Arguments Against:

  • Existing laws already prohibit sex discrimination
  • Could eliminate sex-specific protections (women's shelters, sports)
  • May require changes to military service, bathrooms, etc.

The ERA debate illustrates how abstract fairness principles must be translated into specific legal rules—and how reasonable people can disagree about the implications.


Part V: Fairness and Artificial Intelligence

How AI Systems Reflect Human Biases

As artificial intelligence systems increasingly make decisions affecting human lives, questions of algorithmic fairness have become urgent. AI systems can perpetuate, amplify, or even create unfairness in several ways:

  • Training data bias: Systems learn from historical data that may reflect past discrimination
  • Proxy discrimination: Using variables correlated with protected characteristics
  • Feedback loops: Biased outputs become inputs for future training
  • Opacity: Complex models make discrimination difficult to detect
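
The proxy-discrimination mechanism listed above can be made concrete with a small simulation. The sketch below builds a synthetic applicant pool in which group membership is hidden from the decision rule, but a correlated variable (here, a hypothetical "neighborhood" field) is not. The data, the correlation strength, and the approval rule are all invented for illustration.

```python
import random

random.seed(0)

# Synthetic applicant pool. Group membership is never given to the decision rule,
# but the "neighborhood" variable is strongly correlated with group (a proxy).
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    p_neighborhood_2 = 0.8 if group == "B" else 0.2   # assumed correlation strength
    neighborhood = 2 if random.random() < p_neighborhood_2 else 1
    applicants.append({"group": group, "neighborhood": neighborhood})

def approve(applicant):
    """A 'group-blind' rule that nevertheless keys on the correlated proxy."""
    return applicant["neighborhood"] == 1

# Even though the rule never looks at group, approval rates diverge sharply.
for g in ("A", "B"):
    members = [a for a in applicants if a["group"] == g]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"Group {g} approval rate: {rate:.0%}")
```

Removing the protected attribute from the inputs is therefore not enough to guarantee fair outcomes when correlated proxies remain.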

Documented examples of AI unfairness include:

  • Facial recognition systems with higher error rates for darker-skinned faces
  • Hiring algorithms that disadvantaged women
  • Criminal risk assessment tools with racial disparities
  • Healthcare algorithms that underestimated Black patients' needs

Competing Definitions of Algorithmic Fairness

Computer scientists have proposed numerous formal definitions of fairness for algorithmic systems. Unfortunately, these definitions are often mathematically incompatible—a system cannot satisfy all of them simultaneously.

| Fairness Definition | Meaning | Limitation |
| --- | --- | --- |
| Demographic parity | Equal positive rates across groups | Ignores legitimate differences |
| Equalized odds | Equal true/false positive rates | Requires knowing true outcomes |
| Individual fairness | Similar people treated similarly | "Similarity" is undefined |
| Counterfactual fairness | Same decision if protected attribute differed | Requires causal modeling |

This impossibility result echoes deeper philosophical debates about fairness—there is no neutral, context-free definition that everyone will accept.
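
A small worked example helps show why these definitions pull in different directions. The Python sketch below computes the selection rate and error rates for two groups of toy decisions; the records are fabricated for illustration, and the two groups are given different base rates so that the tension between the criteria appears.

```python
def group_metrics(records):
    """Selection rate, true positive rate, and false positive rate for one group.

    Each record is (predicted_positive, actually_positive), both booleans.
    """
    n = len(records)
    selection_rate = sum(1 for pred, _ in records if pred) / n
    positives = [pred for pred, actual in records if actual]
    negatives = [pred for pred, actual in records if not actual]
    tpr = sum(positives) / len(positives)
    fpr = sum(negatives) / len(negatives)
    return selection_rate, tpr, fpr

# Fabricated decisions: (model predicted "yes", outcome was actually positive).
group_a = [(True, True)] * 40 + [(True, False)] * 10 + \
          [(False, True)] * 10 + [(False, False)] * 40
group_b = [(True, True)] * 20 + [(True, False)] * 10 + \
          [(False, True)] * 20 + [(False, False)] * 50

for name, records in (("A", group_a), ("B", group_b)):
    sel, tpr, fpr = group_metrics(records)
    print(f"Group {name}: selection rate {sel:.0%}, TPR {tpr:.0%}, FPR {fpr:.0%}")

# Demographic parity compares selection rates (50% vs 30% here: violated).
# Equalized odds compares TPR and FPR across groups (80% vs 50% TPR here: violated).
# Because the groups have different base rates (50% vs 40% truly positive),
# adjusting decisions to satisfy one criterion will generally break the other.
```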

Diagram: AI Fairness Trade-offs

AI Fairness Definitions Trade-off Explorer

Type: microsim

Purpose: Allow students to explore trade-offs between different algorithmic fairness definitions

Bloom Level: Evaluate (L5) Bloom Verb: assess, judge, critique

Learning Objective: Students will be able to evaluate the trade-offs between competing fairness definitions in AI systems

Canvas layout: - Left (500px): Interactive visualization showing two groups and outcomes - Right (200px): Controls and metrics panel

Visual elements:

  • Two populations (Group A and Group B) shown as dot distributions
  • Decision threshold line that can be adjusted
  • Color coding: green for correct decisions, red for errors
  • Real-time fairness metrics display

Interactive controls:

  • Slider: Decision threshold
  • Toggle: Equal base rates vs. different base rates
  • Dropdown: Fairness metric to optimize (demographic parity, equalized odds, etc.)
  • Button: "Auto-optimize for selected metric"

Data Visibility Requirements:

  • Stage 1: Show raw populations with true labels
  • Stage 2: Show current threshold and resulting decisions
  • Stage 3: Calculate and display all fairness metrics
  • Stage 4: Highlight which metrics are satisfied/violated

Instructional Rationale: This MicroSim allows students to directly experience the impossibility theorem—that optimizing for one fairness metric necessarily violates others—rather than just reading about it.

Default parameters:

  • Threshold: 0.5
  • Base rates: Equal (50% positive in each group)
  • Metric: Demographic parity

Implementation: p5.js with interactive threshold adjustment


Part VI: Case Study—AI Models Evaluate Historical Fairness Leaders

Comparing AI Perspectives on Moral Leadership

One fascinating way to explore how fairness is understood and evaluated is to ask different AI systems the same question about moral leadership. In January 2026, we posed identical prompts to four leading AI systems: Claude (Anthropic), ChatGPT (OpenAI), Grok (xAI), and DeepSeek.

The Prompt

"Please generate two lists. The first list is a list of famous people that tried to make the world more fair for everyone on Earth. Then create a second list. This is a list of people that have made the world more unfair. For each person, create a short description of their actions and how history has regarded their actions."

The results reveal both remarkable consensus and interesting divergences in how these AI systems—trained on different datasets with different approaches—evaluate moral leadership.


Diagram: Champions of Fairness - Architects of Unfairness

The visualization below shows which historical figures were identified by each AI model as champions of fairness. Figures in the center were named by all four models, while those in outer regions were unique to specific models.
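
The overlap analysis behind a diagram like this reduces to set intersections over each model's list of names. The sketch below uses abbreviated, hypothetical extracts of the lists (consistent with the consensus findings reported below) rather than the models' full responses.

```python
from collections import Counter

# Abbreviated, hypothetical extracts of each model's "champions of fairness" list,
# shown only to illustrate the overlap computation (not the full model outputs).
champions = {
    "Claude":   {"Nelson Mandela", "Mahatma Gandhi", "Martin Luther King Jr.",
                 "Malala Yousafzai", "Eleanor Roosevelt", "Harriet Tubman"},
    "ChatGPT":  {"Nelson Mandela", "Mahatma Gandhi", "Martin Luther King Jr.",
                 "Malala Yousafzai", "Eleanor Roosevelt"},
    "Grok":     {"Nelson Mandela", "Mahatma Gandhi", "Martin Luther King Jr.",
                 "Malala Yousafzai", "Harriet Tubman"},
    "DeepSeek": {"Nelson Mandela", "Mahatma Gandhi", "Martin Luther King Jr.",
                 "Eleanor Roosevelt", "Harriet Tubman"},
}

# Figures named by every model sit at the center of the diagram.
universal = set.intersection(*champions.values())
print("All four models:", sorted(universal))

# Count how many of the four models named each figure (4-of-4, 3-of-4, ...).
counts = Counter(name for names in champions.values() for name in names)
for name, k in counts.most_common():
    print(f"{k} of 4 models: {name}")
```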

Key Findings: Champions of Fairness

Universal Consensus (All 4 Models)

Three figures appeared on every list:

  1. Nelson Mandela - Anti-apartheid leader who chose reconciliation over revenge
  2. Mahatma Gandhi - Pioneer of nonviolent resistance for Indian independence
  3. Martin Luther King Jr. - Civil rights leader whose advocacy transformed American law

Strong Consensus (3 of 4 Models)

  • Malala Yousafzai - Education activist (Claude, ChatGPT, Grok)
  • Eleanor Roosevelt - Architect of the Universal Declaration of Human Rights (Claude, ChatGPT, DeepSeek)
  • Harriet Tubman - Underground Railroad conductor (Claude, Grok, DeepSeek)

Unique Selections

Each model also included figures not mentioned by others:

  • Claude: Sojourner Truth, Desmond Tutu
  • Grok: Susan B. Anthony, Abraham Lincoln
  • DeepSeek: Muhammad Yunus (microfinance), Wangari Maathai (environmental justice), Greta Thunberg (climate activism)

Key Findings: Architects of Unfairness

Universal Consensus (All 4 Models)

Only two figures appeared on every unfairness list:

  1. Pol Pot - Architect of Cambodian genocide
  2. Leopold II of Belgium - Brutal exploitation of the Congo

Notable Divergence

Surprisingly, Hitler and Stalin appeared on only three lists—DeepSeek did not include them, instead focusing on figures like Josef Mengele, Cecil Rhodes, and Roger Taney (author of the Dred Scott decision).

Analysis: What the Differences Reveal

The variations between AI models are instructive:

  1. Training data matters: Different corpora emphasize different historical figures and narratives

  2. Cultural perspectives: DeepSeek's inclusion of figures like the "Robber Barons" suggests different emphases on economic vs. political unfairness

  3. Temporal scope: Some models focused on recent figures (Thunberg), others on historical ones (Lincoln)

  4. Geographic diversity: DeepSeek showed more global diversity, including African figures like Wangari Maathai

  5. Consensus indicates strength: The universal agreement on Mandela, Gandhi, and King suggests these figures have achieved near-universal recognition as moral exemplars

Discussion Questions

  1. Consensus and truth: Does agreement across AI models provide evidence that these judgments are "correct"? Or might all models share the same biases?

  2. Missing figures: Who is conspicuously absent from these lists? What might explain the omissions?

  3. Moral complexity: Some historical figures (Lincoln, Gandhi) have been criticized for certain views or actions. How should we evaluate leaders with mixed legacies?

  4. AI as moral judge: Should we use AI systems to evaluate moral character? What are the risks and benefits?

  5. Training data influence: How might changing the training data change these results? What responsibilities do AI developers have regarding moral content?


Summary and Key Takeaways

This chapter has explored fairness from its evolutionary origins to its contemporary manifestations in AI systems. Several themes emerge:

Fairness is both universal and particular

The capacity for fairness judgments appears to be a human (and perhaps primate) universal, but the specific norms governing fair behavior vary across cultures and change over time.

Fairness operates through emotion as well as reason

We feel unfairness viscerally before we can articulate why something is wrong. This emotional foundation has important implications for how we design institutions and interventions.

Multiple fairness principles can conflict

Equality, equity, and need-based distribution all have legitimate claims but cannot always be simultaneously satisfied. Wisdom involves recognizing which principle applies in which context.

Historical progress is possible but not guaranteed

Practices once considered fair (slavery, denying women the vote) are now recognized as profound injustices. This should inspire both hope and humility about our own moral certainties.

Technology reflects human values

AI systems embed and amplify human conceptions of fairness—including our biases and blind spots. Developing fair AI requires grappling with contested questions that have no purely technical solutions.


References and Further Reading

Academic Sources

  • Brosnan, S. F., & de Waal, F. B. (2003). Monkeys reject unequal pay. Nature, 425(6955), 297-299.
  • Rawls, J. (1971). A Theory of Justice. Harvard University Press.
  • Sen, A. (2009). The Idea of Justice. Harvard University Press.
  • Henrich, J., et al. (2010). Markets, religion, community size, and the evolution of fairness and punishment. Science, 327(5972), 1480-1484.

AI Model Responses