Natural Sciences and the Scientific Method

Welcome, Knowledge Explorers!

Welcome to one of the most powerful — and most misunderstood — systems of knowledge production ever devised: the natural sciences. You rely on scientific knowledge every day, from the medicine you take to the weather forecast you check. But how do we know that scientific claims are trustworthy? What makes science different from other ways of knowing? And can science ever be wrong? Let's find out together.

The Scientific Method: A Framework for Inquiry

When people speak of "the scientific method," they usually imagine a neat, step-by-step procedure: observe, hypothesize, test, conclude. But the reality is far more nuanced. The scientific method is not a single rigid recipe — it is a family of systematic approaches for investigating the natural world, testing ideas against evidence, and revising conclusions in light of new data.

At its core, the scientific method involves several interconnected practices:

  • Observation: Noticing patterns, phenomena, or puzzles in the natural world
  • Question formulation: Asking specific, testable questions about those observations
  • Hypothesis generation: Proposing tentative explanations that can be tested
  • Experimentation and data collection: Designing procedures to gather evidence
  • Analysis: Interpreting results using logical and statistical reasoning
  • Conclusion and communication: Drawing inferences and sharing findings with the scientific community

What makes this framework epistemologically distinctive is its commitment to empirical evidence — knowledge claims in natural science must ultimately be grounded in observation and measurement, not authority, tradition, or intuition alone. As you learned in Chapter 3, empirical evidence is evidence that can be observed, measured, or replicated. The natural sciences place this kind of evidence at the very center of knowledge production.

However, it would be misleading to suggest that scientists follow these steps in a fixed order. In practice, scientific inquiry is messy, creative, and iterative. A surprising experimental result may lead a researcher back to the observation stage. A failed experiment may generate a better hypothesis than the original one. The "method" is better understood as a set of guiding principles rather than a strict algorithm.

Hypothesis Testing and Controlled Experiments

A hypothesis is a tentative, testable explanation for an observed phenomenon. The process of hypothesis testing involves designing an investigation to determine whether the available evidence supports or undermines the hypothesis. This is where abstract ideas meet concrete reality.

The gold standard for hypothesis testing in many natural sciences is the controlled experiment. In a controlled experiment, the researcher manipulates one variable (the independent variable) while holding all other conditions constant, then measures the effect on another variable (the dependent variable). A group that does not receive the experimental treatment serves as the control group, providing a baseline for comparison.

Consider a simple example: a biologist wants to test whether a new fertilizer increases plant growth. She sets up two groups of identical plants in identical conditions. One group receives the fertilizer; the other does not. After four weeks, she measures the height of each plant. The only difference between the two groups is the fertilizer — so any difference in growth can reasonably be attributed to it.

Element | Description | Example
Independent variable | What the researcher changes | Fertilizer (present or absent)
Dependent variable | What the researcher measures | Plant height after four weeks
Control group | Group without the treatment | Plants with no fertilizer
Controlled variables | Conditions kept the same | Light, water, soil, temperature
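The logic of the fertilizer experiment can be sketched as a simple permutation test — one common way to ask whether an observed difference between the two groups could plausibly be due to chance. The plant heights below are invented for illustration, not taken from a real study:

```python
import random
import statistics

random.seed(42)

# Hypothetical plant heights (cm) after four weeks — illustrative data only.
fertilized = [24.1, 26.3, 25.8, 27.0, 24.9, 26.5, 25.2, 26.8]
control = [22.0, 23.4, 21.8, 23.9, 22.7, 23.1, 22.4, 23.6]

observed_diff = statistics.mean(fertilized) - statistics.mean(control)

# Permutation test: if the fertilizer had no effect, the group labels are
# arbitrary, so reshuffling them should often produce a difference as
# large as the one actually observed.
pooled = fertilized + control
n = len(fertilized)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if abs(diff) >= abs(observed_diff):
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed_diff:.2f} cm")
print(f"p-value: {p_value:.4f}")
```

A small p-value means that chance alone rarely produces a gap this large — which is why the design matters: only because every other condition was held constant can the gap be attributed to the fertilizer rather than to a confounding variable.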

This design is powerful because it isolates cause and effect. But not all scientific questions can be investigated through controlled experiments. Astronomers cannot manipulate stars. Geologists cannot replay Earth's history. Ecologists often study systems too complex for laboratory control. This is where other methods become essential.

Quantitative and Qualitative Methods

Scientific inquiry draws on two broad categories of evidence. Quantitative methods involve the collection and analysis of numerical data — measurements, counts, rates, and statistical patterns. A physicist measuring the acceleration due to gravity, a chemist calculating reaction rates, and an epidemiologist tracking infection rates are all using quantitative methods. The strength of quantitative approaches lies in their precision, their capacity for statistical analysis, and their reproducibility.

Qualitative methods, by contrast, involve the collection and interpretation of non-numerical data — descriptions, classifications, observations of behavior, or analysis of patterns. A field biologist describing the mating rituals of a bird species, or a geologist classifying rock formations by their visible characteristics, is employing qualitative methods. Qualitative data provides richness, context, and nuance that numbers alone cannot capture.

In practice, most scientific research uses both. A marine biologist might quantitatively measure water temperature and salinity while qualitatively describing coral bleaching patterns. The two approaches complement each other, each compensating for the other's limitations.

Diagram: Quantitative vs. Qualitative Methods in Science

Type: infographic
sim-id: quant-qual-methods
Library: p5.js
Status: Specified

Bloom Level: Analyze (L4)
Bloom Verb: Compare
Learning Objective: Compare quantitative and qualitative methods by identifying their strengths, limitations, and appropriate applications in scientific inquiry.

Instructional Rationale: A side-by-side interactive comparison allows students to explore concrete examples of each method type and see how they complement one another in real scientific investigations.

Visual elements:
  • Two columns: "Quantitative" (left) and "Qualitative" (right)
  • Each column contains 4-5 example cards showing a research scenario
  • A center zone labeled "Mixed Methods" with arrows showing how both approaches combine
  • Color-coded strength indicators (precision, context, reproducibility, richness)

Interactive controls:
  • Click any example card to expand a brief case study (2-3 sentences)
  • Toggle button to show/hide "Strengths and Limitations" for each method
  • Dropdown to filter examples by discipline (biology, chemistry, physics, earth science)

Default state: Both columns visible with example cards collapsed.

Color scheme: Quantitative = teal, Qualitative = amber, Mixed Methods = coral

Responsive design: Canvas resizes to fit container width. Columns stack vertically on narrow screens.

Implementation: p5.js with clickable regions and createSelect() for filtering

Falsifiability: The Criterion That Defines Science

One of the most influential ideas in the philosophy of science comes from the philosopher Karl Popper: falsifiability. A hypothesis or theory is falsifiable if it is possible, in principle, to describe an observation or experiment that would prove it wrong. This does not mean the theory is wrong — it means it could be shown to be wrong if the evidence went against it.

Why does this matter? Because a claim that cannot possibly be disproven is not really saying anything testable about the world. Consider two claims:

  • "All metals expand when heated." — This is falsifiable. If you found a metal that contracted when heated, the claim would be disproven.
  • "Invisible spirits control all chemical reactions, but they leave no detectable trace." — This is unfalsifiable. No possible observation could count against it, because the claim has been constructed to be immune to evidence.

Popper argued that falsifiability is what separates science from non-science. Scientific theories stick their necks out — they make specific predictions that could turn out to be wrong. When a prediction survives rigorous testing, our confidence in the theory grows. When a prediction fails, the theory must be revised or abandoned.

Sofia's Reflection

Notice something surprising here: science progresses not just by confirming what we believe, but by trying to disprove it. A theory that has survived many serious attempts at falsification is more trustworthy than one that has never been tested at all. What does this tell us about the relationship between doubt and knowledge?

Unfalsifiable claims — statements that are structured so that no possible evidence could count against them — fall outside the scope of scientific inquiry. This does not necessarily mean they are meaningless or unimportant. Many ethical, aesthetic, and metaphysical claims are unfalsifiable. But it does mean they cannot be evaluated using the methods of natural science. Recognizing unfalsifiable claims is a critical skill for any knowledge explorer.

Replication and the Replication Crisis

A single experiment, no matter how well designed, is not enough to establish scientific knowledge. Replication — the process of repeating an experiment or study to see whether the same results can be obtained — is essential for building confidence in scientific findings. If only one laboratory in the world can produce a particular result, scientists are right to be skeptical. Genuine scientific knowledge should be reproducible by independent researchers following the same procedures.

Replication serves as a powerful check against errors, biases, and fraud. A result that replicates across different laboratories, with different researchers, under varying conditions, earns a much higher degree of trust than one that has been demonstrated only once.

Yet in recent years, the scientific community has confronted a disturbing trend: the replication crisis. Across fields including psychology, medicine, and biology, systematic efforts to replicate published findings have revealed that a significant proportion of results cannot be reproduced. A landmark 2015 study by the Open Science Collaboration attempted to replicate 100 psychology studies and found that only about 36% produced results consistent with the originals.

The causes of the replication crisis are complex:

  • Publication bias: Journals have historically preferred to publish positive, novel results rather than null findings or replications
  • Small sample sizes: Studies with too few participants can produce results driven by chance
  • Flexible analysis methods: Researchers may — sometimes unconsciously — adjust their statistical methods until they find a significant result (a practice called "p-hacking")
  • Pressure to publish: The academic incentive structure rewards quantity and novelty over rigor and replication
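The small-sample problem above can be demonstrated directly: if many underpowered studies are run where no real effect exists, a predictable fraction will still come out "significant" by chance — and publication bias ensures those are exactly the results that get printed. A minimal simulation (invented data, a crude Welch t-statistic, standard library only):

```python
import random
import statistics

random.seed(0)

def fake_study(n=10):
    """Simulate one small two-group study with NO real effect:
    both groups are drawn from the same distribution."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Welch's t-statistic; any large value here is pure noise.
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > 2.1  # roughly the p < 0.05 cutoff for these sample sizes

# Run 1,000 such studies; about 5% should be "significant" by chance alone.
false_positives = sum(fake_study() for _ in range(1000))
print(f"'Significant' results with no real effect: {false_positives} / 1000")
```

Roughly 50 of the 1,000 null studies clear the significance bar. If journals publish only those, the literature fills with effects that were never real — which is precisely what replication attempts later expose.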

The replication crisis does not mean that science is broken. Rather, it has prompted a healthy reckoning with the methods and incentive structures of scientific research. Many fields are now adopting reforms such as pre-registration of studies, larger sample sizes, and open data practices.

Peer Review and Scientific Consensus

Before a scientific finding enters the body of shared knowledge, it typically passes through peer review — a process in which other experts in the same field evaluate the research for its methodology, reasoning, and conclusions. Peer reviewers check whether the experiment was properly designed, whether the data supports the conclusions, and whether alternative explanations were adequately considered.

Peer review is not perfect. Reviewers may have their own biases, they may miss errors, and the process can be slow. But it provides a crucial layer of quality control that distinguishes scientific knowledge from mere opinion. When someone says "the study was published in a peer-reviewed journal," they are signaling that the work has been scrutinized by knowledgeable experts before being accepted.

Over time, as evidence accumulates from many independent studies, peer-reviewed findings, and theoretical developments, the scientific community may reach scientific consensus — a general agreement among experts about the best available explanation for a phenomenon. The consensus that the Earth's climate is warming due to human activity, for example, is not based on a single study but on thousands of independent investigations converging on the same conclusion.

Watch Out!

Be careful not to confuse scientific consensus with mere popular opinion. Consensus in science is built on evidence, rigorous methods, and expert evaluation — not on counting votes. At the same time, consensus is not infallible. History shows that scientific consensus can be overturned when new evidence demands it. The strength of science lies precisely in this willingness to revise.

Theories, Laws, and Scientific Models

Three terms that students frequently confuse are theory, law, and model. Understanding the distinctions is essential for grasping the structure of scientific knowledge.

A scientific theory is a well-substantiated explanation of some aspect of the natural world that has been repeatedly tested and confirmed through observation and experimentation. Contrary to everyday usage, where "theory" often means "guess," a scientific theory represents one of the highest levels of confidence in science. The theory of evolution by natural selection, the germ theory of disease, and the theory of plate tectonics are all examples — each is supported by vast bodies of evidence from multiple independent lines of inquiry.

A scientific law describes a regular, observable pattern in nature, often expressed mathematically. Newton's law of universal gravitation, for instance, describes how objects attract each other:

\[ F = G \frac{m_1 m_2}{r^2} \]

where \( F \) is the gravitational force, \( G \) is the gravitational constant, \( m_1 \) and \( m_2 \) are the masses, and \( r \) is the distance between their centers.
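As a quick numeric check, the law can be evaluated for a 1 kg object at Earth's surface, using standard reference values for \( G \), Earth's mass, and Earth's mean radius:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2
G = 6.674e-11        # gravitational constant, N·m²/kg²
m_earth = 5.972e24   # mass of Earth, kg
m_object = 1.0       # mass of the object, kg
r = 6.371e6          # mean radius of Earth, m

F = G * m_earth * m_object / r**2
print(f"Force on a 1 kg mass at Earth's surface: {F:.2f} N")
# ≈ 9.8 N — the familiar weight of a 1 kg object
```

Note what the law does and does not do: it predicts the force with great precision, but it says nothing about why masses attract. That explanatory work belongs to theory (for Newton, action at a distance; for Einstein, curved spacetime).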

A law tells you what happens; a theory tells you why it happens. Laws do not "graduate" into theories — they are different kinds of knowledge claims.

A scientific model is a simplified representation of a complex system, designed to help scientists understand, predict, or communicate about phenomena. The Bohr model of the atom, climate simulation models, and the double helix model of DNA are all examples. Models are powerful precisely because they simplify — but this also means they have limitations. Every model leaves something out.

Concept | What It Does | Example | Limitations
Theory | Explains why something happens | Theory of evolution | May be revised with new evidence
Law | Describes what happens (pattern) | Newton's law of gravitation | Does not explain the mechanism
Model | Represents a complex system simply | Bohr model of the atom | Simplifies; omits details

Diagram: Theory, Law, and Model Relationships

Type: diagram
sim-id: theory-law-model
Library: p5.js
Status: Specified

Bloom Level: Understand (L2)
Bloom Verb: Distinguish
Learning Objective: Distinguish between scientific theories, laws, and models by identifying the function and scope of each.

Instructional Rationale: Students commonly believe that laws are "higher" than theories. An interactive diagram showing their distinct roles and relationships corrects this misconception visually.

Visual elements:
  • Three labeled nodes: "Theory," "Law," and "Model" arranged in a triangle
  • Connecting arrows with labels describing their relationships (e.g., "Theories explain the mechanisms behind laws," "Models simplify theories for prediction")
  • Example cards attached to each node that expand on click
  • A "Common Misconception" callout: "Laws do NOT become theories. They serve different functions."

Interactive controls:
  • Click each node to expand 2-3 concrete examples with brief descriptions
  • Hover over relationship arrows to see explanatory text
  • Toggle between "Science Examples" and "Everyday Analogies" modes

Default state: Triangle layout with all three nodes visible, examples collapsed.

Color scheme: Theory = teal, Law = amber, Model = coral, arrows = dark gray

Responsive design: Canvas resizes to fit container width. Triangle repositions for narrow screens.

Implementation: p5.js with clickable nodes and hover detection

Paradigms and Normal Science

In 1962, the physicist and historian of science Thomas Kuhn published The Structure of Scientific Revolutions, a book that transformed how philosophers, historians, and scientists themselves think about the progress of science. Kuhn introduced several concepts that have become central to the philosophy of science.

A paradigm is a framework of assumptions, methods, standards, and exemplary achievements that defines how a scientific community understands and investigates the world during a given period. The paradigm tells scientists what questions are worth asking, what methods are appropriate, what counts as evidence, and what a satisfactory explanation looks like. Newtonian mechanics was a paradigm. Darwinian evolutionary biology is a paradigm. The standard model of particle physics is a paradigm.

During periods of normal science, scientists work within the established paradigm. They solve what Kuhn called "puzzles" — specific problems whose solutions are expected to be achievable within the paradigm's framework. Normal science is productive and cumulative: it refines measurements, extends theories to new cases, and fills in gaps. The vast majority of scientific work — across all historical periods — is normal science.

Normal science is not mindless or uncreative. Solving puzzles within a paradigm requires ingenuity, persistence, and deep expertise. But it operates under a shared set of assumptions that most practitioners do not question. The paradigm provides the rules of the game.

Anomalies, Paradigm Shifts, and Scientific Revolutions

What happens when normal science encounters results that the paradigm cannot explain? Kuhn called these anomalies — experimental findings or observations that resist explanation within the current framework. A single anomaly rarely overthrows a paradigm. Scientists first attempt to explain anomalies within the existing framework: perhaps the measurement was wrong, or a secondary factor was overlooked.

But when anomalies accumulate — when the paradigm increasingly struggles to account for the evidence — a period of crisis emerges. Scientists begin to question the foundational assumptions they had taken for granted. New, competing frameworks are proposed. Eventually, if a new framework proves more successful at explaining the evidence, the scientific community undergoes a paradigm shift — a fundamental change in the basic assumptions, methods, and theories of a discipline.

Kuhn called these dramatic transitions scientific revolutions. History offers striking examples:

  • The Copernican Revolution: The shift from an Earth-centered (geocentric) to a Sun-centered (heliocentric) model of the solar system
  • The Chemical Revolution: The replacement of phlogiston theory with Lavoisier's oxygen-based theory of combustion
  • The Einsteinian Revolution: The replacement of Newtonian mechanics with Einstein's theories of relativity for extreme speeds and gravitational fields
  • The Plate Tectonics Revolution: The acceptance of continental drift and plate tectonics over static-Earth geology

Diagram: Kuhn's Cycle of Scientific Revolutions

Type: diagram
sim-id: kuhn-cycle
Library: p5.js
Status: Specified

Bloom Level: Analyze (L4)
Bloom Verb: Trace
Learning Objective: Trace the stages of Kuhn's model of scientific revolutions by identifying how paradigms form, enter crisis, and are replaced.

Instructional Rationale: A cyclical diagram allows students to see scientific revolutions as a recurring process rather than a one-time event, reinforcing Kuhn's central insight about the structure of scientific change.

Visual elements:
  • A circular flow diagram with five stages: "Pre-Science" → "Normal Science" → "Anomalies / Crisis" → "Revolution" → "New Paradigm" → (back to "Normal Science")
  • Each stage is a clickable node with a brief description
  • Historical examples placed alongside each stage (e.g., Copernican revolution mapped to the cycle)
  • Animated arrows showing the direction of flow

Interactive controls:
  • Click each stage to expand a description and historical example
  • A timeline slider at the bottom to move through a specific historical revolution step by step
  • Dropdown to select which revolution to trace: Copernican, Chemical, Einsteinian, Plate Tectonics

Default state: Full cycle visible with stages labeled, examples collapsed.

Color scheme: Normal Science = teal, Crisis = coral, Revolution = amber, arrows = dark gray

Responsive design: Canvas resizes to fit container width. Node positions recalculate on resize.

Implementation: p5.js with animated transitions and createSelect() for revolution selector

Key Insight

Kuhn's framework raises a profound epistemological question: if scientists working within different paradigms see the world differently — asking different questions, using different methods, even interpreting the same data differently — can we really say that science is steadily getting closer to the truth? Or is it more like switching between different maps, each useful but none complete? What perspective might we be missing?

One of Kuhn's most provocative claims was that paradigms are incommensurable — that scientists working in different paradigms may not even be able to fully understand each other, because the same terms can mean different things in different frameworks. While this claim remains debated, it highlights an important epistemological point: the framework through which we interpret evidence shapes what we see.

Scientific Skepticism and Pseudoscience

Scientific skepticism is the practice of questioning claims and demanding adequate evidence before accepting them. It is not the same as cynicism or reflexive doubt. A scientific skeptic does not refuse to believe anything — rather, they proportion their belief to the evidence. Extraordinary claims require extraordinary evidence. Familiar claims supported by well-established evidence can be accepted more readily.

Scientific skepticism is the healthy form of skepticism you explored in Chapter 8, applied specifically to claims about the natural world. It asks: What is the evidence? Has the claim been tested? Can the results be replicated? Has it survived peer review?

In contrast, pseudoscience refers to claims, beliefs, or practices that are presented as scientific but do not adhere to the methods and standards of genuine science. Pseudoscience often mimics the vocabulary and appearance of science — using technical-sounding language, citing impressive-seeming studies, or appealing to authority figures — while lacking the substance.

Common features of pseudoscience include:

  • Claims that are unfalsifiable — structured so that no evidence could ever disprove them
  • Reliance on anecdotal evidence rather than systematic study
  • Resistance to peer review and external scrutiny
  • Absence of a plausible mechanism consistent with established science
  • Claims of certainty rather than acknowledgment of uncertainty
  • Appeal to ancient wisdom, authority, or popularity rather than evidence

Examples of pseudoscience include astrology, homeopathy, and the claim that the Earth is flat. Each of these presents itself using the trappings of science — star charts, dilution ratios, measurement disputes — but fails to meet the standards of evidence, falsifiability, and peer review that define genuine scientific inquiry.

The Demarcation Problem

The question of where exactly to draw the line between science and non-science is known as the demarcation problem — one of the central questions in the philosophy of science. It sounds straightforward: surely we can tell the difference between physics and astrology? But the boundary turns out to be surprisingly difficult to define precisely.

Popper proposed falsifiability as the criterion. But critics have pointed out limitations: some genuinely scientific claims are difficult to falsify directly (string theory, for example), and some pseudoscientific claims do make falsifiable predictions that happen to fail without their proponents abandoning them.

Other proposed criteria include:

  • Methodological rigor: Does the practice use controlled experiments, systematic observation, and statistical analysis?
  • Self-correction: Does the field revise its claims in response to new evidence?
  • Peer review: Are the claims subject to scrutiny by independent experts?
  • Consistency with established knowledge: Do the claims fit with or build upon what is already well-established?
  • Progress: Does the field accumulate new knowledge over time?

No single criterion perfectly separates science from non-science. Most philosophers of science now favor a multi-criteria approach, recognizing that science is defined by a cluster of features rather than a single bright line. The demarcation problem remains open — and that is itself a valuable epistemological lesson.
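As a toy illustration of this multi-criteria idea — not a real classifier — one might score a claim by the fraction of criteria it satisfies. The criteria names and the example evaluations below are assumptions chosen purely for demonstration:

```python
# Cluster of demarcation criteria discussed above (illustrative labels).
CRITERIA = [
    "falsifiable",
    "peer_reviewed",
    "replicated",
    "self_correcting",
    "consistent_with_established_science",
]

def demarcation_score(answers):
    """Return the fraction of criteria a claim satisfies (0.0 to 1.0)."""
    return sum(answers.get(c, False) for c in CRITERIA) / len(CRITERIA)

# Hypothetical evaluations, for illustration only.
astrology = {"falsifiable": True}      # makes predictions, but fails the rest
germ_theory = {c: True for c in CRITERIA}

print(f"astrology:   {demarcation_score(astrology):.1f}")
print(f"germ theory: {demarcation_score(germ_theory):.1f}")
```

The point of the sketch is the shape of the reasoning, not the numbers: demarcation works like a weight of evidence across several features, producing a spectrum rather than a sharp yes/no verdict.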

Sofia's Tip

When evaluating whether a claim is scientific or pseudoscientific, don't rely on just one criterion. Ask a cluster of questions: Is the claim falsifiable? Has it been tested? Can the results be replicated? Has it been peer-reviewed? Does it self-correct when evidence contradicts it? The more of these a claim fails, the more suspicious you should be. This multi-criteria approach will serve you well in your TOK essay.

Diagram: The Demarcation Spectrum

Type: interactive
sim-id: demarcation-spectrum
Library: p5.js
Status: Specified

Bloom Level: Evaluate (L5)
Bloom Verb: Assess
Learning Objective: Assess where various knowledge claims fall on the spectrum from well-established science to pseudoscience using multiple demarcation criteria.

Instructional Rationale: Rather than presenting science/pseudoscience as a binary, a spectrum visualization helps students appreciate the gray areas and apply multiple criteria simultaneously.

Visual elements:
  • A horizontal spectrum bar from "Well-established Science" (left) to "Pseudoscience" (right) with intermediate zones
  • Draggable cards representing various claims (e.g., "General relativity," "Astrology," "Acupuncture," "String theory," "Homeopathy," "Evolutionary psychology")
  • A criteria checklist panel that evaluates each claim against 5 criteria: falsifiability, peer review, replication, self-correction, consistency

Interactive controls:
  • Drag claim cards to position them on the spectrum
  • Click a card to see the criteria checklist with checkmarks and explanations
  • A "Check My Answers" button that reveals expert positioning with explanations
  • Reset button to try again

Default state: Cards in a stack, spectrum empty, ready for student interaction.

Color scheme: Science end = teal, Pseudoscience end = coral, intermediate = amber gradient

Responsive design: Canvas resizes to fit container width. Cards stack vertically on narrow screens.

Implementation: p5.js with drag-and-drop using mousePressed/mouseReleased and createButton()

Science Denial and the Misuse of Skepticism

While scientific skepticism is essential to the health of science, it can be distorted into something very different: science denial. Science denial occurs when individuals or groups reject well-established scientific findings not because of genuine evidence or methodological concerns, but because the findings conflict with their ideological, economic, political, or personal commitments.

As you explored in Chapter 8, there is a critical difference between healthy skepticism and denialism. Healthy skepticism says: "Show me the evidence, and I will follow it wherever it leads." Denialism says: "I have already decided what I believe, and I will reject or reinterpret any evidence that contradicts it."

Science denial typically employs recognizable rhetorical strategies:

  • Cherry-picking: Selecting only the evidence that supports the desired conclusion while ignoring the broader body of evidence
  • Fake experts: Citing individuals with apparent credentials but no relevant expertise
  • Moving the goalposts: Continually demanding more evidence while never specifying what would be sufficient
  • Conspiracy theories: Claiming that the scientific community is engaged in a coordinated deception
  • Impossible expectations: Demanding absolute certainty before accepting any conclusion

Climate change denial, vaccine hesitancy, and evolution denial all employ these strategies. In each case, the overwhelming scientific consensus — built on thousands of peer-reviewed studies, multiple independent lines of evidence, and decades of replication — is rejected not on evidential grounds but on ideological ones.

You've Got This!

Navigating the difference between healthy skepticism and science denial can be genuinely difficult — especially when sophisticated arguments are used to cast doubt on well-established science. The key is to look at the process, not just the conclusion. Is the person genuinely engaging with the evidence and willing to change their mind? Or are they working backward from a predetermined conclusion? You're thinking like an epistemologist when you ask that question!

Putting It All Together: The Structure of Scientific Knowledge

The concepts in this chapter form an interconnected system. The scientific method provides the framework. Hypothesis testing, controlled experiments, and quantitative and qualitative methods provide the tools. Falsifiability sets the criterion for what counts as scientific. Replication and peer review provide quality control. Theories, laws, and models represent different kinds of scientific knowledge. And Kuhn's framework of paradigms, normal science, anomalies, and scientific revolutions reveals how scientific knowledge changes over time.

Together, these elements create a knowledge-production system that is remarkably powerful — but also fallible. The replication crisis reminds us that even peer-reviewed science can be wrong. The demarcation problem reminds us that the boundary between science and non-science is not always sharp. And the history of paradigm shifts reminds us that today's best theories may one day be superseded.

This is not a weakness. It is science's greatest strength. Unlike systems of knowledge that claim absolute certainty, science builds self-correction into its very structure. It expects to be wrong sometimes — and it has systematic procedures for catching and correcting its own errors.

Stage of Scientific Knowledge | Key Process | What It Produces
Investigation | Scientific method, experiments | Data and observations
Evaluation | Peer review, replication | Validated findings
Organization | Theory formation, modeling | Explanatory frameworks
Revolution | Anomaly accumulation, paradigm shift | New paradigms
Protection | Scientific skepticism, demarcation criteria | Defense against pseudoscience and denial

Excellent Progress!

You've now explored how the natural sciences produce, validate, and revise knowledge — from the everyday work of hypothesis testing to the dramatic upheavals of scientific revolutions. You understand why falsifiability matters, how peer review and replication provide quality control, and how to distinguish genuine science from pseudoscience. You're thinking like an epistemologist! In the next chapter, we'll see how the human sciences and history face their own unique epistemological challenges.

Summary

This chapter covered 23 key concepts in the epistemology of natural science:

  1. Scientific Method — A family of systematic approaches for investigating the natural world through observation, hypothesis, testing, and revision
  2. Falsifiability — The criterion that a scientific claim must be capable, in principle, of being proven wrong
  3. Hypothesis Testing — The process of designing investigations to evaluate whether evidence supports or undermines a hypothesis
  4. Replication — Repeating experiments to verify that results are reproducible
  5. Peer Review — Expert evaluation of research before publication
  6. Paradigm — A framework of assumptions, methods, and standards shared by a scientific community
  7. Theory and Law — Theories explain why; laws describe what happens (pattern)
  8. Scientific Consensus — General agreement among experts based on accumulated evidence
  9. Scientific Models — Simplified representations of complex systems used for understanding and prediction
  10. Normal Science — Routine, puzzle-solving research conducted within an established paradigm
  11. Anomalies — Observations that resist explanation within the current paradigm
  12. Replication Crisis — The finding that many published scientific results cannot be reproduced
  13. Paradigm Shift — A fundamental change in the assumptions and methods of a scientific discipline
  14. Thomas Kuhn — Philosopher and historian who introduced the concepts of paradigms and scientific revolutions
  15. Scientific Revolutions — Dramatic transitions in which one paradigm replaces another
  16. Controlled Experiments — Investigations that isolate variables to establish cause and effect
  17. Qualitative Methods — Collection and interpretation of non-numerical data
  18. Quantitative Methods — Collection and analysis of numerical data
  19. Scientific Skepticism — Proportioning belief to evidence and demanding adequate support for claims
  20. Pseudoscience — Claims presented as scientific but lacking the methods and standards of genuine science
  21. Demarcation Problem — The philosophical challenge of defining the boundary between science and non-science
  22. Unfalsifiable Claims — Statements structured so that no possible evidence could count against them
  23. Science Denial — Rejection of well-established science for ideological rather than evidential reasons

Prerequisites

This chapter builds on concepts from: