Critical Thinking and Practical Application

Summary

This chapter teaches the practical critical thinking skills needed to evaluate any bold technology claim, then applies them in capstone exercises. We develop hype detection checklists, red flag identification for technology claims, press release analysis techniques, base rate reasoning, Bayesian reasoning, and technology due diligence methods. We then apply these skills to real-world tasks: writing executive briefs, conducting red team analyses of company roadmaps, building claims trackers, evaluating pitch decks, writing critical reviews, performing portfolio allocation analysis, critiquing national quantum computing strategies, and formulating board-level questions. Students will leave this course equipped to evaluate any emerging technology claim with rigor and clear-eyed realism.

Concepts Covered

This chapter covers the following 24 concepts from the learning graph:

  1. Critical Thinking Skills
  2. Hype Detection Checklist
  3. Red Flags in Tech Claims
  4. How to Read a Press Release
  5. Science Journalism Problems
  6. Base Rate Reasoning
  7. Extraordinary Claims Rule
  8. Technology Due Diligence
  9. Risk Assessment Framework
  10. Bayesian Reasoning Basics
  11. Applying Skepticism Broadly
  12. Fusion Hype Comparison
  13. AGI Hype Comparison
  14. Autonomous Vehicle Comparison
  15. Writing an Executive Brief
  16. Conducting Red Team Analysis
  17. Building a Claims Tracker
  18. Evaluating a Pitch Deck
  19. Writing a Critical Review
  20. Portfolio Allocation Analysis
  21. National QC Strategy Critique
  22. Board-Level QC Questions
  23. Skeptical Inquiry Method
  24. Making Better Tech Bets

Prerequisites

This chapter builds on concepts from all sixteen preceding chapters.


Fermi Welcomes You!

Welcome, fellow investigators, to our final chapter! We have spent sixteen chapters building an evidence-based framework for evaluating quantum computing claims. Now we distill everything into portable, reusable critical thinking tools that you can apply to any bold technology claim — quantum computing, fusion energy, AGI, autonomous vehicles, or the next thing that hasn't been invented yet. This is where skeptical inquiry becomes a practical superpower. But does the math check out? One last time — let's find out!

Learning Objectives

After completing this chapter, you will be able to:

  • Apply a structured hype detection checklist to any technology claim
  • Identify red flags in press releases, pitch decks, and policy documents
  • Use base rate reasoning and Bayesian updating to evaluate extraordinary claims
  • Conduct technology due diligence using a systematic risk assessment framework
  • Write executive briefs, critical reviews, and board-level questions about emerging technologies
  • Perform red team analysis of company roadmaps and national technology strategies
  • Build and maintain a claims tracker for accountability
  • Construct evidence-based portfolio allocation analyses for technology investments

Critical Thinking Skills: The Toolkit

Critical thinking is not a personality trait — it is a learnable set of cognitive tools. Throughout this course, you have been developing these tools implicitly. This section makes them explicit and portable.

The critical thinking toolkit for technology evaluation consists of six core skills:

| Skill | Definition | Key Question It Answers |
|---|---|---|
| Evidence evaluation | Assessing the quality, relevance, and sufficiency of supporting evidence | "Is this claim supported by strong evidence?" |
| Logical analysis | Identifying logical fallacies, circular reasoning, and unsupported inferences | "Does the conclusion follow from the premises?" |
| Quantitative reasoning | Applying probability, statistics, and base rates to evaluate claims | "What do the numbers actually say?" |
| Perspective taking | Considering who benefits from the claim and what they might omit | "Who is saying this, and why?" |
| Historical comparison | Using reference classes and analogies to contextualize claims | "What happened when similar claims were made before?" |
| Synthesis | Integrating multiple evidence sources into a coherent assessment | "What does the full picture look like?" |

These skills are not quantum-computing-specific. They apply identically to evaluating claims about fusion energy, artificial general intelligence, cryptocurrency, gene therapy, or any technology where hype may outpace reality.

The Hype Detection Checklist

The hype detection checklist is a practical screening tool for evaluating technology claims. It consists of 12 yes/no questions, where each "yes" answer increases the probability that a claim is overhyped.

| # | Question | If Yes → Hype Signal |
|---|---|---|
| 1 | Does the claim use vague timelines ("within a decade," "in the near future")? | Avoids accountability |
| 2 | Does it promise to solve problems across many unrelated domains? | Extraordinary breadth claim |
| 3 | Are specific, falsifiable predictions absent? | Cannot be proven wrong |
| 4 | Does the claim rely on "just engineering" to dismiss fundamental barriers? | Minimizes physics constraints |
| 5 | Is the evidence primarily demonstrations rather than commercial products? | No market validation |
| 6 | Are metrics presented without classical baselines for comparison? | Cannot assess advantage |
| 7 | Does the company/lab generating the claim also benefit financially from it? | Conflict of interest |
| 8 | Are skeptics dismissed as "uninformed" rather than engaged with evidence? | Suppressed balancing loop |
| 9 | Has the technology failed to meet its own past timeline predictions? | Track record of missed targets |
| 10 | Is the claimed market size based on "if it works" rather than demonstrated demand? | Hypothetical market |
| 11 | Are comparisons drawn to historically successful technologies (transistor, internet) without structural evidence? | Appeal to false analogy |
| 12 | Does investment continue to grow despite absence of commercial milestones? | Disconnected from results |

Scoring: Count the "yes" answers.

  • 0-3: Claim may be legitimate; investigate further
  • 4-6: Significant hype risk; demand specific evidence
  • 7-9: Likely overhyped; apply deep skepticism
  • 10-12: Almost certainly overhyped; treat as speculative at best

Applied to quantum computing, the score is typically 10-12 out of 12. Most claims about quantum computing's commercial potential trigger every item on this checklist.
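
The checklist translates directly into a scoring routine. Here is a minimal Python sketch, with question text abbreviated from the table above; the function and variable names are illustrative, not a standard tool:

```python
# Minimal sketch of the 12-item hype detection checklist as a scoring
# function. Question text is abbreviated from the table above.

HYPE_QUESTIONS = [
    "Vague timelines ('within a decade')?",
    "Promises across many unrelated domains?",
    "Specific, falsifiable predictions absent?",
    "'Just engineering' dismisses fundamental barriers?",
    "Evidence mainly demos, not commercial products?",
    "Metrics lack classical baselines?",
    "Claimant benefits financially from the claim?",
    "Skeptics dismissed rather than engaged?",
    "Past timeline predictions missed?",
    "Market size based on 'if it works'?",
    "False analogies to transistor/internet?",
    "Investment grows despite no commercial milestones?",
]

def score_claim(answers: list[bool]) -> tuple[int, str]:
    """Count 'yes' answers and map the total onto the risk bands above."""
    if len(answers) != len(HYPE_QUESTIONS):
        raise ValueError("need one yes/no answer per checklist question")
    score = sum(answers)
    if score <= 3:
        band = "Claim may be legitimate; investigate further"
    elif score <= 6:
        band = "Significant hype risk; demand specific evidence"
    elif score <= 9:
        band = "Likely overhyped; apply deep skepticism"
    else:
        band = "Almost certainly overhyped; treat as speculative at best"
    return score, band

# A claim that triggers every item, as quantum computing claims typically do:
print(score_claim([True] * 12))
```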

Red Flags in Technology Claims

Beyond the checklist, experienced technology evaluators learn to recognize specific red flags — patterns of language and behavior that correlate with overhyped or fraudulent claims.

Language red flags:

  • "Quantum advantage" or "quantum supremacy" without specifying over what classical method
  • "Exponential speedup" without naming the specific problem and comparing to the best classical algorithm
  • Revenue projections that conflate quantum sensing, QKD, and quantum computing under "quantum technology"
  • Roadmaps that show linear progress toward a discontinuous goal
  • "We're at an inflection point" repeated annually for a decade

Behavioral red flags:

  • Company claims breakthrough, then delays peer-reviewed publication
  • Announcements timed to funding rounds or stock option vesting dates
  • Refusal to allow independent benchmarking against classical alternatives
  • Executive team backgrounds heavy in marketing/finance, light in physics/engineering
  • Pivot from "general-purpose quantum computer" to "quantum-inspired classical algorithms" without acknowledging the change

Structural red flags:

  • No paying customers for the core technology (computation)
  • Revenue comes from consulting, education, or cloud access fees rather than computation results
  • Government contracts awarded on "strategic importance" rather than demonstrated capability
  • Partnerships announced with logos but no disclosed deliverables or success criteria

Fermi's Tip

Red flags do not prove a claim is false — they indicate elevated risk. A single red flag might be coincidental. Three or four together should trigger deep investigation. Seven or more should trigger the assumption that the claim is unreliable until proven otherwise. Think of red flags as Bayesian evidence: each one shifts the probability toward "overhyped."

How to Read a Press Release

Press releases are marketing documents, not scientific communications. They are written to maximize positive coverage and minimize critical examination. Reading a press release critically requires a specific decoding technique.

Step 1: Identify what is claimed. Strip away adjectives and qualifiers. What specific, testable claim is being made? "We achieved quantum advantage on a commercially relevant problem" contains a testable claim. "We are making exciting progress toward the quantum future" does not.

Step 2: Identify what is omitted. The most important information in a press release is what it does not say. Look for missing elements:

  • Classical baseline comparison (what is the best classical solution?)
  • Error rates achieved (not just qubit count)
  • Cost per operation (not just "access to our quantum cloud")
  • Timeline accuracy (did they meet their previous predictions?)
  • Independent verification (has anyone outside the company confirmed the result?)

Step 3: Follow the money. Who funded the research? Who benefits from the claim? Is the press release timed to a funding round, IPO, or government budget cycle?

Step 4: Check the track record. Find the company's press releases from 2, 5, and 10 years ago. Were their previous predictions accurate? If in 2018 they predicted useful quantum computation by 2023, and it didn't happen, their current predictions carry less weight.

Science Journalism Problems

Science journalism serves as the primary channel through which technology claims reach the public, investors, and policymakers. Unfortunately, structural problems in science journalism systematically amplify hype.

| Problem | Mechanism | Effect on QC Coverage |
|---|---|---|
| Incentive misalignment | Exciting stories get clicks; cautious stories don't | Breathless "breakthrough" coverage dominates |
| Source dependence | Journalists rely on researchers for quotes | Sources have financial stake in positive coverage |
| Technical illiteracy | Most journalists lack physics background | Cannot evaluate claims independently |
| False balance | "Both sides" journalism treats skeptics as equivalent to proponents | Implicitly frames skepticism as minority view |
| Headline pressure | Editors demand attention-grabbing headlines | Nuanced findings become "Scientists achieve quantum breakthrough!" |
| Speed pressure | First-to-publish incentives override verification | Claims amplified before peer review |

The net effect is a systematic bias toward overstating quantum computing progress. A 2% improvement in gate fidelity becomes "Major quantum computing breakthrough." A laboratory demonstration on a contrived problem becomes "Quantum computers outperform classical supercomputers." The public, investors, and policymakers consume these headlines without the context needed to evaluate them.

The antidote is to read primary sources: the actual peer-reviewed paper (not the press release about the paper), the supplementary materials where limitations are disclosed, and the work of independent scientists who have attempted to reproduce the result.

Base Rate Reasoning

Base rate reasoning is one of the most powerful and underused critical thinking tools. It asks: "Before considering this specific case, what is the historical success rate for similar cases?"

For technology investment claims:

\[ P(\text{success} \mid \text{claim}) = \frac{P(\text{claim} \mid \text{success}) \times P(\text{success})}{P(\text{claim})} \]

The base rate, \(P(\text{success})\), is the historical frequency with which bold technology claims actually pan out. Research on technology forecasting suggests:

  • Transformative technology claims (will change civilization): ~2-5% accuracy over 20 years
  • Specific commercial milestone claims (product by year X): ~10-20% accuracy
  • Order-of-magnitude improvement claims (100x better within a decade): ~5-15% accuracy
  • "Just engineering" claims for fundamental physics problems: ~1-5% accuracy

Applied to quantum computing, the relevant base rate is the success rate of physics-based technologies that require discontinuous breakthroughs before any commercial value emerges. As Chapter 15 established, this base rate is very low — perhaps 5-10% at best for each individual breakthrough, compounding to far less for the ten simultaneous breakthroughs required.
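
To see how the base rate anchors an evaluation before any company-specific evidence arrives, consider a minimal Python sketch. The priors come from the ranges above; the likelihood ratio assigned to an impressive pitch is an assumption invented for illustration:

```python
# Minimal sketch of base rate anchoring. The priors are taken from the
# ranges above; the 2:1 likelihood ratio is a made-up assumption.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes' theorem in odds form: posterior odds = LR x prior odds."""
    prior_odds = prior / (1 - prior)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1 + post_odds)

# A specific commercial-milestone claim has a base rate of roughly 10-20%.
for prior in (0.10, 0.20):
    # Assume a polished pitch is twice as likely when the milestone will
    # actually be met than when it won't (LR = 2).
    print(f"prior {prior:.0%} -> posterior {posterior(prior, 2.0):.0%}")
# prior 10% -> posterior 18%; prior 20% -> posterior 33%
```

Even evidence that doubles the odds leaves the probability well below the implicit near-certainty of a confident pitch, which is the point of anchoring on the base rate first.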

Key Insight

Base rate neglect is one of the most common cognitive errors in technology investment. When an investor hears "quantum computing will transform drug discovery," they evaluate the specific claim without asking: "What percentage of technologies that promised to transform drug discovery actually did?" The answer — historically very low — should anchor the analysis before any company-specific evidence is considered.

Extraordinary Claims and Bayesian Reasoning

Carl Sagan's principle — "extraordinary claims require extraordinary evidence" — is not just a rhetorical device. It is a consequence of Bayesian reasoning.

In Bayesian terms, an extraordinary claim has a very low prior probability. To move the posterior probability to a level that justifies significant investment, the evidence must be correspondingly strong — strong enough to overcome the low prior.

\[ P(\text{viable} \mid \text{evidence}) = \frac{P(\text{evidence} \mid \text{viable}) \times P(\text{viable})}{P(\text{evidence})} \]

If the prior \(P(\text{viable})\) is 0.001 (based on joint probability analysis from Chapter 16), then even very impressive evidence (likelihood ratio of 100:1) only produces:

\[ P(\text{viable} \mid \text{evidence}) = \frac{100 \times 0.001}{100 \times 0.001 + 1 \times 0.999} \approx 0.091 = 9.1\% \]

This means that even if a quantum computing company demonstrates something 100 times more consistent with viability than with failure, the posterior probability of commercial viability only reaches about 9% — because the prior is so low. To reach a 50% probability of viability, the evidence would need to be roughly 1,000 times more consistent with success than with failure.

The practical implication: individual demonstrations, no matter how impressive, cannot overcome the low prior established by fundamental physics analysis. Only a sustained, cumulative pattern of evidence across all ten breakthrough dimensions could shift the probability meaningfully.
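
The arithmetic is easy to verify. A short Python sketch using only the numbers above:

```python
# Sketch verifying the chapter's figures: the posterior from a 0.001
# prior with 100:1 evidence, and the likelihood ratio needed for 50%.

def posterior(prior: float, lr: float) -> float:
    """P(viable | evidence) from a prior and a likelihood ratio."""
    return (lr * prior) / (lr * prior + (1 - prior))

prior = 0.001
print(f"LR 100 -> posterior {posterior(prior, 100):.3f}")  # ~0.091

# Setting posterior = 0.5 and solving lr*p / (lr*p + 1 - p) = 0.5
# gives lr = (1 - p) / p:
required_lr = (1 - prior) / prior
print(f"LR needed for a 50% posterior: {required_lr:.0f}")  # 999
```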

Technology Due Diligence Framework

Technology due diligence is the systematic process of evaluating a technology claim before making an investment or policy decision. The framework below integrates all the tools developed in this course.

Phase 1: Claim Analysis (1-2 hours)

  • Apply the Hype Detection Checklist (score 0-12)
  • Identify red flags in public communications
  • Decode the company's press releases using the four-step method
  • Check the company's prediction track record (Chapter 10)

Phase 2: Technical Assessment (1-2 days)

  • Evaluate Technology Readiness Level (Chapter 10)
  • Assess which of the 10 required breakthroughs have been demonstrated (Chapter 16)
  • Compare claimed performance to best classical alternatives (Chapter 2)
  • Evaluate the continuous improvement pathway (Chapter 15)

Phase 3: Financial Analysis (1-2 days)

  • Calculate expected value using probability framework (Chapter 8)
  • Assess risk-adjusted returns vs. alternatives (Chapter 14)
  • Evaluate the company's revenue sources (computation vs. consulting/grants)
  • Compare to historical reference class (Chapter 15)

Phase 4: Systemic Analysis (1 day)

  • Identify which reinforcing loops are active (Chapter 13)
  • Assess cognitive biases that may affect your own judgment (Chapter 11)
  • Apply the "anonymous claim test" (Chapter 15)
  • Consider opportunity cost of this investment vs. alternatives (Chapter 14)

Phase 5: Synthesis and Recommendation (half day)

  • Integrate all findings into a written assessment
  • Assign an overall confidence level (high/medium/low/very low)
  • Recommend allocation level based on risk-adjusted analysis
  • Identify specific milestones that would change the assessment
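
One way to keep the five phases organized is a single record per evaluated technology. The sketch below is illustrative only: the field names, thresholds, and confidence mapping are assumptions, not a standard framework.

```python
# Minimal sketch of a record that accumulates the five phases' outputs.

from dataclasses import dataclass, field

@dataclass
class DueDiligenceRecord:
    technology: str
    hype_score: int                        # Phase 1: checklist score, 0-12
    red_flags: list[str] = field(default_factory=list)          # Phase 1
    trl: int = 1                           # Phase 2: Technology Readiness Level
    breakthroughs_demonstrated: int = 0    # Phase 2: of the 10 required
    expected_value: float = 0.0            # Phase 3: risk-adjusted EV per $1
    active_hype_loops: list[str] = field(default_factory=list)  # Phase 4

    def confidence(self) -> str:
        """Phase 5: a crude synthesis of findings into a confidence level."""
        if self.hype_score >= 10 or self.expected_value < 0:
            return "very low"
        if self.hype_score >= 7 or self.breakthroughs_demonstrated < 5:
            return "low"
        if self.hype_score >= 4:
            return "medium"
        return "high"

qc = DueDiligenceRecord("quantum computing", hype_score=11,
                        breakthroughs_demonstrated=0, expected_value=-0.9)
print(qc.confidence())  # very low
```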

Applying Skepticism Broadly: Three Comparisons

The critical thinking framework developed for quantum computing applies directly to other technology domains experiencing similar hype dynamics.

Fusion Energy Hype Comparison

Fusion energy shares quantum computing's structural profile: validated underlying physics, enormous investment, repeated timeline failures, and the "always 30 years away" pattern.

| Feature | Quantum Computing | Fusion Energy |
|---|---|---|
| Physics validated | Yes (small scale) | Yes (small scale, NIF 2022) |
| Commercial product | No | No |
| Timeline accuracy | Consistently missed | Consistently missed ("30 years away" since 1960) |
| "Just engineering" claims | Yes | Yes |
| Discontinuous breakthrough needed | Yes (error correction) | Yes (sustained net energy gain) |
| Hype Detection Checklist score | 10-12/12 | 9-11/12 |

AGI Hype Comparison

Artificial general intelligence (AGI) claims have surged since 2023, with some companies predicting human-level AI "within 5 years." The hype patterns mirror quantum computing.

| Feature | Quantum Computing | AGI |
|---|---|---|
| Vague timelines | "Within a decade" | "Within 5 years" |
| Moving goalposts | "Quantum supremacy" redefined | "AGI" redefined repeatedly |
| Conflated metrics | Qubit count ≠ useful computation | Benchmark scores ≠ general intelligence |
| Financial incentives | Companies valued on promises | Companies valued on promises |
| Expert skepticism suppressed | Physicists marginalized | AI safety researchers marginalized |
| Hype Detection Checklist score | 10-12/12 | 8-10/12 |

Autonomous Vehicle Comparison

Self-driving cars provide a partially resolved case study: the hype cycle peaked around 2017-2019, predictions of "full autonomy by 2020" failed, and the industry has quietly retreated to geofenced, limited deployments.

| Feature | Quantum Computing | Autonomous Vehicles |
|---|---|---|
| Original timeline promise | "Useful by 2020" (made in 2015) | "Full autonomy by 2020" (made in 2016) |
| Actual outcome by target date | No useful computation | No full autonomy |
| Response to missed timeline | Move to 2030, increase investment | Retreat to limited geofencing |
| Correction mechanism | Not yet active | Partially active (visible accidents, regulation) |
| Hype Detection Checklist score | 10-12/12 | 7-9/12 (declining as reality sets in) |

The autonomous vehicle case is instructive because it shows what a hype correction looks like in practice: timelines are quietly extended, ambitions are scaled back, and the narrative shifts from "will transform everything" to "useful in specific limited contexts."

Bias Alert

When you see the same hype pattern across multiple technologies — vague timelines, dismissed skeptics, moving goalposts, investment disconnected from results — this is not coincidence. It is a systemic feature of how the technology-investment-media ecosystem operates. The critical thinker's job is to recognize the pattern and apply the same rigorous framework regardless of whether the topic is quantum computing, fusion, AGI, or the next hyped technology.

The Skeptical Inquiry Method

The complete skeptical inquiry method synthesizes all the tools from this course into a repeatable process for evaluating any bold technology claim:

  1. Define the claim precisely. Strip marketing language. What specific, testable prediction is being made?
  2. Establish the base rate. What is the historical success rate for similar claims?
  3. Apply the hype detection checklist. Score 0-12.
  4. Identify red flags. Language, behavioral, and structural.
  5. Assess the evidence quality. Peer-reviewed? Independently replicated? Commercially validated?
  6. Apply Bayesian reasoning. Given the prior (base rate) and the evidence, what is the posterior probability?
  7. Check for cognitive biases. Which biases might be affecting your own judgment? (Confirmation bias if you want the technology to work; authority bias if a respected figure endorses it)
  8. Evaluate systemic dynamics. Which feedback loops are active? Is the balancing loop suppressed?
  9. Compare to historical reference class. Which past technologies does this most resemble — successes or failures?
  10. Synthesize. Integrate all findings into an evidence-based assessment.

Diagram: Skeptical Inquiry Method Flowchart

Skeptical Inquiry Method Interactive Flowchart

Type: workflow
sim-id: skeptical-inquiry-flowchart
Library: p5.js
Status: Specified

Bloom Taxonomy: Apply (L3)
Bloom Verb: execute, implement, apply
Learning Objective: Students will apply the ten-step skeptical inquiry method by walking through an interactive flowchart that guides them through each analytical step with prompts, examples, and decision points.

Instructional Rationale: An interactive flowchart is appropriate for the Apply objective because students must practice executing the method step by step, making decisions at each stage. A static diagram would be Remember-level; interactivity forces application.

Canvas layout:

  • Main area (75% width): Vertical flowchart with 10 numbered step boxes
  • Side panel (25% width): Context panel showing the current step's details, examples, and guidance

Flowchart elements:

  • 10 process boxes (rounded rectangles), vertically connected by arrows
  • Each box contains: step number, step name, and a one-line description
  • Color coding:
      • Steps 1-3 (Claim analysis): Indigo #3F51B5
      • Steps 4-6 (Evidence assessment): Orange #FF7043
      • Steps 7-8 (Bias/system check): Purple #7B1FA2
      • Steps 9-10 (Synthesis): Green #388E3C

Interactive features:

  • Click any step box: side panel updates with:
      • Detailed instructions for that step
      • Example applied to quantum computing
      • Example applied to a second technology (fusion or AGI)
      • Key questions to ask at this step
      • Common mistakes at this step
  • "Walk Through" button: auto-advances through steps with a 5-second pause on each, highlighting the active step
  • "Apply to Custom Claim" mode: user enters a technology claim in a text input; the side panel adapts prompts to reference the custom claim
  • Progress indicator at bottom: shows which steps have been visited

Decision diamond after Step 3:

  • If Hype Detection score ≥ 10: arrow to "HIGH RISK" callout box (red), then continues to Step 4
  • If Hype Detection score 4-9: arrow to "MODERATE RISK" callout (yellow), then continues to Step 4
  • If Hype Detection score 0-3: arrow to "LOW RISK" callout (green), then continues to Step 4

Final output (after Step 10):

  • Summary box: "Assessment: [High/Moderate/Low] confidence in claim viability"
  • "Recommended allocation: [percentage range] of portfolio"

Background: aliceblue
Canvas: Responsive width, 550px height

Implementation: p5.js with clickable boxes, side panel rendering, auto-advance timer

Practical Application: Eight Capstone Exercises

The following eight exercises apply the critical thinking tools developed throughout this course. Each exercise corresponds to a real-world task that technology analysts, investors, and policymakers must perform.

Exercise 1: Writing an Executive Brief

An executive brief distills complex technical analysis into a 1-2 page document for senior decision-makers who lack technical background but must make resource allocation decisions.

Structure of a technology executive brief:

  1. Bottom line up front (BLUF): One sentence stating the conclusion
  2. Recommendation: Specific action with rationale (3-4 sentences)
  3. Key findings: 3-5 bullet points with the most important evidence
  4. Risk assessment: Probability of success, comparison to alternatives, downside exposure
  5. What would change our assessment: Specific, observable milestones

Sample BLUF for a QC Executive Brief

"Based on joint probability analysis of ten required technical breakthroughs, historical reference class comparison, and risk-adjusted return calculation, we recommend limiting quantum computing allocation to 5-8% of the technology portfolio, reallocating the remainder to quantum sensing (35%), classical AI hardware (30%), and diversified emerging technologies (27-30%)."

Exercise 2: Conducting Red Team Analysis

Red team analysis assigns a team to find every reason a plan or claim might fail. For quantum computing roadmaps, the red team identifies:

  • Which of the 10 breakthroughs the roadmap assumes without evidence
  • Which timeline predictions have already been missed
  • Which cost estimates are unrealistic
  • Which competitive threats (classical computing improvements) are ignored
  • Which cognitive biases might be affecting the roadmap authors

The red team's output is a structured critique document that decision-makers can weigh against the proponents' case.

Exercise 3: Building a Claims Tracker

A claims tracker is a database of specific, dated, falsifiable predictions made by quantum computing companies, researchers, and policymakers. It records:

| Field | Description | Example |
|---|---|---|
| Date of claim | When the prediction was made | 2020-01-15 |
| Source | Who made the prediction | CEO of Company X |
| Specific claim | Exact, falsifiable prediction | "Quantum advantage on a commercial problem by 2024" |
| Target date | When the prediction should come true | 2024-12-31 |
| Status | Met / Missed / Pending | Missed |
| Evidence | What actually happened | No commercial quantum advantage demonstrated |
| Accountability | Response from the source | Quietly revised to 2028 |

Maintaining a claims tracker over time reveals patterns: which organizations consistently miss their predictions, by how much, and whether they acknowledge or quietly revise their failures. This historical record is the single most powerful antidote to the "moving goalpost" problem.
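
In practice, a claims tracker can be as simple as a list of records and a summary statistic. A minimal Python sketch whose fields mirror the table above, seeded with the table's own example entry:

```python
# Minimal sketch of a claims tracker plus one accountability metric:
# the fraction of resolved predictions that were missed.

from datetime import date

claims = [
    {"date": date(2020, 1, 15), "source": "CEO of Company X",
     "claim": "Quantum advantage on a commercial problem by 2024",
     "target": date(2024, 12, 31), "status": "Missed",
     "evidence": "No commercial quantum advantage demonstrated",
     "accountability": "Quietly revised to 2028"},
    # ... one record per dated, falsifiable prediction
]

def miss_rate(records: list[dict]) -> float:
    """Share of resolved (non-pending) predictions that were missed."""
    resolved = [r for r in records if r["status"] != "Pending"]
    if not resolved:
        return 0.0
    return sum(r["status"] == "Missed" for r in resolved) / len(resolved)

print(f"Miss rate: {miss_rate(claims):.0%}")
```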

Exercise 4: Evaluating a Pitch Deck

Quantum computing startup pitch decks follow predictable patterns. Critical evaluation requires checking each slide against the analytical framework from this course:

  • "The Problem" slide: Is the problem real? Does it require a quantum solution? Can classical methods solve it?
  • "Our Solution" slide: Which of the 10 breakthroughs does it assume? At what TRL is the technology?
  • "Market Size" slide: Is the market based on demonstrated demand or "if it works" projections?
  • "Competitive Advantage" slide: Is the advantage quantum-specific, or could a classical company replicate it?
  • "Team" slide: Apply the charismatic founder risk assessment. What is the team's physics expertise vs. marketing expertise?
  • "Financial Projections" slide: Apply base rate reasoning. What percentage of companies with similar projections actually achieve them?
  • "Timeline" slide: Compare to the company's previous timelines. Have they met past milestones?

Exercise 5: Writing a Critical Review

A critical review evaluates a quantum computing paper, announcement, or policy document using structured analytical criteria. The review should address:

  1. What is claimed? (precise, testable formulation)
  2. What evidence supports the claim? (quality, quantity, independence)
  3. What evidence contradicts the claim? (including evidence the authors may have omitted)
  4. What assumptions are required? (explicit and implicit)
  5. How does the claim compare to the historical reference class? (Chapter 15)
  6. What cognitive biases might affect the reader's evaluation? (Chapter 11)
  7. What specific, observable outcome would falsify the claim?

Exercise 6: Portfolio Allocation Analysis

Building on Chapter 14's portfolio framework, a portfolio allocation analysis applies quantitative methods to construct an optimal technology investment portfolio. The analysis should include:

  • Expected return and risk for each technology category
  • Correlation matrix between categories
  • Efficient frontier calculation (or simplified version)
  • Recommended allocation with confidence intervals
  • Sensitivity analysis showing how the allocation changes under different probability assumptions

The key output is a specific allocation table with justification, similar to the one developed in Chapter 14 but customized to the investor's risk tolerance and time horizon.
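
A small sensitivity analysis (the last item in the list above) makes this concrete. The sketch below assumes an illustrative 100x payoff and total-loss downside, not figures from the chapter:

```python
# Minimal sketch of one sensitivity analysis: how the expected value of
# a technology bet responds to the assumed breakthrough probability.

def expected_value(p_success: float, payoff: float, loss: float) -> float:
    """EV per dollar: win `payoff` with prob p_success, else lose `loss`."""
    return p_success * payoff - (1 - p_success) * loss

for p in (0.001, 0.01, 0.05, 0.10):
    ev = expected_value(p, payoff=100.0, loss=1.0)
    print(f"P(success) = {p:>5.1%}   EV per $1: {ev:+.2f}")
# Under these assumptions the bet breaks even near P(success) = 1%,
# a threshold well above the joint probability estimates from Chapter 16.
```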

Key Insight

The difference between amateur and professional technology evaluation is not knowledge of the technology — it is methodology. An amateur asks "Will quantum computing work?" and guesses. A professional applies the skeptical inquiry method, constructs a claims tracker, calculates joint probabilities, compares to reference classes, and arrives at a quantified probability with explicit assumptions. The methodology produces better decisions even when the analyst lacks deep physics knowledge, because it forces structured reasoning over intuition.

Exercise 7: National QC Strategy Critique

Governments worldwide have published national quantum computing strategies. Applying the analytical tools from this course, a strategy critique evaluates:

  • Goal alignment: Are the strategy's goals achievable, or do they assume breakthroughs that may never occur?
  • Budget allocation: What percentage goes to quantum computing vs. quantum sensing and other proven quantum technologies?
  • Timeline realism: Do the strategy's milestones match historical rates of progress?
  • Opportunity cost: What alternative uses of the same budget would produce better risk-adjusted returns?
  • Accountability mechanisms: Does the strategy include metrics for deciding when to reduce or redirect investment?
  • Geopolitical rationality: Is the strategy driven by evidence of competitive threat, or by the geopolitical arms race loop (Chapter 13)?

The strongest critiques propose specific, constructive alternatives rather than simply identifying problems. A national quantum strategy that reallocated 60% of its quantum computing budget to quantum sensing, post-quantum cryptography, and AI hardware — while maintaining 40% for fundamental quantum computing research — would likely produce far better returns for taxpayers.

Exercise 8: Board-Level Questions About Quantum Computing

Board members and senior executives need concise, penetrating questions to cut through hype. The following questions, derived from this course, should be asked of any company, investment, or policy involving quantum computing:

  1. "What is the specific, commercially relevant problem this technology solves better than the best classical alternative — and has this been independently verified?" (Chapters 2, 7)
  2. "What is the expected value of this investment using explicit probability estimates for each required breakthrough?" (Chapters 8, 16)
  3. "Which of the ten required breakthroughs have you demonstrated at commercially relevant scale — not at laboratory scale?" (Chapter 16)
  4. "What were your predictions for this year five years ago, and which did you meet?" (Chapters 4, 10)
  5. "If we redirect 60% of this budget to quantum sensing and classical AI, what would the risk-adjusted return comparison look like?" (Chapter 14)
  6. "What specific, observable milestone would cause you to recommend reducing investment?" (Chapter 10)
  7. "What percentage of your revenue comes from actual quantum computation vs. consulting, grants, and cloud access fees?" (Chapter 9)

These seven questions, asked consistently and with insistence on quantitative answers, are sufficient to expose the gap between quantum computing marketing and quantum computing reality.

Diagram: Hype Detection Checklist Tool

Interactive Hype Detection Checklist Tool

Type: microsim
sim-id: hype-detection-tool
Library: p5.js
Status: Specified

Bloom Taxonomy: Evaluate (L5)
Bloom Verb: assess, judge, critique
Learning Objective: Students will assess any technology claim against the 12-item hype detection checklist by answering yes/no questions and receiving a scored risk assessment with actionable recommendations.

Instructional Rationale: An interactive checklist tool is appropriate for the Evaluate objective because students must make judgments on each criterion applied to a specific claim. The tool provides immediate feedback (score, risk level, recommendation), reinforcing the evaluation skill through practice.

Canvas layout:

  • Header (10% height): Title and technology name input field
  • Main area (70% height): Scrollable checklist with 12 yes/no questions
  • Footer (20% height): Score display, risk meter, and recommendation text

Interactive controls:

  • Text input at top: "Enter technology name:" (default: "Quantum Computing")
  • 12 checkbox rows, each containing:
      • Question number and text (from the Hype Detection Checklist table above)
      • Yes/No toggle button
      • When clicked, a brief explanation appears below the question in lighter text
  • "Apply to Quantum Computing" preset button: checks all 12 boxes
  • "Clear All" button: unchecks all boxes
  • "Apply to [Classical AI / Fusion / Blockchain]" preset buttons with realistic pre-checked patterns

Score display (footer):

  • Large numerical score: "Score: X / 12"
  • Visual risk meter (horizontal bar):
      • 0-3: Green zone, label "Low Hype Risk"
      • 4-6: Yellow zone, label "Moderate Hype Risk"
      • 7-9: Orange zone, label "High Hype Risk"
      • 10-12: Red zone, label "Very High Hype Risk"
  • Recommendation text that updates based on score:
      • 0-3: "This claim shows minimal hype indicators. Standard due diligence is appropriate."
      • 4-6: "This claim has significant hype markers. Demand specific, falsifiable evidence before investing."
      • 7-9: "This claim is likely overhyped. Apply deep skepticism and compare to historical reference class."
      • 10-12: "This claim triggers nearly all hype indicators. Treat as highly speculative. Limit allocation to what you can afford to lose entirely."

Visual feedback:

  • Each checked "Yes" box turns orange; "No" stays green
  • The risk meter animates smoothly as boxes are checked/unchecked
  • Confetti animation if score = 0 (a genuinely non-hyped technology is cause for celebration)

Background: aliceblue
Canvas: Responsive width, 550px height

Implementation: p5.js with checkbox rendering, preset data arrays, animated meter, text rendering

Making Better Technology Bets

The final skill this course teaches is not how to avoid all technology investment — it is how to make better technology bets. Skepticism is not the same as cynicism. A well-calibrated skeptic does not reject all claims; they allocate their confidence — and their capital — in proportion to the evidence.

The principles for making better technology bets are:

  1. Demand evidence proportional to the claim. Small claims need modest evidence. Revolutionary claims need revolutionary evidence. Quantum computing's claims are extraordinary; the evidence remains ordinary.

  2. Track predictions, not promises. Build and maintain claims trackers. Judge technologies by their track record, not their roadmap.

  3. Prefer continuous improvement pathways. Technologies that generate revenue at every stage of development are safer bets than those requiring discontinuous breakthroughs.

  4. Diversify across the probability spectrum. Allocate most capital to high-probability, moderate-return technologies. Allocate a small amount to low-probability, high-return moonshots. Never let moonshots dominate the portfolio.

  5. Account for opportunity cost. Every dollar invested in quantum computing is a dollar not invested in quantum sensing, AI hardware, or other proven technologies. Compare, always compare.

  6. Update continuously. Bayesian reasoning demands that you update your beliefs as new evidence arrives — in both directions. If quantum computing achieves a genuine breakthrough, increase your allocation. If it misses another milestone, decrease it.

  7. Separate the technology from the narrative. Technologies succeed or fail based on physics and economics, not on the eloquence of their advocates. Apply the anonymous claim test relentlessly.

Diagram: Technology Bet Decision Framework

Technology Bet Decision Framework

Type: workflow
sim-id: tech-bet-decision-framework
Library: p5.js
Status: Specified

Bloom Taxonomy: Create (L6)
Bloom Verb: design, formulate, construct
Learning Objective: Students will design their own technology evaluation process by navigating an interactive decision framework that integrates all analytical tools from the course, formulating a structured investment recommendation.

Instructional Rationale: A decision framework tool is appropriate for the Create objective because students must synthesize all course tools into a coherent evaluation process and produce an original recommendation. The interactive format forces active construction rather than passive recall.

Canvas layout:

  • Center (80% width): Decision tree flowchart with branching paths
  • Right panel (20% width): Running score and recommendation accumulator

Flowchart structure:

  • Entry node: "Evaluate Technology Claim"
  • Decision 1: "Hype Detection Score?" → branches: 0-3 / 4-6 / 7-9 / 10-12
  • Decision 2: "TRL Level?" → branches: 1-3 / 4-6 / 7-9
  • Decision 3: "Has continuous improvement pathway?" → Yes / No
  • Decision 4: "Intermediate commercial products?" → Yes / No
  • Decision 5: "Joint probability of required breakthroughs?" → >10% / 1-10% / <1%
  • Decision 6: "Risk-adjusted return vs. alternatives?" → Superior / Comparable / Inferior

Terminal nodes (color-coded):

  • "Strong Investment Case" (green): High allocation recommended (20-40%)
  • "Moderate Case" (yellow): Moderate allocation (10-20%)
  • "Speculative Case" (orange): Small allocation (5-10%)
  • "Avoid" (red): Minimal or zero allocation (0-5%)

Interactive features:

  • Click each decision node: presents the question with guidance on how to answer
  • After answering, the selected branch highlights and the chart advances to the next decision
  • Right panel accumulates: "Evidence strength: X/10", "Risk level: [Low/Medium/High/Very High]", "Suggested allocation: X-Y%"
  • "Apply to Quantum Computing" walkthrough: auto-navigates using the course's analysis
  • "Start Fresh" button: resets for a new technology
  • At terminal node: display full recommendation summary with citations to relevant chapters

Color scheme: Indigo decisions, orange branches, green/yellow/red terminal nodes
Background: aliceblue
Canvas: Responsive width, 500px height

Implementation: p5.js with interactive flowchart nodes, click handlers, state machine, panel rendering

Course Conclusion

This course has equipped you with something more valuable than knowledge about quantum computing: a methodology for evaluating any bold technology claim. The specific facts about qubit error rates and coherence times will become outdated. The analytical framework — base rate reasoning, joint probability analysis, systems thinking, hype detection, reference class comparison, and Bayesian updating — will remain useful for the rest of your career.

The quantum computing industry asks you to believe extraordinary claims on the basis of ordinary evidence, to mistake announcements for achievements, and to accept "just engineering" as an answer to fundamental physics problems. You now have the tools to decline that request — politely, rigorously, and with the numbers to back it up.

Excellent Investigative Work!

Congratulations, fellow investigator — you've completed the course! You now possess a complete critical thinking toolkit for evaluating technology claims: hype detection checklists, red flag identification, Bayesian reasoning, systems analysis, historical comparison, and quantitative risk assessment. These tools work on quantum computing, fusion energy, AGI, autonomous vehicles, and whatever comes next. The most important thing Fermi can tell you is this: never stop asking "But does the math check out?" Because if it doesn't, nothing else matters. Outstanding work — go forth and investigate!

Review Questions

Question 1: Apply the 12-item hype detection checklist to a non-quantum technology of your choice. What score does it receive, and what does this imply about investment risk?

Answers will vary by chosen technology. A strong answer selects a specific technology (e.g., fusion energy, brain-computer interfaces, blockchain), evaluates each of the 12 questions with specific evidence (not just yes/no but why), arrives at a numerical score, and interprets the score using the risk categories (0-3 low, 4-6 moderate, 7-9 high, 10-12 very high). For example, fusion energy might score 9-10/12: vague timelines (yes — "30 years away"), promises across domains (yes — energy, desalination, space propulsion), no falsifiable predictions (mostly yes), "just engineering" claims (yes), no commercial products (yes), metrics without classical baselines (yes — "net energy gain" without economic context), financial conflicts (yes), missed timelines (yes — consistently for 60 years).

Question 2: Explain how base rate reasoning should change an investor's assessment of a quantum computing company's claim that it will achieve 'quantum advantage on a commercial problem by 2028.'

Without base rate reasoning, an investor might evaluate this claim based solely on the company's technology and team, concluding (perhaps) that it seems plausible. With base rate reasoning, the investor first asks: "What is the historical success rate for technology companies that claimed they would achieve a breakthrough by a specific date?" Research suggests this rate is approximately 10-20% for specific commercial milestones, and lower (5-10%) for milestones requiring fundamental technical breakthroughs. Additionally, this specific company (and the field generally) has a track record of missing previous timeline predictions. The base rate therefore establishes a prior of roughly 5-15% before any company-specific evidence is considered. The company's impressive technology and team can shift this prior upward, but not by as much as the investor's intuition suggests — perhaps to 15-25%. This is dramatically lower than the implicit >50% probability that an uninformed investor might assign based on the company's confident presentation.

Question 3: You are advising a corporate board that has been presented with a proposal to invest $50 million in quantum computing. Using the seven board-level questions from this chapter, draft two questions you would prioritize and explain why.

Priority Question 1: "What is the expected value of this investment using explicit probability estimates for each required breakthrough?" This forces the proponents to quantify their assumptions rather than relying on qualitative optimism. When the board sees that the expected value is negative (as the joint probability analysis from Chapter 16 demonstrates), the conversation shifts from "should we invest?" to "how much can we afford to lose?" Priority Question 2: "If we redirect 60% of this budget to quantum sensing and classical AI, what would the risk-adjusted return comparison look like?" This introduces the opportunity cost framework, forcing the board to compare the proposed investment against its best alternative rather than evaluating it in isolation. These two questions are prioritized because they produce quantitative, comparable outputs that enable evidence-based decision-making. The other questions are valuable but produce qualitative insights that are easier for proponents to deflect.

Question 4: Describe three specific ways that science journalism structural problems amplify quantum computing hype, and propose one concrete reform that could mitigate the problem.

Three structural problems: (1) Incentive misalignment: Exciting "breakthrough" stories generate clicks and advertising revenue, while cautious "technology remains far from commercial viability" stories do not. This creates a selection effect where only the most optimistic framing of quantum computing developments gets published. (2) Source dependence: Journalists rely on quantum computing researchers and company representatives as primary sources, creating a feedback loop where financially interested parties control the narrative. Independent physicists who are skeptical rarely provide quotable "exciting" soundbites. (3) Technical illiteracy: Most science journalists lack the physics background to independently evaluate claims about qubit counts, error rates, or quantum advantage, so they report claims at face value rather than contextualizing them against the full set of required breakthroughs. Proposed reform: Require major science publications to include a "Claims Context Box" with every quantum computing story, containing: (a) the company's previous timeline predictions and whether they were met, (b) the best classical performance on the same problem, and (c) how many of the 10 required breakthroughs this advance addresses. This low-cost editorial requirement would dramatically improve reader calibration.

Question 5: Using the Bayesian reasoning framework, calculate how strong evidence would need to be to shift the probability of quantum computing viability from 0.1% (the joint probability estimate) to 50%. What does this imply about what proponents would need to demonstrate?

Starting from Bayes' theorem: \(P(\text{viable} \mid E) = \frac{L \times P(\text{viable})}{L \times P(\text{viable}) + P(\text{not viable})}\) where \(L\) is the likelihood ratio (how much more likely the evidence is under viability than under non-viability). Setting \(P(\text{viable} \mid E) = 0.50\), \(P(\text{viable}) = 0.001\), and \(P(\text{not viable}) = 0.999\): \(0.50 = \frac{L \times 0.001}{L \times 0.001 + 0.999}\). Solving: \(L \times 0.001 = 0.999\), so \(L = 999\). The evidence would need to be approximately 1,000 times more consistent with viability than with non-viability. This means proponents would need to demonstrate something that is essentially impossible to explain unless quantum computing is commercially viable — for example, actually solving a commercially relevant problem faster and cheaper than the best classical alternative, with independent verification. Individual laboratory demonstrations, qubit count milestones, or theoretical algorithm improvements do not approach this evidence threshold, because they are all consistent with the non-viable hypothesis (the technology advances incrementally but never crosses the viability threshold).