
Historical Parallels and Lessons

Summary

This chapter compares quantum computing investment to other physics-based technology bets throughout history, both successful and failed. We examine successful investments — transistors, lasers, fiber optics, GPS atomic clocks, and MRI — to understand what made these technologies commercially viable. We then examine failures — cold fusion, Theranos (as an information asymmetry case study), and the Concorde — to identify the warning signs that distinguish genuine breakthroughs from speculative dead ends. Students will understand the role of charismatic founders, information asymmetry, and opportunity cost in evaluating technology investments.

Concepts Covered

This chapter covers the following 12 concepts from the learning graph:

  1. Successful Physics Bets
  2. Transistor Investment Payoff
  3. Laser Investment Payoff
  4. Fiber Optics Payoff
  5. GPS Atomic Clock Payoff
  6. MRI Investment Payoff
  7. Failed Physics Bets
  8. Cold Fusion Losses
  9. Theranos Lessons
  10. Concorde Economics Failure
  11. Charismatic Founder Risk
  12. When Scientists Mislead

Prerequisites

This chapter builds on concepts from:


Fermi Welcomes You!

Welcome, fellow investigators! In this chapter we leave the quantum computing debate temporarily and look backward — at historical physics-based technology bets that succeeded brilliantly and others that failed spectacularly. History doesn't repeat exactly, but the structural patterns of success and failure are remarkably consistent. By understanding what made transistors succeed and cold fusion fail, we can evaluate quantum computing against a reference class of comparable technologies. But does the math check out? Let's find out!

Learning Objectives

After completing this chapter, you will be able to:

  • Identify common structural features of physics-based technologies that achieved commercial viability
  • Analyze why specific technology investments (transistors, lasers, fiber optics, GPS, MRI) succeeded
  • Explain why cold fusion, Theranos, and the Concorde failed as commercial technology ventures
  • Recognize the "charismatic founder" risk pattern in technology investment
  • Distinguish between information asymmetry and genuine scientific uncertainty
  • Apply historical reference class analysis to evaluate quantum computing's commercial prospects

The Reference Class Problem

Before examining individual cases, we need to address a methodological question: what is the correct reference class for quantum computing? Proponents compare quantum computing to the transistor — a physics breakthrough that transformed civilization. Skeptics compare it to cold fusion — a physics claim that was never reproduced. The comparison you choose dramatically frames the analysis.

The honest answer is that quantum computing shares features with both categories. The underlying physics is real (unlike cold fusion), but the engineering path to commercial viability is unclear (unlike the transistor). Our task in this chapter is to identify specific, testable features that distinguish the two categories and then evaluate where quantum computing falls.

| Feature | Successful Technologies | Failed Technologies | Quantum Computing |
|---|---|---|---|
| Underlying physics validated | Yes | No or disputed | Yes (small scale) |
| Laboratory demonstration | Yes, early | Unreproducible or marginal | Yes (limited) |
| Continuous improvement pathway | Yes | No | No — requires discontinuous breakthrough |
| Commercial prototype within 10 years of discovery | Yes (most cases) | No | No (after 40+ years) |
| Intermediate commercial products | Yes | No | No |
| Scaling follows known engineering | Yes | Not applicable | Unknown — may be physically impossible |

This table previews the central finding of this chapter: quantum computing's structural profile more closely resembles the failed technologies than the successful ones, primarily because it lacks a continuous improvement pathway and has produced no intermediate commercial products after four decades of research.

Successful Physics Bets

The Transistor: From Lab to Civilization in Two Decades

The transistor, invented at Bell Labs in 1947, is the canonical example of a physics-based technology investment that transformed the world. Within 20 years of its invention, transistors had replaced vacuum tubes in virtually every electronic application and had enabled the computer revolution.

What made the transistor succeed?

  • Immediate practical advantage: Even the first crude transistors were smaller, more reliable, and consumed less power than vacuum tubes
  • Continuous improvement: Each generation of transistors was better than the last — smaller, faster, cheaper — without requiring any fundamental breakthrough beyond the original discovery
  • Scalable manufacturing: Semiconductor fabrication techniques allowed mass production at decreasing cost per unit (Moore's Law)
  • Intermediate products: Transistor radios (1954), integrated circuits (1958), and calculators (1960s) generated revenue that funded further development
  • Clear customer demand: The military, telecommunications, and computing industries immediately recognized transistor applications

The transistor's timeline from invention to commercial dominance was remarkably short:

| Year | Milestone | Time from Invention |
|---|---|---|
| 1947 | Point-contact transistor demonstrated | 0 years |
| 1948 | Junction transistor invented | 1 year |
| 1954 | First transistor radio (Regency TR-1) sold commercially | 7 years |
| 1958 | Integrated circuit invented | 11 years |
| 1965 | Moore's Law articulated based on observed trends | 18 years |
| 1971 | First microprocessor (Intel 4004) | 24 years |

By the time the transistor was 24 years old, it had produced the microprocessor. Quantum computing is over 40 years old (dating from Feynman's 1982 proposal) and has not produced a single commercial product.

Lasers: A Solution Seeking Problems — That Found Them

The laser, first demonstrated in 1960, was famously described as "a solution looking for a problem." Initially, no one knew what lasers would be useful for. This parallel is frequently cited by quantum computing proponents: "We don't know what QC will be used for either, and look how lasers turned out."

The parallel is instructive — but not in the way proponents intend. The laser did find applications, but it did so quickly and through a continuous improvement pathway:

  • 1960: First ruby laser demonstrated
  • 1961: First medical applications (eye surgery)
  • 1963: Semiconductor laser invented (foundation for fiber optics and CD players)
  • 1966: Lasers used in industrial cutting and welding
  • 1969: Laser used to measure Earth-Moon distance (Apollo reflectors)
  • 1974: Barcode scanners using HeNe lasers deployed in retail
  • 1982: Compact disc player (semiconductor laser) becomes consumer product

Within 15 years of invention, lasers were generating commercial revenue in medicine, manufacturing, telecommunications, and consumer electronics. The "solution looking for a problem" found dozens of problems — quickly — because the fundamental technology worked and could be incrementally improved.

Key Insight

The laser analogy actually undermines the quantum computing case. Lasers found commercial applications within 5-15 years because they worked at small scale and could be incrementally improved. Quantum computing has been seeking applications for 40+ years and has found zero commercial uses — not because we haven't looked, but because the technology doesn't yet work at a scale where any application becomes useful. The correct analogy would be: lasers in 1960 are like quantum computers in 1985. By 2000, lasers had transformed multiple industries. By 2025, quantum computers had transformed nothing.

Fiber Optics: Continuous Improvement Drives Adoption

Fiber optic communication was proposed theoretically by Charles Kao in 1966, who predicted that pure glass fibers could carry optical signals over long distances. The first practical low-loss fibers were demonstrated in 1970 by Corning Glass Works.

The fiber optics story illustrates continuous improvement driving commercial adoption:

  • 1970: First low-loss fiber (20 dB/km) — sufficient for short-distance communication
  • 1977: First commercial fiber optic telephone link (Chicago)
  • 1980s: Fiber backbone deployed across US telecommunications network
  • 1988: First transatlantic fiber optic cable (TAT-8)
  • 1990s-2000s: Dense wavelength division multiplexing (DWDM) multiplied capacity by 100x
  • 2010s-2020s: Global internet backbone is almost entirely fiber optic

Each decade brought improvements in fiber purity, laser sources, and multiplexing techniques. Crucially, fiber optics was commercially useful at every stage — the early 20 dB/km fibers were inferior to modern fibers by orders of magnitude but were still better than copper for specific applications. Revenue from commercial deployment funded the R&D that produced the next generation of improvements.

GPS Atomic Clocks: Quantum Technology That Delivers

The Global Positioning System relies on atomic clocks aboard satellites to provide precise timing signals. GPS represents a quantum technology (atomic clocks exploit quantum mechanical transitions) that was successfully developed, deployed, and commercialized by the US government.

GPS followed a clear development pathway:

  • 1973: GPS program initiated by US Department of Defense
  • 1978: First GPS satellite launched
  • 1983: President Reagan opens GPS to civilian use (after KAL 007 shootdown)
  • 1995: Full 24-satellite constellation operational
  • 2000: Selective Availability turned off, enabling full civilian precision
  • 2010s-2020s: GPS estimated to have generated $1.4 trillion in cumulative economic value for the US economy

The GPS atomic clock story reinforces the quantum sensing thesis from Chapter 14: quantum technologies that work with quantum phenomena at small scale (individual atomic transitions) succeed, while quantum technologies that require large-scale entanglement (quantum computing) remain laboratory curiosities.

MRI: From Physics to Medicine in 30 Years

Magnetic resonance imaging (MRI) exploits the quantum mechanical property of nuclear spin to produce detailed images of biological tissue. The underlying physics (nuclear magnetic resonance) was first measured by Rabi in molecular beams in 1938 and first observed in bulk matter by Bloch and Purcell in 1946. The first human MRI scan was performed in 1977, and commercial MRI machines were available by the 1980s.

| Year | Milestone | Time from NMR Discovery |
|---|---|---|
| 1938 | NMR first measured in molecular beams (Rabi) | 0 years |
| 1946 | NMR observed in bulk matter (Bloch, Purcell) | 8 years |
| 1971 | Damadian shows NMR distinguishes tumors from normal tissue | 33 years |
| 1977 | First human MRI scan | 39 years |
| 1984 | FDA approves first commercial MRI machine | 46 years |
| 2003 | Lauterbur and Mansfield receive Nobel Prize for MRI | 65 years |

The MRI timeline is the longest among our successful cases, spanning 46 years from Rabi's first NMR measurement to an FDA-approved commercial product. Quantum computing proponents sometimes cite this as evidence that their technology "just needs more time." But the comparison has a critical flaw: MRI produced intermediate commercial products (NMR spectrometers for chemistry research, generating revenue from the 1950s onward) throughout its development. Quantum computing has no equivalent intermediate product.

Common Features of Successful Physics Bets

Across all five successful technologies, we can identify six structural features that predicted commercial success:

  1. Early practical demonstration: A working prototype existed within 5-10 years of the theoretical idea
  2. Continuous improvement pathway: Each generation was incrementally better without requiring fundamental breakthroughs
  3. Intermediate commercial products: Revenue was generated well before the technology reached its mature form
  4. Scalable manufacturing: The technology could be mass-produced using existing or readily developed industrial processes
  5. Clear customer demand: At least one industry recognized an immediate application
  6. Favorable scaling physics: Larger systems worked better (or at least no worse) than smaller ones

| Technology | Early Demo | Continuous Improvement | Intermediate Products | Scalable Mfg | Customer Demand | Favorable Scaling |
|---|---|---|---|---|---|---|
| Transistor | Yes (1947) | Yes | Yes (radios, ICs) | Yes | Yes (military, telecom) | Yes |
| Laser | Yes (1960) | Yes | Yes (medical, industrial) | Yes | Yes (surgery, manufacturing) | Yes |
| Fiber optics | Yes (1970) | Yes | Yes (telecom links) | Yes | Yes (telecoms) | Yes |
| GPS clocks | Yes (1978) | Yes | Yes (military nav) | Yes | Yes (military, civilian) | Yes |
| MRI | Yes (1977) | Yes | Yes (NMR spectrometers) | Yes | Yes (hospitals) | Yes |
| Quantum computing | Partial | No | No | No | Unclear | No (worse) |

Quantum computing fails on four of six criteria. Most critically, it fails on "favorable scaling physics" — adding more qubits introduces more errors, making larger systems harder to operate rather than easier. This is the opposite of the scaling dynamics that enabled every successful physics technology in our reference class.

Bias Alert

When quantum computing companies say "we're like the transistor in 1948," they're making an implicit claim: that they share the structural features that made the transistor succeed. They don't. The transistor had a continuous improvement pathway from day one. Quantum computing faces a discontinuous barrier (fault-tolerant error correction) that no amount of incremental improvement can cross without a fundamental breakthrough. The correct analogy would be the transistor only if the transistor had required millions of perfectly coordinated vacuum tubes before it could do anything useful at all.

Failed Physics Bets

Cold Fusion: When the Science Is Wrong

In March 1989, electrochemists Martin Fleischmann and Stanley Pons announced they had achieved nuclear fusion at room temperature using a simple electrochemical cell — palladium electrodes immersed in heavy water. The claim, if true, would have solved the world's energy problems. The media response was instant and enormous.

The cold fusion episode unfolded with a timeline that is eerily instructive:

  • March 1989: Fleischmann and Pons hold press conference announcing cold fusion
  • April-June 1989: Major labs worldwide attempt to replicate; most fail
  • July 1989: DOE panel finds no convincing evidence for cold fusion
  • 1989-1990: Cold fusion is largely rejected by mainstream physics
  • 1990s-present: A small community of "true believers" continues research with no reproducible results

Cold fusion failed because the underlying physics was wrong — no nuclear process was occurring. This makes it a poor direct comparison for quantum computing, where the underlying physics (quantum superposition, entanglement) is well established. However, cold fusion offers critical lessons about institutional dynamics:

  • Premature announcement: Scientists announced before peer review, driven by competition
  • Media amplification: Headlines preceded verification
  • Sunk cost continuation: Some researchers continued for decades despite absent results
  • Career lock-in: Scientists who built their careers on cold fusion could not acknowledge failure

The structural dynamics — not the physics — are the relevant comparison. Quantum computing's institutional dynamics parallel cold fusion's in concerning ways.

Theranos: The Information Asymmetry Case Study

Theranos, founded in 2003 by Elizabeth Holmes, claimed to have developed technology to run hundreds of blood tests from a single finger prick. The company raised $700 million and reached a $9 billion valuation before collapsing when investigative journalism revealed that the technology did not work.

Theranos is not a physics parallel to quantum computing — it was outright fraud. However, it provides the most vivid case study in how information asymmetry enables technology deception at massive scale.

| Information Asymmetry Feature | Theranos | Quantum Computing |
|---|---|---|
| Technology claims not independently verifiable by investors | Yes — "trade secret" protection | Partially — results published but benchmarks are debated |
| Charismatic founder dominates narrative | Yes (Holmes) | Yes (multiple figures: company CEOs, Preskill's "quantum supremacy" framing, Google's marketing) |
| Board members lack technical expertise | Yes (Kissinger, Shultz, Mattis) | Partially — some investors lack physics background |
| Revenue claims disconnected from technology | Yes (used third-party machines secretly) | Yes (revenue from consulting/grants, not computation) |
| Critics dismissed or threatened | Yes (legal intimidation) | Yes (called "uninformed" or "anti-progress") |
| Red flags visible to domain experts | Yes (clinical chemists skeptical) | Yes (physicists skeptical of scaling claims) |

The Theranos comparison is not an allegation of fraud against quantum computing companies. It is an allegation of structural similarity in the information asymmetry that enables speculative investment to persist despite insufficient technical evidence.

Key Insight

The Theranos lesson is not "all technology claims are fraudulent." The lesson is that information asymmetry between technologists and investors allows valuations to diverge dramatically from technical reality — and that the correction, when it comes, is sudden and devastating. The question for quantum computing investors is not "Is there fraud?" but "Is the information asymmetry large enough to sustain a valuation disconnected from technical reality?" The answer, given that domain expert skeptics are systematically ignored while optimistic narratives dominate, is clearly yes.

The Concorde: When the Technology Works But the Economics Don't

The Concorde supersonic airliner is the most relevant historical parallel for quantum computing because the technology actually worked. The Concorde flew passengers at Mach 2 for 27 years (1976-2003). It was a genuine engineering triumph. And it was an unambiguous economic disaster.

Key economic facts about the Concorde:

  • Development cost: £1.3 billion ($6 billion in 2025 dollars), split between British and French governments
  • Revenue recovery: Never recovered development costs; operational break-even only achieved in final years through creative accounting
  • Passenger capacity: 100 seats vs. 400+ for Boeing 747
  • Fuel consumption: 3x more fuel per passenger-mile than subsonic aircraft
  • Ticket price: 10-20x higher than subsonic equivalents
  • Routes operated: London-New York, Paris-New York (only two routes were commercially viable)
  • Total aircraft built: 20 (only 14 entered service), vs. 1,500+ Boeing 747s

The Concorde demonstrates that technological achievement and economic viability are independent variables. A technology can work perfectly — supersonic flight was never in question — while failing economically because the cost-performance ratio does not justify adoption.
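A Fermi-style check makes the development-cost overhang concrete. The $6 billion figure, seat count, and service years come from the list above; the fleet-wide flight frequency and load factor are assumptions chosen for illustration, not historical data:

```python
# Back-of-envelope: development cost per passenger that Concorde tickets
# would have needed to recover. Values marked (assumed) are illustrative.
DEV_COST = 6e9          # $6B in 2025 dollars (from the list above)
SEATS = 100             # seats per aircraft (from the list above)
FLIGHTS_PER_DAY = 4     # fleet-wide daily departures (assumed)
LOAD_FACTOR = 0.7       # average fraction of seats filled (assumed)
YEARS = 27              # commercial service, 1976-2003

# Total passengers carried over the program's life
passengers = FLIGHTS_PER_DAY * 365 * YEARS * SEATS * LOAD_FACTOR

# Development cost that each ticket would have needed to absorb
dev_cost_per_ticket = DEV_COST / passengers
```

On these assumptions roughly 2.8 million passengers flew, so each ticket would have needed to carry about $2,200 of development cost alone, before fuel, crew, or maintenance. That is consistent with the 10-20x ticket premium never recovering the investment.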

| Factor | Concorde | Quantum Computing |
|---|---|---|
| Does the technology work? | Yes (Mach 2 flight) | Maybe (small scale only) |
| Does it outperform alternatives? | Yes (speed) | Unknown (no proven advantage) |
| At what cost multiple? | 10-20x subsonic | 1,000-1,000,000x classical (estimated) |
| Customer willingness to pay premium? | Very few (elite travelers) | Unknown |
| Scalable to mass market? | No (physics limits: sonic boom, fuel) | No (physics limits: error correction, cooling) |
| Economically viable? | No | Very unlikely |

The Concorde parallel is devastating for quantum computing because it concedes the strongest version of the proponent's case — even if quantum computing works exactly as promised — and still arrives at the conclusion that it will not be economically viable. If a technology costs 1,000x more than classical alternatives to solve the same problem, it does not matter that it works on different physics principles.

Diagram: Historical Parallel Timeline

Physics-Based Technology Bets Timeline

Type: timeline
sim-id: physics-bet-timeline
Library: vis-timeline
Status: Specified

Bloom Taxonomy: Analyze (L4)
Bloom Verb: compare, contrast, organize
Learning Objective: Students will compare the timelines from theoretical discovery to commercial viability across successful and failed physics-based technologies, organizing them to identify structural patterns that predict success or failure.

Instructional Rationale: A visual timeline enables the Analyze objective by allowing students to see, at a glance, how long each technology took from discovery to commercial revenue (or failure). The contrast between successful technologies (short gaps) and failed ones (no commercial milestone) is immediately apparent in visual form but requires laborious calculation from text.

Time period: 1938-2030

Orientation: Horizontal, scrollable

Groups (color-coded rows):

1. "Successful Bets" (green background #E8F5E9):
   - Transistor: invention 1947, commercial radio 1954, IC 1958, microprocessor 1971
   - Laser: demonstration 1960, medical use 1961, barcode 1974, CD player 1982
   - Fiber Optics: low-loss fiber 1970, first commercial link 1977, transatlantic cable 1988
   - GPS Clocks: program start 1973, first satellite 1978, civilian access 1983, full constellation 1995
   - MRI: NMR demonstrated 1946, first scan 1977, FDA approval 1984
2. "Failed Bets" (red background #FFEBEE):
   - Cold Fusion: announcement 1989, DOE rejection 1989, continued fringe research (ongoing, dimming marker)
   - Theranos: founded 2003, peak valuation 2014, collapse 2018, conviction 2022
   - Concorde: first flight 1969, commercial service 1976, retired 2003
3. "Quantum Computing" (yellow background #FFF8E1):
   - Feynman proposal 1982, Shor's algorithm 1994, first 2-qubit gate 1998, "quantum supremacy" claim 2019, commercial applications: "?" marker at 2030+

Milestone markers:
- Green diamonds: First commercial revenue
- Red X markers: Failure/rejection point
- Yellow question mark: Unknown future milestone

Interactive features:
- Hover over any milestone: tooltip with date, description, and significance
- Click on a technology name: highlight that technology's full timeline, dim others
- Zoom in/out with mouse wheel
- Drag to scroll through time
- Toggle button: "Show time-to-revenue" adds vertical lines from invention to first revenue for each successful technology

Special visual element:
- A horizontal bracket labeled "Time to first revenue" spanning from invention to first commercial product for each successful technology
- Quantum computing's bracket extends off the right edge with "40+ years and counting..."

Canvas: Responsive width, 500px height
Implementation: vis-timeline with groups, custom styling, hover tooltips

Charismatic Founder Risk

Across both successful and failed technologies, one pattern stands out as a warning sign rather than a positive indicator: the dominance of a charismatic founder whose personal narrative substitutes for technical evidence.

Successful technologies rarely depend on a single charismatic figure. The transistor was developed by a team (Bardeen, Brattain, Shockley) within a large research institution (Bell Labs). Lasers were independently invented by multiple groups. Fiber optics was a collective engineering effort across Corning, AT&T, and academic labs. The technologies succeeded because the physics worked, not because a compelling individual made the case.

Failed technologies, by contrast, frequently feature charismatic individuals whose personal credibility substitutes for reproducible results:

  • Cold fusion: Fleischmann was a respected electrochemist whose reputation initially shielded the claim from scrutiny
  • Theranos: Elizabeth Holmes's personal charisma convinced billionaires and former Secretaries of State to invest without due diligence
  • Concorde: Government champions in both Britain and France made the project a matter of national prestige rather than economic calculation

The charismatic founder risk manifests when:

  • An individual's vision is treated as more compelling than experimental evidence
  • Criticism is deflected as personal attacks on the founder rather than technical objections
  • The founder's media presence grows faster than the technology's capabilities
  • Investors cite the founder's qualities ("brilliant," "visionary") rather than technical metrics

In the quantum computing ecosystem, multiple charismatic figures play this role — company CEOs, prominent professors, and government science advisors whose personal authority is used to justify continued investment. The question is not whether these individuals are intelligent (they are), but whether their authority is being used as a substitute for evidence of commercial viability.

Fermi's Tip

When evaluating any technology investment, apply the "anonymous claim" test: if the same technical claims were made by an anonymous researcher with no reputation or charisma, would the evidence alone be sufficient to justify the investment? If the answer is no — if the investment depends on who is making the claim rather than what the evidence shows — you are witnessing charismatic founder risk.

When Scientists Mislead

Scientists can mislead without committing fraud. The spectrum from honest science to deliberate deception includes several intermediate positions that are relevant to quantum computing:

| Category | Description | Quantum Computing Examples |
|---|---|---|
| Honest uncertainty | Genuine scientific questions without clear answers | "We don't know if error correction will scale" |
| Optimistic framing | Presenting results in the most favorable light | "We achieved 99.5% gate fidelity" (omitting that 99.99% is needed) |
| Selective reporting | Publishing positive results, filing away negative ones | Companies announcing qubit records but not error rate plateaus |
| Misleading metrics | Using technically accurate but practically meaningless benchmarks | "Quantum volume doubled" (without noting it remains useless for applications) |
| Omission of context | Failing to provide information needed for informed evaluation | "Quantum advantage demonstrated" (without noting the problem was contrived) |
| Conflict-of-interest-driven claims | Financial stakes influence scientific communication | CEOs with stock options announcing "breakthroughs" |

Most quantum computing communication falls in the middle of this spectrum — optimistic framing, selective reporting, and misleading metrics rather than outright fabrication. The physics is real; the misleading part is the implication that laboratory demonstrations translate to commercial viability.

The difference between honest uncertainty and misleading communication often comes down to what is omitted. When a quantum computing company announces a qubit count milestone, the honest context — that error rates remain too high for useful computation, that error correction would require 1,000x more qubits, and that classical computers already solve the demonstrated problems faster — is systematically omitted. This is not fraud, but it is not honest science either.
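The 99.5%-versus-99.99% gate fidelity contrast has a simple quantitative punchline. Under the standard simplifying assumption that gate errors are independent, a circuit of N gates succeeds with probability roughly f^N, so fidelities that sound nearly identical diverge enormously:

```python
def circuit_success(fidelity, n_gates):
    """Probability that an n-gate circuit runs error-free, assuming
    independent per-gate errors (a textbook simplification)."""
    return fidelity ** n_gates

# A modest 1,000-gate circuit:
announced = circuit_success(0.995, 1000)   # roughly 0.7% success
needed = circuit_success(0.9999, 1000)     # roughly 90% success
```

A half-percent fidelity gap is the difference between a machine that almost always fails and one that usually works, which is why the omitted context in fidelity announcements matters so much.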

Diagram: Technology Success Pattern Analyzer

Technology Success Pattern Analyzer MicroSim

Type: microsim
sim-id: tech-success-analyzer
Library: p5.js
Status: Specified

Bloom Taxonomy: Evaluate (L5)
Bloom Verb: assess, judge, justify
Learning Objective: Students will assess any technology against the six structural success criteria identified in this chapter, judge whether the technology's profile matches historical successes or failures, and justify their conclusion with specific evidence.

Instructional Rationale: An interactive scoring tool is appropriate for the Evaluate objective because students must make judgments on each criterion and see how the aggregate score compares to historical reference technologies. A passive visualization would reduce this to a Remember exercise.

Canvas layout:
- Left panel (60% width): Radar/spider chart showing scores across six criteria
- Right panel (40% width): Control panel with scoring inputs and comparison tools

Interactive controls:
- Technology selector dropdown (pre-loaded options): "Transistor", "Laser", "Fiber Optics", "GPS Clocks", "MRI", "Cold Fusion", "Theranos", "Concorde", "Quantum Computing", "Custom (enter your own)"
- When a pre-loaded technology is selected, scores auto-populate based on chapter data
- When "Custom" is selected, six sliders activate (0-10 each):
  1. "Early Practical Demonstration" (0 = none, 10 = immediate)
  2. "Continuous Improvement Pathway" (0 = discontinuous barrier, 10 = clear continuous path)
  3. "Intermediate Commercial Products" (0 = none, 10 = multiple revenue streams)
  4. "Scalable Manufacturing" (0 = impossible, 10 = mass production ready)
  5. "Clear Customer Demand" (0 = no identified market, 10 = urgent demand)
  6. "Favorable Scaling Physics" (0 = scaling makes it worse, 10 = scaling improves it)
- "Compare to..." dropdown: overlay a second technology's radar chart in dashed lines
- "Show Success Zone" toggle: highlight the radar region where all successful technologies score

Visual elements (left panel):
- Six-axis radar chart with labeled axes
- Filled polygon for primary technology (color-coded: green for score > 40, yellow for 20-40, red for < 20)
- Dashed polygon outline for comparison technology
- Shaded "success zone" (optional toggle) showing minimum scores of all five successful technologies
- Total score displayed at center of radar chart

Pre-loaded data:
- Transistor: [9, 10, 9, 10, 9, 10] = 57/60
- Laser: [9, 9, 8, 9, 7, 9] = 51/60
- Fiber Optics: [8, 9, 8, 9, 9, 9] = 52/60
- GPS Clocks: [8, 8, 7, 8, 8, 8] = 47/60
- MRI: [7, 8, 7, 7, 9, 7] = 45/60
- Cold Fusion: [2, 0, 0, 0, 10, 0] = 12/60
- Theranos: [1, 0, 0, 0, 8, 0] = 9/60
- Concorde: [8, 3, 2, 1, 2, 1] = 17/60
- Quantum Computing: [4, 1, 0, 0, 3, 0] = 8/60

Output panel (right side):
- Numerical total: "Score: X/60"
- Category label: "Profile: Strong Match to [Successes/Failures]"
- Text: "Most similar to: [closest historical match by Euclidean distance]"
- Warning banner if score < 20: "Historical technologies scoring below 20 have never achieved commercial viability"

Background: aliceblue
Canvas: Responsive width, 550px height

Implementation: p5.js with radar chart drawing, dropdown selectors, slider controls, distance calculation
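The "Most similar to" readout in the output panel can be sketched as a nearest-neighbor lookup over the spec's pre-loaded score vectors. The numbers below are copied from the spec; the function names are illustrative, not part of any real MicroSim codebase:

```python
import math

# Six-criteria score vectors from the MicroSim spec (0-10 each):
# [early demo, continuous improvement, intermediate products,
#  scalable manufacturing, customer demand, favorable scaling]
SCORES = {
    "Transistor":   [9, 10, 9, 10, 9, 10],
    "Laser":        [9, 9, 8, 9, 7, 9],
    "Fiber Optics": [8, 9, 8, 9, 9, 9],
    "GPS Clocks":   [8, 8, 7, 8, 8, 8],
    "MRI":          [7, 8, 7, 7, 9, 7],
    "Cold Fusion":  [2, 0, 0, 0, 10, 0],
    "Theranos":     [1, 0, 0, 0, 8, 0],
    "Concorde":     [8, 3, 2, 1, 2, 1],
}

QUANTUM_COMPUTING = [4, 1, 0, 0, 3, 0]

def total(vec):
    """Aggregate score out of 60."""
    return sum(vec)

def nearest(vec, candidates):
    """Name of the candidate profile at smallest Euclidean distance."""
    return min(candidates, key=lambda name: math.dist(vec, candidates[name]))

qc_total = total(QUANTUM_COMPUTING)           # 8 out of 60
closest = nearest(QUANTUM_COMPUTING, SCORES)  # nearest historical profile
```

On these numbers, quantum computing's nearest historical profile is the Concorde (Euclidean distance about 5.2, versus about 5.9 for Theranos), which matches the chapter's earlier argument that "technology works, economics don't" is the most relevant parallel.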

The Pattern: What History Teaches

Drawing together all the cases in this chapter, we can articulate the central lesson of historical technology analysis. Successful physics-based technologies share a common developmental pattern:

  1. Early demonstration (within 5-10 years of theoretical proposal)
  2. Continuous improvement that generates incremental commercial value
  3. Intermediate products that fund further development
  4. Scaling that helps rather than hurts performance
  5. Multiple independent validation by teams with no financial stake

Failed physics-based technologies share a different pattern:

  1. Extended development without practical demonstration (decades)
  2. Discontinuous barrier requiring fundamental breakthrough before any commercial value
  3. No intermediate products — the technology is "all or nothing"
  4. Scaling that introduces new problems (error accumulation, cost explosion)
  5. Validation concentrated among financially interested parties

Quantum computing matches the failed technology pattern on four of five criteria. Its only saving grace compared to cold fusion is that the underlying physics is validated at small scale — quantum superposition and entanglement are real phenomena. But validated physics is necessary but not sufficient for commercial viability, as the Concorde demonstrates. Supersonic flight physics was never in question; the economics were.

Bias Alert

Beware the "survivorship bias" in technology analogies. When proponents say "they doubted the transistor too," they're selectively citing a survivor. For every transistor, there are dozens of physics-based technologies that were doubted and rightfully so — they never worked commercially. The relevant question is not "were successful technologies ever doubted?" (yes, always) but "what percentage of doubted technologies actually succeeded?" (very small). Doubt is the default position; success is the exception that requires extraordinary evidence.
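The survivorship point reduces to a base-rate calculation. The counts below are purely hypothetical, chosen only to illustrate the structure of the argument, not drawn from any real survey:

```python
# Hypothetical reference class (illustrative counts only): suppose 100
# physics-based technologies drew serious expert doubt in their first
# decade, and 5 of them later achieved commercial success.
doubted_total = 100
doubted_successes = 5

# "They doubted the transistor too" cites one of the 5 survivors.
# The decision-relevant number is the base rate over ALL doubted bets:
base_rate = doubted_successes / doubted_total  # 0.05
```

Citing a survivor tells you nothing unless you also count the failures; on these illustrative numbers, expert doubt leaves only a 5% prior of success, so the analogy carries far less evidential weight than it appears to.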

Diagram: Success vs. Failure Pattern Comparison

Success vs. Failure Pattern Comparison Chart

Type: chart sim-id: success-failure-patterns
Library: Chart.js
Status: Specified

Bloom Taxonomy: Analyze (L4)
Bloom Verb: compare, contrast, distinguish
Learning Objective: Students will compare the structural profiles of successful and failed technologies across the six criteria, contrast them with quantum computing's profile, and distinguish which reference class quantum computing most closely matches.

Instructional Rationale: A grouped bar chart enables the Analyze objective by placing the six criteria side by side for three groups (success average, failure average, quantum computing), making structural pattern matching visual and immediate.

Chart type: Grouped horizontal bar chart

Y-axis: Six criteria (categorical):

  1. "Early Practical Demonstration"
  2. "Continuous Improvement Pathway"
  3. "Intermediate Commercial Products"
  4. "Scalable Manufacturing"
  5. "Clear Customer Demand"
  6. "Favorable Scaling Physics"

X-axis: Score (0-10)

Data series:

  1. "Successful Technologies (Average)" — green bars #388E3C — Scores: [8.2, 8.8, 7.8, 8.6, 8.4, 8.6]
  2. "Failed Technologies (Average)" — red bars #E53935 — Scores: [3.7, 1.0, 0.7, 0.3, 6.7, 0.3]
  3. "Quantum Computing" — orange bars #FF7043 — Scores: [4, 1, 0, 0, 3, 0]

Interactive features:

  - Hover over any bar: tooltip showing technology name, criterion, score, and brief explanation
  - Click on criterion label: expand to show individual technology scores for that criterion
  - Toggle buttons: show/hide each data series independently
  - "Show Gap Analysis" button: overlays arrows showing the gap between QC and success average

Title: "Technology Success Pattern Analysis: Where Does Quantum Computing Fall?"
Legend: Positioned at top
Canvas: Responsive width, 500px height
Background: aliceblue

Implementation: Chart.js horizontal bar chart with grouped datasets, custom click handlers
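A minimal configuration sketch for the chart specified above, assuming Chart.js v3 or later (which draws horizontal bars via the `indexAxis` option). Tooltip explanation text, the click-to-expand handlers, and the gap-analysis overlay are omitted, and the canvas element id is assumed to match the sim-id.

```javascript
// Minimal Chart.js (v3+) config sketch for the grouped horizontal
// bar chart. Colors and scores come from the spec above.
const criteria = [
  "Early Practical Demonstration",
  "Continuous Improvement Pathway",
  "Intermediate Commercial Products",
  "Scalable Manufacturing",
  "Clear Customer Demand",
  "Favorable Scaling Physics",
];

const config = {
  type: "bar",
  data: {
    labels: criteria,
    datasets: [
      { label: "Successful Technologies (Average)",
        backgroundColor: "#388E3C",
        data: [8.2, 8.8, 7.8, 8.6, 8.4, 8.6] },
      { label: "Failed Technologies (Average)",
        backgroundColor: "#E53935",
        data: [3.7, 1.0, 0.7, 0.3, 6.7, 0.3] },
      { label: "Quantum Computing",
        backgroundColor: "#FF7043",
        data: [4, 1, 0, 0, 3, 0] },
    ],
  },
  options: {
    indexAxis: "y", // horizontal bars
    scales: {
      x: { min: 0, max: 10,
           title: { display: true, text: "Score (0-10)" } },
    },
    plugins: {
      legend: { position: "top" },
      title: {
        display: true,
        text: "Technology Success Pattern Analysis: Where Does Quantum Computing Fall?",
      },
    },
  },
};
// In the page (canvas id assumed from the sim-id):
// new Chart(document.getElementById("success-failure-patterns"), config);
```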

Applying History to Quantum Computing

With the historical reference class established, we can now state the argument concisely. Quantum computing has been under active development for over 40 years. In that time:

  • No commercial product has been produced (unlike every successful case)
  • No continuous improvement pathway exists (unlike every successful case)
  • No intermediate revenue has been generated from computation (unlike every successful case)
  • Scaling makes the problem harder, not easier (unlike every successful case)
  • The commercial case rests on future breakthroughs, not demonstrated capability (like every failed case)

History does not prove that quantum computing will fail. But it shows that technologies with this structural profile have never succeeded commercially. The burden of evidence lies with those claiming that quantum computing will be the first exception — and they have not yet met it.
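The "structural profile" claim can be made quantitative with the scores from the pattern-comparison chart. This is an illustrative calculation, not a rigorous methodology: it simply measures the Euclidean distance between quantum computing's six criterion scores and each reference-class average.

```javascript
// Illustrative only: how far is quantum computing's criterion
// profile from the success and failure reference-class averages?
const successAvg = [8.2, 8.8, 7.8, 8.6, 8.4, 8.6];
const failureAvg = [3.7, 1.0, 0.7, 0.3, 6.7, 0.3];
const quantum    = [4, 1, 0, 0, 3, 0];

function distance(a, b) {
  // Square root of the sum of squared per-criterion differences
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

const toSuccess = distance(quantum, successAvg); // ≈ 17.8
const toFailure = distance(quantum, failureAvg); // ≈ 3.8

console.log(`Distance to success profile: ${toSuccess.toFixed(1)}`);
console.log(`Distance to failure profile: ${toFailure.toFixed(1)}`);
// Quantum computing's profile sits far closer to the failure class.
```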

Chapter Summary

Excellent Investigative Work!

Fermi celebrates You now have a historical reference class for evaluating physics-based technology investments. You can identify the structural features — continuous improvement, intermediate products, favorable scaling — that separate technologies that transform civilization from those that consume billions and deliver nothing. You understand why charismatic founders and information asymmetry are warning signs, not reassurances. And you can articulate why quantum computing's structural profile more closely matches the failures than the successes. That's historical analysis as an investment tool. Outstanding work, fellow investigator!

Review Questions

Question 1: Identify the six structural features that successful physics-based technologies share and explain why each matters for commercial viability.

The six features are: (1) Early practical demonstration — confirms the technology works outside theory; (2) Continuous improvement pathway — enables incremental commercial value without requiring fundamental breakthroughs; (3) Intermediate commercial products — generate revenue that funds further development, creating a virtuous cycle; (4) Scalable manufacturing — allows mass production at decreasing unit cost; (5) Clear customer demand — ensures a market exists for the technology's capabilities; (6) Favorable scaling physics — means larger systems perform better, enabling growth. Each feature reduces risk by providing empirical evidence of viability at progressively larger scales. Quantum computing fails on features 2, 3, 4, and 6 — the same features where all failed technologies scored low.

Question 2: Why is the Concorde a more relevant parallel to quantum computing than cold fusion?

Cold fusion failed because the underlying physics was wrong — no nuclear process was occurring. This makes it a weak comparison because quantum computing's underlying physics (superposition, entanglement) is validated. The Concorde is more relevant because the physics worked perfectly — supersonic flight was never in question — but the economics made it commercially unviable. Concorde's costs were 10-20x higher than subsonic alternatives, limiting it to a tiny niche market. Similarly, even if quantum computing achieves fault-tolerant operation, the cost per computation may be 1,000x or more that of classical alternatives, rendering it economically impractical for all but the most exotic applications. The Concorde proves that "the technology works" is not sufficient for commercial success.

Question 3: Explain how the 'anonymous claim test' helps investors identify charismatic founder risk.

The anonymous claim test asks: if the same technical claims were made by an anonymous researcher with no reputation, media presence, or personal charisma, would the evidence alone justify the investment? If yes, the investment is grounded in technical merit. If no — if the investor's confidence depends on who is making the claim rather than what the evidence shows — then charismatic founder risk is present. This test strips away authority bias and forces evaluation of the evidence itself. In quantum computing, many investment decisions are influenced by the prominence of individual advocates (CEOs, professors, government advisors) rather than by independent verification of commercial viability claims.

Question 4: How does information asymmetry enable technology investments to persist despite insufficient technical evidence?

Information asymmetry occurs when one party (technologists) possesses critical knowledge that the other party (investors) lacks. In technology investment, this manifests as: investors cannot independently verify technical claims; companies classify results as proprietary; technical benchmarks require domain expertise to interpret; and negative results are not published. This asymmetry allows valuations to diverge from technical reality because investors rely on the company's self-reported claims. The Theranos case is the extreme example — the technology didn't work at all, but information asymmetry sustained a $9 billion valuation for years. In quantum computing, the asymmetry is less extreme but structurally similar: most investors and policymakers cannot evaluate whether qubit count milestones translate to commercial viability, so they rely on the optimistic narratives of financially interested parties.

Question 5: Using the historical reference class framework, evaluate a technology not discussed in this chapter (e.g., fusion energy, brain-computer interfaces, or flying cars) against the six success criteria.

For fusion energy: (1) Early demo: Partially — fusion reactions achieved since 1950s, but net energy gain only claimed in 2022 (NIF) under non-replicable conditions. Score: 4/10. (2) Continuous improvement: No — fusion faces a discontinuous barrier (sustained net energy gain) similar to quantum computing's error correction barrier. Score: 2/10. (3) Intermediate products: No — no commercial revenue from fusion energy after 70+ years of research. Score: 0/10. (4) Scalable manufacturing: Unknown — ITER-scale reactors cost $25B+ each. Score: 1/10. (5) Customer demand: Yes — enormous demand for clean energy. Score: 10/10. (6) Favorable scaling: Unclear — larger tokamaks have better confinement but far greater engineering complexity. Score: 3/10. Total: 20/60. This places fusion energy in the same structural category as the Concorde and quantum computing — technologies where real physics and real demand exist, but the engineering pathway to economic viability remains undemonstrated.
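The worked evaluation above can be packaged as a small, reusable scoring helper. The criterion names and the fusion scores are taken directly from the model answer; the 20/60 total is just arithmetic, and the helper itself is a sketch, not part of the chapter's framework.

```javascript
// Score a candidate technology against the six success criteria.
// Fusion scores below come from the worked answer to Question 5.
const criteria = [
  "Early practical demonstration",
  "Continuous improvement pathway",
  "Intermediate commercial products",
  "Scalable manufacturing",
  "Clear customer demand",
  "Favorable scaling physics",
];

function evaluate(name, scores) {
  if (scores.length !== criteria.length) {
    throw new Error("expected one score per criterion");
  }
  const total = scores.reduce((a, b) => a + b, 0);
  return { name, total, max: criteria.length * 10 };
}

const fusion = evaluate("Fusion energy", [4, 2, 0, 1, 10, 3]);
console.log(`${fusion.name}: ${fusion.total}/${fusion.max}`);
// Fusion energy: 20/60
```

The same helper applies unchanged to brain-computer interfaces, flying cars, or any other candidate, provided the investigator supplies six defensible scores.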