
Quiz: What Is a General Purpose Technology?

Test your understanding of General Purpose Technology theory, historical GPTs, and why quantum computing fails to qualify.


1. Who formalized the concept of General Purpose Technologies, and in what year was their foundational paper published?

  A. Bresnahan and Trajtenberg in 1995
  B. Moore and Noyce in 1965
  C. Schumpeter and Keynes in 1942
  D. Berners-Lee and Cerf in 1991

The correct answer is A. Timothy Bresnahan and Manuel Trajtenberg formalized the GPT framework in their 1995 paper "General Purpose Technologies: Engines of Growth?" This theory provides a rigorous framework for identifying the handful of technologies in human history that have fundamentally restructured economies and societies, distinguishing truly transformative technologies from important but narrow ones.

Concept Tested: General Purpose Technology


2. Which of the following is NOT one of the three required characteristics of a General Purpose Technology?

  A. Must be broadly applicable across industries
  B. Must improve over time in performance or cost
  C. Must generate revenue within five years of invention
  D. Must enable complementary innovations and new industries

The correct answer is C. The three GPT criteria are: (1) broad applicability across many industries and sectors, (2) sustained improvement in performance or cost over decades, and (3) enabling complementary innovations — new products, industries, and technologies. Generating revenue within a specific timeframe is not a GPT criterion. All three criteria are necessary; a technology satisfying only one or two does not qualify as a GPT.

Concept Tested: GPT Characteristics


3. The transistor's cost fell from approximately $1 in the 1960s to approximately \(\$10^{-10}\) in the 2020s. Which GPT criterion does this primarily demonstrate?

  A. Broadly applicable
  B. Improves over time
  C. Enables new innovations
  D. Replaces all previous technologies

The correct answer is B. A ten-billion-fold (\(10^{10}\)) cost reduction over 60 years is the defining example of sustained improvement, the second GPT criterion. Moore's Law drove transistor density to double approximately every two years for roughly six decades. This sustained improvement trajectory is what made the transistor increasingly valuable and drove adoption across ever-expanding sectors, a hallmark of a genuine GPT.

Concept Tested: Must Improve Over Time
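The arithmetic behind this answer is easy to check. A short sketch in Python, using the quiz's approximate figures; the cost-halves-per-doubling assumption is illustrative, not a claim from the quiz:

```python
import math

# Approximate figures from the question: ~$1 per transistor in the
# 1960s, ~$1e-10 per transistor in the 2020s.
start_cost = 1.0
end_cost = 1e-10

factor = start_cost / end_cost
print(f"Total cost reduction: {factor:.0e}x")  # 1e+10x (ten-billion-fold)

# If cost halves with each Moore's Law doubling (illustrative
# assumption), this implies ~33 halvings, i.e. ~66 years at one
# halving every two years -- consistent with the 60-year window.
halvings = math.log2(factor)
print(f"Implied halvings: {halvings:.1f}")
print(f"Years at one halving per two years: {halvings * 2:.0f}")
```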


4. A technology investor claims that quantum computing will be "as transformative as electricity." Using the GPT framework, which criterion presents the strongest counterargument?

  A. Quantum computing has not improved at all since its invention
  B. Quantum computing costs more than electricity per unit of output
  C. Quantum computing was invented more recently than electricity
  D. Quantum computing is applicable to less than 1% of global computation, while electricity powers over 95% of economic activity

The correct answer is D. The broad applicability criterion is where the comparison most dramatically fails. Electricity powers lighting, heating, cooling, manufacturing, transportation, communication, computation, and virtually every modern device — affecting over 95% of economic activity. Quantum computing could theoretically benefit approximately 1% of global computation, limited to problems with specific mathematical structure. A technology applicable to 1% of computation is a specialty tool, not a broadly applicable GPT.

Concept Tested: QC Is Narrowly Applicable


5. What fraction of global computational workloads could theoretically benefit from quantum computing, even under the most generous assumptions?

  A. Approximately 25%
  B. Approximately 1%
  C. Approximately 10%
  D. Approximately 50%

The correct answer is B. The chapter's analysis of global computational workloads shows that approximately 1.2% could theoretically benefit from quantum computing — including cryptanalysis (~1%), quantum system simulation (~0.1%), and certain optimization variants (~0.1%). The remaining ~99% — web serving, AI/ML, databases, media processing, business applications, and classical scientific simulation — receives zero benefit. Even this 1.2% requires hardware breakthroughs that may never materialize.

Concept Tested: QC Is Narrowly Applicable
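The 1.2% figure is simply the sum of the cited workload categories. A minimal sketch in Python; the percentages are the quiz's own approximations, not independent data:

```python
# Approximate quantum-amenable workload shares cited in the answer.
quantum_amenable = {
    "cryptanalysis": 0.010,
    "quantum system simulation": 0.001,
    "certain optimization variants": 0.001,
}

total = sum(quantum_amenable.values())
print(f"Theoretical quantum-amenable share: {total:.1%}")  # 1.2%
print(f"Classical-only share: {1 - total:.1%}")            # 98.8%
```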


6. How does AI/ML demonstrate the "enables new innovations" GPT criterion?

  A. AI/ML has spawned new industries including AI-generated content, autonomous systems, and personalized medicine that could not exist without modern ML
  B. AI/ML has attracted more venture capital funding than any other technology
  C. AI/ML uses more electricity than any previous computing technology
  D. AI/ML has been researched for over 70 years, longer than quantum computing

The correct answer is A. The third GPT criterion requires spawning complementary innovations — new products, industries, and technologies that could not exist without the GPT. AI/ML has enabled AI-generated content (text, images, video, code), autonomous systems, personalized medicine, real-time language translation, and new forms of human-computer interaction. These represent multi-billion-dollar industries forming around AI applications. By contrast, quantum computing has spawned zero complementary innovations — no new industries, products, or applications depend on it.

Concept Tested: AI/ML as Emerging GPT


7. Why does the chapter argue that quantum computers would be co-processors rather than general-purpose computers, even if they achieve fault tolerance?

  A. Quantum computers are too expensive to operate continuously
  B. Quantum computing patents are owned by too many different companies
  C. Fundamental constraints — measurement destroys states, the no-cloning theorem prevents copying, and error rates are 15 orders of magnitude worse than classical — prevent general-purpose use
  D. Government regulations prohibit using quantum computers for general-purpose tasks

The correct answer is C. Multiple fundamental constraints prevent quantum computers from replacing classical machines: measurement collapses quantum states (no persistent memory), the no-cloning theorem prevents copying data (essential for classical computing), quantum gate error rates of \(\sim 10^{-3}\) are 15 orders of magnitude worse than classical transistors at \(\sim 10^{-18}\), and I/O bottlenecks limit data loading and extraction. These are physics constraints, not engineering problems, making quantum computers specialized accelerators analogous to GPUs or FPGAs.

Concept Tested: QC Cannot Replace Classical
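The "15 orders of magnitude" claim follows directly from the two error rates quoted above; a one-line check in Python:

```python
import math

quantum_gate_error = 1e-3   # typical quantum gate error rate cited above
classical_error = 1e-18     # classical transistor error rate cited above

gap = math.log10(quantum_gate_error / classical_error)
print(f"Error-rate gap: {gap:.0f} orders of magnitude")  # 15
```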


8. A proponent argues quantum computing is "too early to judge" — that we are in the equivalent of 1885 for electricity. Why does this analogy fail?

  A. Electricity was never considered a risky investment
  B. By 1885, electricity was already powering streetlights, factories, and trolley systems with a visibly emerging innovation ecosystem, while quantum computing at a comparable 30-year stage has powered zero commercial applications
  C. The 1880s had no formal investment frameworks for comparison
  D. Electricity was entirely government funded, unlike quantum computing

The correct answer is B. By 1885, just three years after Edison's first power station, electricity was powering streetlights, small factories, and the first electric trolley systems. Within a decade it enabled elevators and skyscrapers. The complementary innovation ecosystem was visibly emerging. Quantum computing, at approximately 30 years from first theoretical proposals, has not powered a single commercial application. Moreover, the "too early" argument is unfalsifiable — it can be deployed indefinitely regardless of how long the technology fails to produce results.

Concept Tested: QC Fails Every GPT Test


9. Over its first 30 years, quantum computing increased qubit count by approximately 20x while the transistor increased density by approximately 25,000x. What does this comparison reveal about GPT qualification?

  A. Quantum computing is improving faster than the transistor did and will soon qualify as a GPT
  B. The comparison is invalid because qubits and transistors measure different things
  C. Both technologies show comparable improvement and will eventually converge
  D. Quantum computing's improvement trajectory is far too slow to meet the sustained improvement criterion, especially since useful computational capacity has improved only marginally

The correct answer is D. The GPT criterion requires sustained improvement by orders of magnitude over decades. The transistor achieved a 25,000x density improvement in its first 30 years while already generating billions in revenue. Quantum computing achieved only a ~20x improvement in qubit count over a comparable period, and, critically, useful computational capacity (qubit count times gate fidelity times connectivity times coherence time) has improved only marginally because error rates have barely budged from \(\sim 10^{-3}\). The total 30-year improvement is more than 1,000x smaller than what genuine GPTs demonstrate.

Concept Tested: Must Improve Over Time
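The trajectory comparison can be made concrete by converting both 30-year totals into equivalent compound annual growth rates. A sketch in Python; the improvement factors are the quiz's approximations:

```python
# 30-year improvement factors cited in the question.
transistor_30yr = 25_000   # transistor density, first 30 years
quantum_30yr = 20          # qubit count, first 30 years

# Equivalent compound annual growth rate for each trajectory.
t_rate = transistor_30yr ** (1 / 30) - 1
q_rate = quantum_30yr ** (1 / 30) - 1
print(f"Transistor: ~{t_rate:.0%} per year")  # ~40% per year
print(f"Qubits:     ~{q_rate:.0%} per year")  # ~11% per year
print(f"Gap in total 30-year improvement: {transistor_30yr / quantum_30yr:.0f}x")  # 1250x
```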


10. Why does the chapter argue that the $100+ billion invested in quantum computing is "mispriced"?

  A. The money was invested in the wrong companies
  B. Quantum computing should have received $200 billion instead
  C. The investment level assumes GPT-scale transformative returns, but GPT analysis shows quantum computing is at best a specialized co-processor for less than 1% of computation
  D. The investment should have been spread equally across all quantum computing companies

The correct answer is C. The investment thesis implicitly assumes quantum computing will be transformative on the scale of the transistor or the internet — technologies that reshaped entire economies. The GPT analysis shows quantum computing fails all three criteria and is, at best, a specialized co-processor for a tiny fraction of computation. Co-processors can be valuable (GPUs generate billions in revenue), but the scale of returns is fundamentally different from a GPT. The $100+ billion investment is calibrated to GPT-level returns, not co-processor-level returns.

Concept Tested: QC Fails Every GPT Test