Quiz: The Emperor's New Algorithm
Test your ability to distinguish between what AI actually does and what press releases claim it does.
1. The term "artificial intelligence" was coined in 1956 at a conference at Dartmouth College. How long did the researchers estimate it would take to simulate human intelligence?
- A. One summer
- B. Ten years
- C. Fifty years
- D. They declined to provide a timeline, citing insufficient training data
Show Answer
The correct answer is A. The researchers proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it" and estimated this would take one summer. It has taken somewhat longer. The gap between this prediction and reality is the origin story of "five years away" syndrome.
Concept Tested: Artificial Intelligence
2. In machine learning, what is "training data"?
- A. The final exam questions leaked to students before the test
- B. A collection of labeled examples from which a model learns patterns
- C. The motivational posters displayed in the data center
- D. Any data generated after the model has been deployed to production
Show Answer
The correct answer is B. Training data is the collection of examples — labeled with correct answers — from which a machine learning model learns. The quality of the training data determines what the model learns, what it fails to learn, and what it learns incorrectly. The chapter updates the classic computing maxim to: "extremely large amounts of garbage in, extremely fluent garbage out."
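To make "labeled examples" concrete, here is a toy sketch of learning from training data. The messages, labels, and word-matching rule are all invented for illustration; real models learn far subtler patterns from far larger (and far messier) datasets:

```python
# Toy training data: (message, label) pairs. The labels are the
# "correct answers" the model learns from.
training_data = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting moved to 3pm", "ham"),
    ("lunch tomorrow?", "ham"),
]

# "Learning" here is just noting which words appear only in spam.
spam_words = {w for text, label in training_data if label == "spam" for w in text.split()}
ham_words = {w for text, label in training_data if label == "ham" for w in text.split()}
spam_only = spam_words - ham_words

def classify(message: str) -> str:
    """Label a message 'spam' if it contains any spam-only training word."""
    return "spam" if set(message.split()) & spam_only else "ham"

print(classify("free prize inside"))  # spam
print(classify("meeting at noon"))    # ham
```

Note what the sketch also demonstrates about garbage in, garbage out: if "meeting" had been mislabeled as spam in the training data, the classifier would faithfully learn that mistake.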
Concept Tested: AI Training Data
3. What does a large language model fundamentally do?
- A. Understands human language and reasons about the world
- B. Predicts the next word in a sequence based on statistical patterns
- C. Stores a complete copy of the internet and retrieves relevant passages
- D. Simulates human consciousness using quantum-encrypted neural pathways
Show Answer
The correct answer is B. An LLM predicts the next word in a sequence — that is fundamentally all it does. Given "The cat sat on the," the model predicts "mat." The remarkable thing is that this simple mechanism, repeated billions of times, produces systems capable of writing essays, generating code, and conducting conversations. The mechanism is simple. The output is impressive. The understanding is absent.
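The mechanism can be sketched in miniature. The toy below counts which word follows which in a tiny corpus and predicts the most frequent successor; real LLMs use neural networks over trillions of words rather than a lookup table, but "predict the next word from statistical patterns" is the same core idea:

```python
from collections import Counter, defaultdict

# A toy corpus. Real models train on a large fraction of the internet.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("cat"))  # sat
print(predict_next("on"))   # the
```

The table "knows" that "sat" follows "cat" without understanding cats, sitting, or anything else — which is the chapter's point about fluency without understanding.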
Concept Tested: Large Language Model
4. AI hallucination is best described as which of the following?
- A. A rare malfunction caused by overheating in the data center
- B. The production of confident, fluent text that is factually incorrect or fabricated
- C. A creative feature designed to generate novel ideas
- D. An error that occurs only when the model encounters questions about unicorns
Show Answer
The correct answer is B. AI hallucination occurs when a model generates text that is factually incorrect, internally inconsistent, or entirely fabricated — while presenting it with the same confidence as accurate information. The chapter emphasizes that hallucination is not a malfunction: the model "was built to produce fluent text, not accurate text. That these two goals are sometimes in conflict is the central design problem of modern AI."
Concept Tested: AI Hallucination
5. The "Trough of Disillusionment" in the Gartner Hype Cycle is the phase where which of the following occurs?
- A. Venture capital funding reaches its peak
- B. The technology is declared revolutionary by journalists for the first time
- C. Reality reasserts itself and failed implementations are acknowledged
- D. The technology achieves mainstream adoption and becomes boring
Show Answer
The correct answer is C. The trough is the inevitable correction after the peak. Experiments and implementations fail to deliver. The technology is declared "dead" by the same journalists who declared it "revolutionary" eighteen months earlier. The trough is painful but necessary — it is where the technology begins to be evaluated on its merits rather than its mythology.
Concept Tested: Hype Cycle
6. According to the chapter, neural networks are "inspired by the brain." How accurate is this comparison?
- A. Extremely accurate — neural networks replicate brain function precisely
- B. The comparison flatters the network, since the brain runs on 20 watts and can count the R's in "strawberry"
- C. The comparison flatters the brain, since neural networks are more efficient
- D. The comparison is equally unflattering to both parties
Show Answer
The correct answer is B. A human brain contains 86 billion neurons connected by 100 trillion synapses and runs on 20 watts. A large language model contains far fewer parameters, requires a data center, and "still cannot reliably count the number of R's in 'strawberry.'" The naming convention is generous at best, as the chapter's Sparkle observes with characteristic understatement.
Concept Tested: Neural Network
7. Which type of machine learning learns by trial and error with rewards?
- A. Supervised learning
- B. Unsupervised learning
- C. Reinforcement learning
- D. Self-supervised learning
Show Answer
The correct answer is C. Reinforcement learning learns by trial and error with rewards — the approach used for game-playing AI, among other applications. Supervised learning uses labeled examples (spam detection). Unsupervised learning finds patterns in unlabeled data (customer segmentation). Self-supervised learning creates its own labels from raw data (language model pre-training).
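The trial-and-error-with-rewards loop can be sketched with the classic two-armed bandit. The payout probabilities, exploration rate, and step count below are all made up for the sketch; the point is that the agent is never told which arm is better — it discovers this purely from rewards:

```python
import random

random.seed(0)

# Two slot machines ("arms"); arm 1 secretly pays off more often.
payout = {0: 0.2, 1: 0.8}

values = {0: 0.0, 1: 0.0}  # the agent's estimated value of each arm
counts = {0: 0, 1: 0}

for step in range(1000):
    # Explore a random arm 10% of the time; otherwise exploit the
    # best-known arm.
    if random.random() < 0.1:
        action = random.choice([0, 1])
    else:
        action = max(values, key=values.get)
    reward = 1 if random.random() < payout[action] else 0
    counts[action] += 1
    # Update the running-mean estimate for the arm just pulled.
    values[action] += (reward - values[action]) / counts[action]

best = max(values, key=values.get)
print(best)  # 1 — learned from rewards alone, with no labeled examples
```

Contrast with the other paradigms: supervised learning would be handed (arm, "good"/"bad") labels up front; here the agent must earn that knowledge one pull at a time.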
Concept Tested: Machine Learning
8. The chapter describes AI hype culture as a self-reinforcing ecosystem. Which of the following is NOT identified as a component?
- A. Tech press releases that announce breakthroughs with the frequency of horoscopes
- B. Corporate strategies consisting of adding "powered by AI" to existing products
- C. Peer-reviewed studies that independently verify all claims before publication
- D. Social media discourse oscillating between "AI will solve everything" and "AI will destroy everything"
Show Answer
The correct answer is C. Independent peer-reviewed verification is conspicuously absent from the AI hype ecosystem. The ecosystem consists of press releases, investor presentations, conference keynotes, social media discourse, and corporate AI strategies — each participant with incentives to maintain enthusiasm regardless of whether the technology justifies it. The cycle feeds itself, and nobody benefits from slowing it down.
Concept Tested: AI Hype Culture
9. The chapter title references Hans Christian Andersen's "The Emperor's New Clothes." What is the parallel to AI?
- A. AI does not exist, like the emperor's clothes
- B. AI works but the claims about it are the invisible garments — visible only to those who believe hard enough
- C. AI researchers are swindlers who have deceived the public
- D. The child who says the emperor is naked will receive Series A funding
Show Answer
The correct answer is B. Unlike the emperor's clothes, AI genuinely works and represents a significant technological advance. But the claims made about AI, the valuations assigned to AI companies, and the expectations set by the hype cycle are the invisible garments. The engineer who says "this model hallucinates 15% of the time" at a board meeting is the child. Both are stating observable facts. Both are socially inconvenient.
Concept Tested: AI Capabilities
10. According to the chapter, what is the most important gap in modern technology?
- A. The gap between iPhone models
- B. The gap between what AI can do and what people believe AI can do
- C. The gap between the server room temperature and ambient room temperature
- D. The gap between a chatbot's response time and a human's patience
Show Answer
The correct answer is B. The gap between AI capabilities and AI limitations is "the unicorn's horn — the feature that makes the ordinary extraordinary, the feature that may or may not be real, depending on the lighting and the investor." Organizations that automate based on the capability list without consulting the limitation list, the chapter warns, "tend to meet the kraken."
Concept Tested: AI Limitations