Quiz: The Age of AI and Technology Power (2010–Present)
Test your understanding of the semiconductor supply chain, U.S.-China chip wars, AI-enabled disinformation, state-sponsored cyber warfare, drone warfare in Ukraine, autonomous weapons, and the historical patterns of technology power shifts with these review questions.
1. TSMC (Taiwan Semiconductor Manufacturing Company) is effectively the sole manufacturer of advanced semiconductors below 5 nanometers, located roughly 100 miles from mainland China. Applying historical comparison to the 1973 OPEC oil embargo, which structural similarity makes this concentration a comparable geopolitical risk?
- Both TSMC's location and OPEC's membership are controlled by nations that are ideologically hostile to the United States — the geopolitical risk is primarily driven by anti-American sentiment rather than by supply concentration itself
- Both represent extreme concentration of a strategically essential commodity in a geographically and politically vulnerable location — just as the 1973 embargo demonstrated how oil concentration could produce economic chaos in importing nations, Taiwan's vulnerability means a disruption of TSMC's operations would cascade through the entire global technology economy that depends on advanced chips
- Both cases involve cartels — OPEC is an oil producer cartel and TSMC operates as a de facto semiconductor cartel — and both could be addressed by international antitrust enforcement targeting supplier coordination
- The comparison is inapt — oil is a fungible commodity that can be sourced from many suppliers, while advanced semiconductors require unique expertise that cannot be replicated quickly, making the semiconductor risk categorically more severe and historically unprecedented
Show Answer
The correct answer is B. The 1973 OPEC oil embargo demonstrated how concentration of a strategic commodity in a geographically and politically specific location could be weaponized to produce economic disruption in importing nations. The structural parallel to TSMC is precise: the United States and the global technology economy depend on TSMC's advanced chips the way 1970s America depended on imported oil — with limited near-term alternatives. A Chinese invasion or blockade of Taiwan would cut off advanced chip supply simultaneously for Apple, NVIDIA, AMD, and every other company that relies on TSMC manufacturing. The post-1973 political response — domestic energy production investment, energy efficiency standards, diversification of supply — offers a rough template for the policy response to semiconductor concentration risk (the CHIPS Act's domestic fabrication investment being the closest analogue). The historical comparison provides both analytical framework and policy precedent.
Concept Tested: TSMC / Semiconductor Geopolitics / Historical Comparison
2. Machine learning systems learn statistical patterns from training data rather than following explicitly programmed rules. Applying critical thinking to evaluate AI-generated claims, which property of machine learning creates the most significant challenge for verifying AI outputs?
- Machine learning systems are computationally expensive to run, making it impractical to verify their outputs by running the same query multiple times to check for consistency
- Machine learning systems can generate plausible-sounding, grammatically correct text that is factually incorrect — because they optimize for statistical plausibility rather than factual accuracy, they "hallucinate" confident-sounding false claims that appear indistinguishable from accurate ones
- Machine learning systems are inherently biased toward the political positions most prevalent in their training data, making their outputs systematically unreliable for any politically contested claim
- Machine learning systems cannot explain their reasoning — they produce outputs but cannot provide the chain of logic that produced them — making it impossible to evaluate whether the reasoning process was valid even when the output appears correct
Show Answer
The correct answer is B. The property of machine learning most directly relevant to information verification is "hallucination" — the tendency to generate plausible, confident-sounding text that is factually incorrect. Because LLMs optimize for statistical plausibility (generating text that resembles their training data) rather than factual accuracy, they produce false claims with the same confident, fluent prose as accurate ones. A user cannot distinguish an accurate LLM claim from a hallucinated one by the quality of the prose, the specificity of the claim, or the confidence of the presentation — all of these features can appear in both accurate and false outputs. This makes the verification practices developed throughout this textbook — lateral reading, source triangulation, checking specific verifiable claims — more essential rather than less when encountering AI-generated content: the surface quality of the output provides no reliable signal about its accuracy.
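The mechanism behind hallucination can be made concrete with a deliberately tiny sketch. The toy model below is entirely invented for illustration (a handful of hand-written contexts and probabilities; real LLMs learn distributions over tens of thousands of tokens with deep networks), but it shows the essential point: each next word is sampled according to how often it followed the preceding words in the "training data," so if that data paired "Australia" with "Sydney" more often than with "Canberra," the model fluently asserts the wrong capital, and no step of generation ever consults a fact.

```python
import random

# Toy next-token model: probabilities reflect how often word sequences
# co-occurred in (hypothetical) training text, NOT whether claims are true.
# All contexts and weights here are invented for illustration.
toy_model = {
    ("The", "capital"): {"of": 1.0},
    ("capital", "of"): {"Australia": 1.0},
    ("of", "Australia"): {"is": 1.0},
    ("Australia", "is"): {"Sydney.": 0.7, "Canberra.": 0.3},  # false claim is likelier
}

def generate(tokens, steps=4, seed=0):
    """Extend `tokens` by repeatedly sampling a statistically plausible next word."""
    rng = random.Random(seed)
    tokens = list(tokens)
    for _ in range(steps):
        dist = toy_model.get(tuple(tokens[-2:]))
        if dist is None:
            break
        words, weights = zip(*dist.items())
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

# Fluent, confident, and (70% of the time) factually wrong.
print(generate(["The", "capital"]))
```

The false sentence and the true one are produced by exactly the same sampling procedure; only the weights differ, which is why fluency and confidence carry no signal about accuracy.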
Concept Tested: Machine Learning / AI Hallucination / Critical Thinking
3. The U.S. export controls on AI chips (October 2022) prohibited the sale of advanced NVIDIA GPUs to China. Applying second-order thinking, which consequence of this policy was most predictable from historical analogies to technology embargoes?
- China immediately halted all AI development, demonstrating that export controls on critical inputs are the most effective tool for maintaining technological advantage over strategic competitors
- Export controls accelerated China's investment in domestic semiconductor development — the embargo that was intended to prevent China from accessing advanced AI capabilities created the political urgency and economic incentive to develop them domestically, potentially producing a more capable Chinese chip industry than would have existed without the controls
- The export controls produced no significant effect on China's AI development because Chinese companies had already stockpiled sufficient NVIDIA chips before the controls took effect
- Allied nations immediately adopted equivalent export controls, creating a unified Western technology embargo that was far more effective than unilateral U.S. controls would have been
Show Answer
The correct answer is B. Second-order thinking asks: what happens after the first-order effect? The first-order effect of export controls: China cannot legally purchase advanced NVIDIA chips. The intended outcome: Chinese AI development slows for lack of compute. The second-order effect (predictable from historical technology embargoes): export controls create powerful incentives for the targeted nation to develop domestic alternatives, since dependence on the embargoing nation's technology is now revealed as a strategic vulnerability. The Soviet Union's development of nuclear weapons was accelerated by the perception of American nuclear monopoly; prewar U.S. embargoes on oil and scrap metal drove Japan's push for autarkic industrial development; China's response to TSMC concentration risk has been multi-billion-dollar investment in SMIC and domestic chip development. Huawei's 2023 release of a 7nm domestically manufactured chip demonstrated that the second-order effect was already producing results. Export controls impose real costs but also create the urgency that drives the adaptation they were intended to prevent.
Concept Tested: U.S. Export Controls / Second-Order Thinking / Technology Policy
4. The Stuxnet cyberattack (discovered 2010) destroyed approximately 1,000 Iranian uranium centrifuges while reporting normal operation to monitoring systems. Which assessment BEST describes what Stuxnet demonstrated about the nature of cyber warfare?
- Stuxnet demonstrated that cyber weapons are primarily espionage tools — its primary value was the intelligence it gathered about Iranian nuclear operations rather than the physical damage it caused
- Stuxnet demonstrated that cyber weapons could achieve physical effects previously requiring military strikes, while operating covertly and with plausible deniability — blurring the distinction between peacetime espionage and acts of war in ways international law had not anticipated
- Stuxnet demonstrated that only state actors with enormous resources can develop effective cyber weapons — the sophistication required ensures that cyber warfare remains exclusively within the capability of advanced industrial nations
- Stuxnet was primarily significant as a deterrent — its demonstration that Western nations could destroy Iranian nuclear infrastructure without military strikes persuaded Iran to negotiate the JCPOA nuclear agreement
Show Answer
The correct answer is B. Stuxnet represented a genuine military-technological threshold: the first documented cyberweapon that caused physical destruction. Previous cyber operations had stolen information, disrupted services, or damaged software — Stuxnet physically destroyed hardware (centrifuges) by causing them to operate outside their design parameters while spoofing their monitoring systems. This blurred the distinction between espionage (traditionally tolerated under international norms) and acts of war (which can justify armed response). Whether Iran would have been entitled to retaliate militarily against the United States and Israel, which were widely understood to be responsible but never acknowledged the operation, was a question international law could not clearly answer. Stuxnet also demonstrated that cyber weapons can escape their intended targets: it eventually spread globally, infecting systems far beyond the Iranian nuclear facility. This "weapons proliferation" problem is inherent to software — unlike physical weapons, code can be copied and repurposed.
Concept Tested: Stuxnet / Cyber Warfare / International Law
5. The Russia-Ukraine war has demonstrated that inexpensive commercial FPV drones (costing $500–$3,000) can substitute for expensive artillery in many tactical roles. Applying the "Work, Exchange, and Technology" lens, which historical pattern does this illustrate?
- Cheap consumer technology consistently fails in military applications — the drone's success in Ukraine is an anomaly that reflects the specific terrain and doctrinal failures of the Russian military rather than a generalizable pattern
- Technological innovation consistently produces cheaper substitutes for expensive military systems — just as rifled muskets made Napoleonic-era cavalry charges obsolete, and aircraft carriers displaced battleships, cheap accurate drones are disrupting the economics of land warfare in ways that will force military doctrine and procurement to adapt
- The primary driver of drone success in Ukraine is the Starlink satellite communication network rather than the drones themselves — the technology combination is unique to this conflict and cannot be generalized
- The democratization of drone technology has primarily benefited non-state actors (terrorist groups, criminal organizations) more than conventional militaries — Ukraine's experience is exceptional because it is fighting a conventional military that has not adapted to cheap drone threats
Show Answer
The correct answer is B. The "Work, Exchange, and Technology" lens reveals that military technology economics follow the same disruption pattern as civilian technology: incumbent expensive systems face competition from cheaper alternatives that achieve similar or superior results for specific applications. The rifled musket made expensive trained cavalry charges suicidal against infantry; aircraft carriers displaced battleships as the dominant naval capital ship; precision-guided munitions made unguided bombing far less effective for the cost. FPV drones costing $500–$3,000 are performing artillery-equivalent roles in Ukraine — finding, targeting, and destroying vehicles and personnel at tactical ranges. The units that can deploy thousands of cheap drones are achieving effects that previously required expensive artillery systems and trained crews. Military establishments optimized around expensive platforms (tanks, artillery, fighter aircraft) face the disruptive challenge of adapting to a world in which cheap, abundant, increasingly autonomous systems can perform many of their functions at a fraction of the cost.
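The economic logic of the disruption can be sketched in a few lines of arithmetic. Every figure below is a rough, illustrative assumption (only the $500–$3,000 drone range comes from the question itself); the point is the order of magnitude, not the specific numbers.

```python
# Back-of-the-envelope cost-exchange sketch. All figures are rough,
# illustrative assumptions, not sourced procurement data.
fpv_drone = 1_500        # mid-range of the $500-$3,000 drones discussed above
howitzer = 4_000_000     # assumed acquisition cost of one modern howitzer
tank = 3_500_000         # assumed cost of one main battle tank

drones_for_one_howitzer = howitzer // fpv_drone
print(f"One howitzer's budget fields ~{drones_for_one_howitzer:,} FPV drones")

# Even if only 1 drone in 5 reaches its target, the cost per destroyed
# tank stays tiny relative to the tank itself.
hit_rate = 0.2
cost_per_kill = fpv_drone / hit_rate
print(f"Cost per tank kill at {hit_rate:.0%} hit rate: ${cost_per_kill:,.0f}")
print(f"Cost-exchange ratio vs a tank: {tank / cost_per_kill:,.0f}:1")
```

Under these assumed numbers, one artillery system's budget buys thousands of drones, and the attacker's cost per destroyed vehicle is hundreds of times below the defender's loss, which is the asymmetry that forces doctrine and procurement to adapt.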
Concept Tested: Drone Warfare / Work Exchange and Technology / Military Disruption
6. Autonomous weapons systems (AWS) that select and engage targets without human intervention raise the question of whether machines should be permitted to make the decision to kill. Applying ethical reasoning from the historical record, which argument AGAINST full autonomy has the strongest grounding in democratic and legal principles?
- Autonomous weapons are unreliable — current AI systems make too many errors in target identification to be trusted with lethal decisions, and the argument against them is primarily technical rather than principled
- Democratic accountability requires that lethal force be traceable to human decision-makers who can be held responsible — fully autonomous weapons systems create an accountability gap in which no human is responsible for a killing, making legal and moral accountability for war crimes structurally impossible
- Autonomous weapons violate international humanitarian law's prohibition on weapons that cause unnecessary suffering — the speed of autonomous targeting decisions will inherently produce disproportionate casualties compared to human-paced targeting
- Military effectiveness is maximized when humans retain targeting authority — autonomous systems make worse tactical decisions than trained human soldiers because they cannot exercise the contextual judgment that distinguishes combatants from civilians in complex environments
Show Answer
The correct answer is B. The strongest principled argument against fully autonomous lethal weapons is the accountability gap they create. Democratic and legal accountability for the use of lethal force depends on being able to identify the human decision-maker responsible for a specific killing and hold them accountable — through courts martial, international criminal law, or democratic political accountability. A fully autonomous system that selects and kills a target without human approval creates a situation in which no human made the specific lethal decision: the programmer wrote general code, the commander deployed the system, but neither made the decision to kill this specific person at this specific moment. International humanitarian law's requirement that combatants distinguish between combatants and civilians, and be accountable for failures to do so, presupposes human decision-makers. Fully autonomous weapons make those requirements structurally unenforced — and potentially unenforceable.
Concept Tested: Autonomous Weapons / Human-in-the-Loop / Democratic Accountability
7. AI-generated disinformation — false content produced by LLMs at scale, tailored to specific audiences — threatens what this chapter calls "the epistemic commons." Applying the misinformation detection framework, which institutional response is MOST necessary alongside individual critical thinking skills?
- Government regulation prohibiting AI-generated content would solve the epistemic commons problem — if platforms are required to remove AI-generated disinformation, individual critical thinking skills become unnecessary
- Individual critical thinking (lateral reading, source triangulation, propaganda analysis) is necessary but not sufficient — institutional responses (platform accountability for AI-generated content, disclosure requirements, investment in investigative journalism, information literacy education) are required to address the scale at which AI enables disinformation production
- The solution is technological — AI detection tools that identify AI-generated content will solve the epistemic commons problem because users will be automatically warned about suspicious content
- The epistemic commons problem existed before AI through traditional propaganda and is not qualitatively different in the AI era — the same individual skills that addressed traditional propaganda are sufficient for AI-generated disinformation
Show Answer
The correct answer is B. The individual/institutional complementarity is a recurring theme in democratic theory: individual virtues and skills are necessary but not sufficient for democratic functioning; institutional structures that make individual virtues possible and effective are also required. Individual critical thinking (lateral reading, source triangulation) developed throughout this textbook remains essential — but AI enables disinformation production at such scale (personalized content for millions of users simultaneously, in any language, at negligible cost) that individual verification cannot keep pace. Institutional responses address the structural problem at the production and distribution level: platform accountability creates incentives for platforms to reduce disinformation rather than amplify it; disclosure requirements allow users to apply appropriate skepticism to AI-generated content; investigative journalism provides verified information that can anchor the information environment. Neither individual skills nor institutional structures alone are sufficient; both are necessary.
Concept Tested: AI Disinformation / Epistemic Commons / Individual and Institutional Responses
8. The SolarWinds attack (2020) inserted malicious code into a software update used by 33,000 organizations. Applying systems thinking to supply chain fragility, which second-order vulnerability does this attack illustrate that individual security practices cannot address?
- Individual organizations' failure to audit their software vendors created the vulnerability — better procurement standards would have detected the malicious update before deployment
- Software supply chain attacks exploit the trust inherent in vendor relationships that individual organizations cannot verify — when trusted software is the attack vector, the organizations that carefully follow security best practices (installing vendor updates promptly) are precisely the ones most exposed, creating a structural vulnerability that individual practices cannot resolve
- The SolarWinds attack succeeded because CISA (the Cybersecurity and Infrastructure Security Agency) failed to monitor federal network traffic — the vulnerability was a government oversight failure rather than a structural supply chain problem
- Organizations that used SolarWinds' Orion software had accepted unnecessary risk by using a third-party network management tool rather than developing internal alternatives — supply chain risk can be eliminated by reducing third-party software dependencies
Show Answer
The correct answer is B. The SolarWinds attack reveals a structural vulnerability that emerges from the trust mechanisms modern software infrastructure depends on. Software updates are security-critical: organizations are taught to install them promptly because they contain security patches. The SolarWinds attackers exploited this trust by inserting malicious code into the update itself — so the organizations that followed best security practices by promptly installing updates were the ones that deployed the attack. This is a second-order effect of the software supply chain's structure: security at the individual organization level (careful update practices, network monitoring) presupposes the integrity of the supply chain itself. Once the supply chain is compromised, individual security practices are ineffective or even counterproductive. Addressing this structural vulnerability requires solutions at the supply chain level — auditing software build processes, securing code signing, monitoring supplier security — rather than at the individual organization level.
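The structural point, that customer-side verification presupposes supply-chain integrity, can be shown in a few lines. The sketch below is hypothetical (toy update bytes and a plain SHA-256 check, not SolarWinds' actual code-signing flow): the customer dutifully verifies the download against the vendor-published hash, but because the build pipeline was compromised before the hash was produced, the trojanized update verifies perfectly.

```python
import hashlib

def verify_update(update_bytes: bytes, vendor_published_sha256: str) -> bool:
    """Standard customer-side practice: check the downloaded update
    against the hash the vendor publishes."""
    return hashlib.sha256(update_bytes).hexdigest() == vendor_published_sha256

# Hypothetical build-pipeline compromise: malicious code is injected
# BEFORE the vendor computes the hash and signs the release.
trojanized_update = b"legit network-monitoring code\n" + b"# injected backdoor"
vendor_hash = hashlib.sha256(trojanized_update).hexdigest()  # vendor hashes what it built

# The diligent customer's check passes: the compromise is upstream of
# everything the customer can verify.
print(verify_update(trojanized_update, vendor_hash))  # True

# The check only catches tampering AFTER the vendor published the hash,
# e.g. modification in transit.
print(verify_update(trojanized_update + b"tampered-in-transit", vendor_hash))  # False
```

Defenses therefore have to move upstream of the customer: reproducible builds, audited build pipelines, and software bills of materials, none of which an individual organization can implement alone.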
Concept Tested: SolarWinds Attack / Supply Chain Fragility / Systems Thinking
9. Eisenhower's 1961 warning about the military-industrial complex (Chapter 16) is being transformed by AI, with technology companies like Google and Microsoft entering defense contracting. Applying historical comparison, which feature of the AI-era military-industrial complex represents a genuine change from the Cold War version?¶
- The AI-era military-industrial complex is less concerning than the Cold War version — technology companies are smaller and less politically connected than the Cold War defense contractors, so the reinforcing feedback loop Eisenhower warned about is weaker
- The Cold War military-industrial complex involved companies whose entire business model was defense contracting; AI-era technology companies have both commercial products and large non-defense workforces whose values may conflict with military applications — creating internal employee opposition to military contracts that was absent in the Cold War defense industry
- The AI-era military-industrial complex is more geographically concentrated in Silicon Valley than the Cold War version was — creating a regional economic dependency that gives technology companies disproportionate political influence in California congressional districts
- AI companies, unlike Cold War defense contractors, are primarily non-unionized — the absence of labor organizing in Silicon Valley means the military-industrial complex feedback loop Eisenhower identified cannot develop in the AI sector
Show Answer
The correct answer is B. Historical comparison reveals what is genuinely new about the AI-era military-industrial dynamic. Cold War defense contractors (Lockheed, Boeing, Raytheon) were purpose-built defense companies whose workers generally understood and accepted that their employer served military purposes. The reinforcing feedback loop Eisenhower identified — defense contractors, military services, and Congress members with aligned interests in perpetually increasing defense budgets — operated without significant internal opposition. AI-era technology companies have dual-use products (cloud computing, AI services, software) and employ engineers who entered the technology sector for civilian purposes and who may hold values in conflict with weapons applications. Google employees' 2018 protest of Project Maven (AI for drone imagery analysis) led Google to decline to renew the contract, an unprecedented development in which employee opposition constrained a defense relationship. This internal tension is genuinely new and changes the dynamics of defense contracting in ways the Cold War model did not include.
Concept Tested: Military-Industrial Complex / Historical Comparison / AI Era
10. The "Work, Exchange, and Technology" capstone synthesis identifies a consistent historical pattern: major technologies create new winners and losers, concentrate power, disrupt labor, and eventually generate political responses. Applying this pattern to AI and evaluating its predictive power, which assessment demonstrates the HIGHEST level of historical thinking?
- The pattern is completely reliable — because every previous technology followed it, AI will necessarily follow it, and we can confidently predict that AI will produce Progressive Era-style regulation within 20 years of its mainstream deployment
- The pattern is analytically useful but requires careful evaluation of what is structurally similar and what is genuinely different — AI's unprecedented speed, its application to cognitive rather than physical labor, its geopolitical dimension, and its concentration in a tiny number of companies may produce political responses with different forms and timelines than previous technology transitions
- The historical pattern is not useful for understanding AI because AI is so fundamentally different from previous technologies that no historical analogy can capture its significance — AI is genuinely unprecedented and requires entirely new analytical frameworks
- The historical pattern is useful primarily for understanding why AI will not produce political responses — because previous technology transitions show that political responses came too late to prevent inequality, the same will be true of AI, and historical analysis confirms that democratic institutions cannot regulate transformative technologies effectively
Show Answer
The correct answer is B. The highest level of historical thinking is evaluating the analytical power and limits of historical patterns — neither assuming they apply mechanically nor dismissing them as irrelevant. The technology-disruption-response pattern provides genuine analytical purchase on AI: it predicts labor disruption, power concentration, and eventual political response, and these predictions appear well-grounded in AI's early trajectory. But the pattern also has limits for AI: previous technology transitions occurred in national economies that could regulate domestically; AI development is a primary geopolitical contest that limits what any single nation can do unilaterally. Previous disruptions affected specific industries; AI affects cognitive work across virtually all industries simultaneously, compressing the disruption timeline. The concentration of AI capability in a tiny number of companies (three cloud providers, two dominant model developers) is more extreme than railroad or oil trust concentration. Evaluating what historical patterns illuminate and where they must be supplemented by analysis of genuinely new features is precisely the kind of critical, evaluative historical thinking this textbook has aimed to develop.
Concept Tested: Work Exchange and Technology / Historical Pattern Evaluation / Synthesis