References: Responsible and Ethical Use of AI
- Algorithmic bias - Wikipedia - Comprehensive overview of bias types, real-world incidents, and mitigation approaches. Foundation for the chapter's fairness content.
- Explainable artificial intelligence - Wikipedia - Detailed treatment of XAI techniques (SHAP, LIME, counterfactuals) and their role in transparency. Anchors the chapter's explainability section.
- AI safety - Wikipedia - Coverage of AI safety as a field, including alignment, robustness, and oversight. Supports the chapter's broader responsible-AI framing.
- The Alignment Problem - Brian Christian - W. W. Norton - Accessible book-length treatment of how AI systems learn the wrong objectives and how alignment research addresses the problem; excellent companion reading for the chapter.
- Weapons of Math Destruction - Cathy O'Neil - Crown - Influential examination of high-stakes algorithmic decision-making in lending, hiring, education, and criminal justice; required reading for the chapter's fairness content.
- NIST AI Risk Management Framework 1.0 - NIST - Authoritative source for the AI RMF that this chapter teaches, including the Govern-Map-Measure-Manage functions.
- ISO/IEC 42001:2023 AI Management Systems - ISO - Official source for the first ISO standard for AI governance, referenced throughout this chapter.
- Microsoft Responsible AI Principles - Microsoft - Major-vendor framework for responsible AI covering fairness, reliability, privacy, inclusion, transparency, and accountability. Useful contrast to the NIST framing.
- Google AI Principles - Google - Counterpart vendor framework with concrete commitments and review processes. Useful for the chapter's vendor-evaluation outcomes.
- Partnership on AI - Multi-stakeholder organization producing case studies and guidance on responsible AI deployment. Strong source for AI-incident analysis assignments.