References: Responsible and Ethical Use of AI

  1. Algorithmic bias - Wikipedia - Comprehensive overview of bias types, real-world incidents, and mitigation approaches. Foundation for the chapter's fairness content.

  2. Explainable artificial intelligence - Wikipedia - Detailed treatment of XAI techniques (SHAP, LIME, counterfactuals) and their role in transparency. Anchors the chapter's explainability section.

  3. AI safety - Wikipedia - Coverage of AI safety as a field, including alignment, robustness, and oversight. Supports the chapter's broader responsible-AI framing.

  4. The Alignment Problem - Brian Christian - W. W. Norton - Accessible book-length treatment of how AI systems learn the wrong objectives and how alignment research addresses it; excellent companion reading for the chapter.

  5. Weapons of Math Destruction - Cathy O'Neil - Crown - Influential examination of high-stakes algorithmic decision-making in lending, hiring, education, and criminal justice; essential companion reading for the chapter's fairness content.

  6. NIST AI Risk Management Framework 1.0 - NIST - Authoritative source for the AI RMF that this chapter teaches, including the Govern-Map-Measure-Manage functions.

  7. ISO/IEC 42001:2023 AI Management Systems - ISO - Official source for the AI management system standard referenced throughout this chapter. The first ISO standard for AI governance.

  8. Microsoft Responsible AI Principles - Microsoft - Major-vendor framework for responsible AI including fairness, reliability, privacy, inclusion, transparency, and accountability. Useful contrast to NIST framing.

  9. Google AI Principles - Google - Counterpart vendor framework with concrete commitments and review processes. Useful for the chapter's vendor-evaluation outcomes.

  10. Partnership on AI - Partnership on AI - Multi-stakeholder organization producing case studies and guidance on responsible AI deployment. Strong source for AI-incident analysis assignments.