References: AI and Information Security

  1. Adversarial machine learning - Wikipedia - Comprehensive overview of adversarial attacks against ML systems including evasion, poisoning, and model extraction. Anchors the chapter's threat-model framing.

  2. Prompt injection - Wikipedia - Detailed coverage of prompt injection attacks against LLMs, including direct and indirect variants. Foundation for the chapter's LLM-security content.

  3. Data poisoning - Wikipedia - Coverage of training-data poisoning attacks, defenses, and supply-chain implications. Supports the chapter's training-time-attack section.

  4. Adversarial Machine Learning - Anthony D. Joseph et al. - Cambridge University Press - Comprehensive academic reference on adversarial ML threats and defenses; the most thorough single source for the chapter's threat-modeling content.

  5. Not With a Bug, But With a Sticker - Ram Shankar Siva Kumar and Hyrum Anderson - Wiley - Practitioner reference on real AI security incidents, MITRE ATLAS, and defensive playbooks; an excellent narrative complement to academic adversarial-ML literature.

  6. OWASP Top 10 for LLM Applications - OWASP - The canonical list of LLM application security risks, central to this chapter's threat-modeling content. Updated regularly as the field evolves.

  7. MITRE ATLAS - MITRE - The Adversarial Threat Landscape for Artificial-Intelligence Systems, with case studies and tactics analogous to MITRE ATT&CK. Directly cited in this chapter's threat-modeling exercises.

  8. NIST AI 100-2 Adversarial Machine Learning - NIST - Authoritative US government taxonomy of attacks on AI systems and their corresponding mitigations. Required reading for the chapter's evaluate-level outcomes.

  9. Google Secure AI Framework (SAIF) - Google - Major-vendor framework for securing AI systems across the development lifecycle. Useful for the chapter's secure-AI-SDLC content.

  10. LLM Red Teaming Best Practices - Microsoft Learn - Practical guide to red-teaming generative AI systems, including methodology, scope, and reporting. Reinforces the chapter's red-teaming apply-level outcomes.