Answer Cards
Comprehensive, evidence-based answers to critical questions about AI security, governance, and compliance for enterprise organizations.
AI Assurance: Evidence, Controls, and Reviews
AI assurance demonstrates that AI systems meet defined objectives and manage risks. It relies on evidence: policies, risk decisions, test results, deployment approvals, monitoring, and incident/CAPA records. ISO 42001 provides management-system requirements; NIST AI RMF informs risk framing and measures.
A Practical AI Governance Framework for SaaS
AI governance aligns roles, risk, controls, and assurance for systems using ML/LLMs. A practical framework uses one policy backbone, clear accountability, risk taxonomy, change gates, human oversight, logging, incident handling, and continual improvement. It should map to ISO 42001 and be informed by NIST AI RMF.
AI Incident Response: The First Hour
When AI behavior causes harm or near-miss, treat it as an incident. Stabilize the system, preserve evidence, classify severity, notify stakeholders, and start corrective actions. Capture prompts, retrieved context, model/version, tool calls, approvals, and logs. Align with your AIMS and security IR processes.
AI Security: Threats, Controls, and Evidence
AI security addresses threats like prompt injection, data exfiltration, model misuse, supply-chain risks, and unsafe tool calls. Controls include input handling, retrieval hardening, capability gating, authZ, output validation, monitoring, incident response, and secure change management. Governance (ISO 42001) ensures these are designed, operated, and reviewed.
EU AI Act: Provider vs Deployer Obligations
The EU AI Act distinguishes between providers (who develop, import, or substantially modify AI systems) and deployers (who use AI systems for their intended purpose). Providers bear primary responsibility for compliance, risk assessment, documentation, and CE marking. Deployers must ensure appropriate use, human oversight, and impact assessments for high-risk systems. Understanding your role determines your obligations under the regulation.
ISO 42001: The AI Management System (AIMS) Standard
ISO 42001 defines requirements for an AI Management System (AIMS) that governs AI systems across their lifecycle. It focuses on policy, roles, risk management, lifecycle controls, monitoring, and continual improvement. It complements ISO 27001 (information security) by adding AI-specific governance and assurance. Organizations scope AI systems, assign accountable roles, manage risks (e.g., prompt injection, misuse, data lineage), implement controls (testing, logging, oversight), and review performance with corrective actions.
Implementing ISO 42001 in 90 Days
A pragmatic 90-day path stands up core AIMS scaffolding: scope, policy & roles, risk register, controls & testing, and assurance loop. Day-to-day, reuse your existing management-system backbone (from ISO 27001 if present) and bolt on AI-specific risk taxonomy, change gates, evidence capture, and review cadence. Download the detailed [90-day checklist CSV](/checklists/iso42001-90-day-plan.csv) for actionable tasks with owners and success criteria.
ISO 42001 vs ISO 27001: What's the Difference?
ISO 27001 focuses on information security management systems (ISMS), while ISO 42001 specifically addresses AI management systems (AIMS). ISO 42001 complements ISO 27001 by adding AI-specific governance, risk taxonomy, and lifecycle controls. Organizations typically implement both: ISO 27001 for foundational security controls and ISO 42001 for AI-specific governance. The standards share management system structure but differ in scope, risk considerations, and control objectives.
LLM Security: Patterns and Pitfalls
LLM applications fail when instructions are not isolated, context is unsanitized, tools are over-privileged, or outputs are trusted blindly. Use instruction isolation, input/output filters, retrieval hardening, tool allow-lists with least privilege, and human-in-the-loop for sensitive actions. Test continuously with reproducible attacks.
How should organizations apply the NIST AI Risk Management Framework?
The NIST AI Risk Management Framework (AI RMF) provides a voluntary approach for governing AI risk across the system lifecycle. It anchors risk programs in the Govern function (policies, risk tolerance, accountability), then guides teams through Map, Measure, and Manage activities to contextualize use cases, evaluate risks with metrics, and respond to exposure. The framework promotes sociotechnical collaboration by integrating legal, compliance, engineering, security, and domain experts around trustworthy characteristics like safety, security, privacy, fairness, and transparency.
NIST AI RMF vs ISO 42001: How They Fit
NIST AI RMF is guidance for framing and managing AI risks; ISO 42001 is a certifiable management system for governing AI. Use RMF to define risk functions, measures, and profiles; use ISO 42001 to institutionalize policy, roles, controls, and assurance.
Prompt Injection Defense: Architecture and Controls
Prompt injection attacks manipulate AI systems by embedding malicious instructions in user inputs or retrieved content. Defense requires layered controls: instruction isolation, input sanitization, output validation, privilege limitation, and monitoring. Use structured prompts, content filtering, semantic analysis, and human oversight for sensitive operations. Test continuously with attack patterns and maintain response procedures for incidents.
Who taught the lie in RAG—and how do you trace it?
RAG systems are easily steered by poisoned texts in their knowledge bases. RAGOrigin introduces a black-box responsibility attribution method that, after a misgeneration, narrows the suspect set of documents and assigns each a responsibility score using three signals: retrieval similarity, semantic correlation, and generation influence. It then separates poisoned from benign texts via unsupervised clustering with a dynamic threshold. Evaluated on five QA datasets plus a 16.7M-document database, RAGOrigin consistently achieves top detection accuracy with low false positives across nine attacks, remaining fast enough for operational use. Download the [attribution runbook CSV](/checklists/rag_poisoning_attribution_runbook.csv) for implementable steps.
Secure RAG: Architecture and Hardening
RAG blends model reasoning with retrieved context. Risks arise when retrieved content carries hidden instructions, sensitive data, or untrusted links. Harden by sanitizing inputs, signing/whitelisting sources, chunking and metadata controls, query filtering, and output validation. Log retrievals for forensics.