✦ ISO/IEC 42001:2023

ISO 42001 Checklist: All 38 Annex A Controls + AI Governance Implementation Guide

The most complete ISO 42001 checklist available. Every control, the quickest way to implement it, a printable PDF, certification timeline, and expert FAQ — free.

38 Annex A Controls · 9 Control Domains · 6–12 Months to Certify · 2023 Standard Version

Free Download: Print or save this checklist as a PDF to use in your AIMS implementation.

What You Need to Know

How to Use This ISO 42001 Checklist

ISO 42001:2023 is the world's first certifiable standard for an AI Management System (AIMS). Like ISO 27001, it follows the Plan-Do-Check-Act cycle across Clauses 4–10, with Annex A providing 38 controls across 9 domains. You select applicable controls based on your AI risk assessment — you don't implement all 38. Use this checklist to track progress, prioritize quick wins, and build your audit evidence package.

🤖
Who needs ISO 42001?
Any organization that develops, deploys, or uses AI systems. This includes AI product companies, SaaS companies with AI features, enterprises using AI for decisions, and AI service providers. It's especially valuable for organizations selling to regulated industries or EU customers under the AI Act.
🔗
ISO 42001 vs ISO 27001
Both use the same high-level structure (HLS), so if you have ISO 27001 you're already fluent in AIMS language. ISO 27001 covers information security risks — ISO 42001 covers AI-specific risks: bias, transparency, model drift, unintended consequences, and ethical impact. Many organizations pursue both together.
📋
Statement of Applicability
Like ISO 27001, you must produce a Statement of Applicability (SoA) listing all 38 controls and documenting which apply, why, and their implementation status. Exclusions must be justified. Auditors use this as their roadmap.
⚡
Quick Win Strategy
Start with the policy and governance controls (Domains A.2 and A.3) — these are documentation-heavy but low technical effort. They're also what auditors check first and what enterprise customers ask for in security questionnaires. You can have these done in weeks.
Want a tailored assessment?
Get a personalized implementation timeline, tooling recommendations, and estimated cost based on your company — free.
Complete the Free Gap Assessment →

The Full List

All 38 Annex A Controls

Check off controls as you implement them. Each control includes the fastest first step for a small to mid-size organization.

A.2 AI Policy (3 controls)
A.2.1
Establish an AI policy
Document a policy covering the development or use of AI systems, aligned with business objectives and ethical principles.
Quickest first step: Draft a 1–2 page AI Policy covering your organization's purpose for using AI, ethical commitments, risk appetite, and governance responsibilities. Get executive sign-off. Use a template — takes 2–3 days.
Low effort
A.2.2
Ensure alignment with other organizational policies
Align your AI policy with existing policies such as information security, data protection, and HR.
Quickest first step: Review existing policies (information security, privacy, acceptable use) and add cross-references to your AI policy. Identify any conflicts and resolve them. Document the alignment review.
Low effort
A.2.3
Regularly review the AI policy
Establish a process to review and update the AI policy at planned intervals or when significant changes occur.
Quickest first step: Schedule an annual AI policy review in your governance calendar. Document the review process and assign an owner. Keep a version-controlled record of each review.
Low effort
A.3 Internal Organization (2 controls)
A.3.1
Define AI roles and responsibilities
Assign and document roles and responsibilities for AI system development, deployment, and governance across the organization.
Quickest first step: Create a RACI chart for AI governance: assign an AI Owner, Data Scientist/ML Lead, AI Ethics reviewer, and Legal/Compliance contact. Document in your AIMS. Takes one day.
Low effort
A.3.2
Establish reporting procedures for AI concerns
Define and implement a process for reporting concerns related to AI systems β€” including ethical issues, bias, and safety incidents.
Quickest first step: Create an ai-concerns@yourcompany.com mailbox and a Slack channel. Document the escalation path. Train all staff on what to report. This mirrors your security incident reporting process.
Low effort
A.4 Resources for AI Systems (3 controls)
A.4.1
Document AI system components and assets
Maintain an inventory of all AI system components including models, datasets, infrastructure, and third-party tools.
Quickest first step: Build an AI asset register listing every model, training dataset, inference endpoint, and third-party AI service. Assign an owner to each. A spreadsheet works for a first pass — GRC tools can automate this.
Med effort
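If you want the register to be machine-checkable from day one, it can start as a small script instead of a spreadsheet. A minimal sketch in Python (all class and field names are illustrative, not prescribed by the standard):

```python
from dataclasses import dataclass, asdict

@dataclass
class AIAsset:
    """One row in the AI asset register (illustrative fields)."""
    asset_id: str
    kind: str          # "model" | "dataset" | "endpoint" | "third_party_service"
    name: str
    owner: str         # accountable person, per A.3.1
    third_party: bool = False
    notes: str = ""

class AssetRegister:
    def __init__(self):
        self._assets: dict[str, AIAsset] = {}

    def add(self, asset: AIAsset) -> None:
        # a duplicate ID usually means two teams registered the same system
        if asset.asset_id in self._assets:
            raise ValueError(f"duplicate asset id: {asset.asset_id}")
        self._assets[asset.asset_id] = asset

    def owner_of(self, asset_id: str) -> str:
        return self._assets[asset_id].owner

    def export(self) -> list[dict]:
        """Rows ready to dump to CSV or import into a GRC tool."""
        return [asdict(a) for a in self._assets.values()]

register = AssetRegister()
register.add(AIAsset("m-001", "model", "churn-predictor", "ml-lead@example.com"))
register.add(AIAsset("d-001", "dataset", "churn-training-2024", "data-eng@example.com"))
```

Keeping the spreadsheet or GRC import as an export from this record, rather than the other way round, leaves a single source of truth for auditors.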
A.4.2
Manage data resources including provenance and quality
Document data sources, provenance, quality requirements, and management processes for data used in AI systems.
Quickest first step: Create a data lineage document for each model: where training data came from, how it was cleaned, quality checks applied, and known limitations. Tools like Great Expectations or dbt tests can automate quality checks.
High effort
A.4.3
Ensure adequate tooling and computing resources
Verify that necessary tools and computing resources are documented, available, and managed for AI system operation.
Quickest first step: Document your AI infrastructure (cloud compute, GPU resources, MLOps tooling). Include capacity planning for model training and inference. Ensure resource access controls are in place.
Med effort
A.5 AI System Impact Assessment (2 controls)
A.5.1
Conduct AI system impact assessments
Regularly evaluate the potential societal, ethical, and operational impacts of AI systems on individuals and organizations.
Quickest first step: Create an AI Impact Assessment template and complete it for each AI system before deployment. Cover: intended use, affected populations, potential harms, bias risks, and mitigation measures. Required before any significant AI release.
Med effort
A.5.2
Document and review impact assessment results
Maintain records of impact assessments and review them when the AI system or its context changes significantly.
Quickest first step: Store completed impact assessments in a central repository (Confluence, SharePoint). Set a trigger-based review process: reassess when the model is retrained, its scope changes, or an incident occurs.
Med effort
A.6 AI System Lifecycle (5 controls)
A.6.1
Define the AI system lifecycle
Document the stages of your AI system lifecycle from concept through decommissioning, with defined processes at each stage.
Quickest first step: Document your ML lifecycle stages: problem definition → data collection → model development → testing → deployment → monitoring → retirement. Assign owners and define gates between stages.
Med effort
A.6.2
Manage AI system design and development
Apply governance controls throughout the design and development of AI systems, including requirements, architecture, and testing.
Quickest first step: Integrate AI governance checkpoints into your existing SDLC. Require: documented model requirements, architecture review, fairness testing, and sign-off before promotion to production.
High effort
A.6.3
Document AI system verification and validation
Define and document processes for verifying that AI systems meet requirements and validating that they perform as intended in real-world conditions.
Quickest first step: Document your model evaluation framework: metrics used, test datasets, acceptance thresholds, and sign-off criteria. Include both technical performance and fairness metrics. Retain evaluation records for audit.
High effort
A.6.4
Manage AI system deployment
Establish controls for the deployment of AI systems, including approval processes, rollout procedures, and rollback capabilities.
Quickest first step: Add AI-specific gates to your CI/CD pipeline: require impact assessment sign-off, test results, and a rollback plan before any AI model goes to production. Document the deployment decision and approver.
High effort
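A gate like this can be a single function called from CI before the deploy step runs. A hedged sketch, assuming releases are described as plain dictionaries; the artifact names are invented for illustration:

```python
# artifacts a release must carry before production (illustrative names)
REQUIRED_ARTIFACTS = {
    "impact_assessment_signoff",  # A.5.1
    "evaluation_report",          # A.6.3
    "rollback_plan",              # A.6.4
    "approver",                   # who made the deployment decision
}

def deployment_gate(release: dict) -> tuple[bool, list[str]]:
    """Return (ok, missing): block the release if any artifact is absent or empty."""
    missing = sorted(a for a in REQUIRED_ARTIFACTS if not release.get(a))
    return (not missing, missing)

ok, missing = deployment_gate({
    "impact_assessment_signoff": "IA-2024-07",
    "evaluation_report": "eval-v3.pdf",
    "rollback_plan": "",          # empty, so the gate should block this release
    "approver": "cto@example.com",
})
```

Failing the CI job when `ok` is false gives you an enforced control plus an audit trail of what was missing, with no new tooling.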
A.6.5
Manage AI system retirement and decommissioning
Define processes for retiring AI systems, including data retention, model archival, and communication to affected parties.
Quickest first step: Create a model retirement checklist: notify affected users, archive model artifacts, document retirement rationale, handle data according to retention policy, and update your AI asset register.
Med effort
A.7 Data for AI Systems (6 controls)
A.7.1
Document data quality requirements
Define and document requirements for the quality of data used in AI systems, including accuracy, completeness, and consistency standards.
Quickest first step: For each AI system, document: what constitutes acceptable data quality, minimum dataset size, class balance requirements, and known quality gaps. Implement automated data quality checks in your pipeline.
Med effort
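Automated checks can start very small: a function that takes a dataset and returns a list of failures against your documented thresholds. A plain-Python sketch (thresholds, field names, and the sample data are illustrative):

```python
def check_dataset(rows: list[dict], *, min_rows: int, max_null_rate: float,
                  label_key: str, min_class_share: float) -> list[str]:
    """Return human-readable quality failures (empty list = pass)."""
    failures = []
    if len(rows) < min_rows:
        failures.append(f"too few rows: {len(rows)} < {min_rows}")
    # null rate across all cells
    cells = [v for r in rows for v in r.values()]
    null_rate = sum(v is None for v in cells) / max(len(cells), 1)
    if null_rate > max_null_rate:
        failures.append(f"null rate {null_rate:.2%} exceeds {max_null_rate:.2%}")
    # class balance on the label column
    counts: dict = {}
    for r in rows:
        counts[r[label_key]] = counts.get(r[label_key], 0) + 1
    for label, n in counts.items():
        share = n / len(rows)
        if share < min_class_share:
            failures.append(f"class {label!r} underrepresented: {share:.2%}")
    return failures

sample = [{"age": 34, "label": "approve"},
          {"age": None, "label": "approve"},
          {"age": 51, "label": "deny"},
          {"age": 29, "label": "approve"}]
failures = check_dataset(sample, min_rows=3, max_null_rate=0.20,
                         label_key="label", min_class_share=0.30)
```

Wiring a function like this into the training pipeline, and keeping its output with the run record, produces exactly the kind of evidence auditors ask for under A.7.1.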
A.7.2
Establish data provenance processes
Record and maintain the provenance of data used in AI systems, including sources, transformations, and usage history.
Quickest first step: Implement data lineage tracking using tools like Apache Atlas, dbt, or a custom metadata store. Document the origin, licensing, and transformation history for every training dataset. This is a top auditor check.
High effort
A.7.3
Document data acquisition and selection
Document the process for acquiring and selecting data for AI systems, including categories, sources, and characteristics.
Quickest first step: Create a data acquisition policy covering: permissible data sources, legal basis for use (consent, license), selection criteria, and rejection criteria. Document for each dataset used in production models.
High effort
A.7.4
Define data preparation methods
Document how data is cleaned, labelled, and transformed for use in AI systems.
Quickest first step: Document your data preprocessing pipeline: cleaning steps, labelling methodology, augmentation techniques, and train/test split strategy. Version-control preprocessing scripts alongside model code.
High effort
A.7.5
Assess data representativeness and bias
Evaluate whether training data is representative of the real-world population the AI system will affect, and identify potential bias sources.
Quickest first step: Run demographic parity and equalized odds checks on training data and model outputs. Document known representation gaps. Use Fairlearn, IBM AI Fairness 360, or similar tools. Document findings even if imperfect.
Med effort
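Fairlearn and AI Fairness 360 compute these metrics for you, but the core demographic parity check is simple enough to sketch directly: the gap between the highest and lowest positive-outcome rate across groups. The group labels and predictions below are made up for illustration:

```python
def demographic_parity_difference(outcomes: list[tuple[str, int]]) -> float:
    """Largest gap in positive-outcome rate across groups (0.0 = perfect parity).

    outcomes: (group, prediction) pairs, where prediction 1 = favorable, 0 = not.
    """
    totals: dict[str, int] = {}
    positives: dict[str, int] = {}
    for group, pred in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# illustrative data: group A approved 3/4 times, group B approved 1/4 times
preds = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
         ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
gap = demographic_parity_difference(preds)  # 0.75 - 0.25 = 0.5
```

What counts as an acceptable gap is a policy decision for your impact assessment, not something the metric decides for you.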
A.7.6
Manage ongoing data for deployed AI systems
Establish processes for monitoring and managing data quality and drift in AI systems after deployment.
Quickest first step: Implement data drift monitoring in production using tools like Evidently AI, Fiddler, or AWS SageMaker Model Monitor. Set alerts for when input distribution shifts beyond acceptable thresholds.
Med effort
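Tools like Evidently compute drift metrics out of the box; the idea behind one common metric, the Population Stability Index (PSI), fits in a few lines. The bin count and the thresholds in the comment are conventional choices, not requirements of the standard:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0   # avoid zero width for constant data

    def shares(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # floor each share so log() stays defined when a bin is empty
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]    # feature values at training time
live = [0.1 * i + 5.0 for i in range(100)]  # live traffic, shifted right
drift = psi(baseline, live)
# common rule of thumb: PSI < 0.1 stable, 0.1-0.2 watch, > 0.2 significant drift
```

Running this per feature on a schedule and alerting above your documented threshold is a minimal but auditable version of the control.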
A.8 Information for AI Systems (4 controls)
A.8.1
Communicate AI system information to users
Provide relevant information about AI systems to users including purpose, limitations, and how to use them safely.
Quickest first step: Publish a model card or system card for each AI system: what it does, what it doesn't do, known limitations, confidence thresholds, and instructions for safe use. Make it accessible to all affected users.
Med effort
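Hand-written model cards drift out of date; generating them from structured metadata keeps them in sync with whatever registry holds that metadata. A minimal sketch (the section names loosely follow the control's wording, and the example system is invented):

```python
def render_model_card(card: dict) -> str:
    """Render a minimal model card as Markdown from structured metadata."""
    lines = [f"# Model Card: {card['name']} (v{card['version']})", ""]
    sections = [("Purpose", "purpose"),
                ("Out-of-scope uses", "out_of_scope"),
                ("Known limitations", "limitations"),
                ("Safe-use instructions", "safe_use")]
    for heading, key in sections:
        lines.append(f"## {heading}")
        for item in card.get(key, ["(not documented)"]):
            lines.append(f"- {item}")
        lines.append("")
    return "\n".join(lines)

card_md = render_model_card({
    "name": "support-ticket-router",
    "version": "2.1",
    "purpose": ["Route inbound support tickets to the right team."],
    "out_of_scope": ["Customer-facing replies without human review."],
    "limitations": ["English-only; accuracy drops on tickets under 10 words."],
    "safe_use": ["Confidence below 0.6 must fall back to manual triage."],
})
```

Regenerating the card as part of the release process makes "the published card matches the deployed model" a checkable property rather than a hope.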
A.8.2
Document technical limitations and intended use
Clearly document technical limitations, performance boundaries, and intended use cases for each AI system.
Quickest first step: Create an "intended use" and "out-of-scope use" section in your model documentation. Include performance degradation conditions, edge cases, and prohibited applications. Review when model changes.
Med effort
A.8.3
Maintain AI system documentation
Maintain up-to-date documentation for all AI systems throughout their lifecycle.
Quickest first step: Use a model registry (MLflow, W&B, SageMaker) to version-control model artifacts, parameters, metrics, and documentation. Require documentation updates as part of any model change process.
Med effort
A.8.4
Document monitoring capabilities
Document the monitoring capabilities of AI systems, including what can and cannot be observed about system behaviour.
Quickest first step: Document what metrics are monitored, what alerts are configured, what is observable vs opaque, and the escalation path when anomalies are detected. Include this in your model card.
Med effort
A.9 Human Oversight of AI Systems (4 controls)
A.9.1
Define human oversight requirements
Determine the level of human oversight required for each AI system based on its risk profile and decision-making impact.
Quickest first step: Classify each AI system by decision risk: high-stakes decisions (healthcare, hiring, credit) require human-in-the-loop review. Document the oversight model for each system: human-in-the-loop, human-on-the-loop, or fully automated with monitoring.
Med effort
A.9.2
Implement human intervention mechanisms
Provide mechanisms for humans to intervene, override, or shut down AI systems when necessary.
Quickest first step: Implement override and kill-switch capabilities in all AI systems. Document who has authority to override, how to do it, and the process for reviewing overridden decisions. Test the override mechanism regularly.
Med effort
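The override mechanism can be a thin wrapper that refuses inference once disabled and records who pulled the switch, feeding the oversight log required by A.9.4. A sketch under the assumption that inference is a single callable; all names are illustrative:

```python
class KillSwitchError(RuntimeError):
    """Raised when inference is attempted on a disabled model."""

class GuardedModel:
    """Wraps model inference behind an operator-controlled kill switch."""
    def __init__(self, predict_fn):
        self._predict = predict_fn
        self._enabled = True
        self._audit_log = []   # (action, who, reason) tuples, reviewed under A.9.4

    def disable(self, who: str, reason: str) -> None:
        self._enabled = False
        self._audit_log.append(("disable", who, reason))

    def predict(self, features):
        if not self._enabled:
            raise KillSwitchError("model disabled by operator; use manual process")
        return self._predict(features)

# toy model: always approves (stand-in for real inference)
model = GuardedModel(lambda features: "approve")
assert model.predict({"score": 710}) == "approve"
model.disable(who="oncall@example.com", reason="suspected bias in outputs")
```

Testing the switch regularly, as the control asks, then reduces to an automated test that disables a staging instance and confirms inference is refused.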
A.9.3
Train personnel responsible for AI oversight
Ensure that personnel responsible for overseeing AI systems are appropriately trained to understand system behaviour and identify anomalies.
Quickest first step: Develop role-specific AI literacy training for: reviewers of AI decisions, managers of AI systems, and executives responsible for AI governance. Document completion. Annual refresher required.
Low effort
A.9.4
Document oversight decisions and escalations
Maintain records of oversight decisions, human interventions, and escalations related to AI system behaviour.
Quickest first step: Create an AI oversight log: record all human overrides, escalations, and significant decisions about AI system behaviour. Review logs monthly. Auditors will ask to see this evidence.
Med effort
A.10 Third Party and Supplier Relationships (9 controls)
A.10.1
Define responsibilities with AI suppliers
Clearly allocate responsibilities between your organization and AI suppliers, partners, and third-party AI service providers.
Quickest first step: Create a responsibility matrix for each AI supplier: who owns model governance, data handling, bias monitoring, incident response, and compliance. Include in supplier contracts and SLAs.
Med effort
A.10.2
Assess AI suppliers against ethical standards
Establish processes to evaluate AI suppliers and third-party providers against your organization's ethical and compliance standards.
Quickest first step: Add AI-specific questions to your vendor security questionnaire: bias testing practices, model transparency, data provenance, incident response, and regulatory compliance (EU AI Act). Tier vendors by AI risk level.
Med effort
A.10.3
Include AI requirements in supplier agreements
Incorporate AI governance requirements in agreements with suppliers who provide AI systems or components.
Quickest first step: Create a standard AI addendum for supplier contracts covering: acceptable use, bias and fairness obligations, transparency requirements, incident notification timelines, and audit rights. Apply to all AI-related suppliers.
Low effort
A.10.4
Monitor AI supplier performance
Regularly monitor and review AI supplier performance against agreed requirements and ethical standards.
Quickest first step: Schedule annual AI supplier reviews covering: model performance, bias metrics, incident history, and compliance status. For high-risk AI suppliers, conduct quarterly check-ins. Document review findings.
Med effort
A.10.5
Manage AI system handover and transfer
Define processes for handing over or transferring AI systems between parties, including documentation and responsibility transfer.
Quickest first step: Create a handover checklist for AI system transfers: model artifacts, documentation, training data, impact assessments, known issues, and responsibility transfer confirmation. Required for any AI system sale or transition.
Med effort
A.10.6
Address customer and end-user concerns
Establish processes to receive, address, and track concerns and feedback from customers and end users about AI systems.
Quickest first step: Create a dedicated channel for AI-related customer concerns (separate from general support). Define escalation paths, response SLAs, and a process for investigating and resolving AI-specific complaints.
Low effort
A.10.7
Manage AI incident reporting with third parties
Define processes for managing AI incident reporting obligations to customers, regulators, and third-party stakeholders.
Quickest first step: Document AI incident notification obligations per jurisdiction (EU AI Act requires serious incident reporting). Create notification templates and assign a responsible person. Align with your existing security incident response process.
Med effort
A.10.8
Ensure responsible use by customers
Define acceptable and prohibited uses of your AI systems by customers and implement measures to discourage harmful use.
Quickest first step: Publish an AI Acceptable Use Policy for customers. Define prohibited uses (discriminatory targeting, manipulation, illegal activity). Include in your terms of service and enforce through monitoring.
Med effort
A.10.9
Communicate AI governance to interested parties
Communicate relevant aspects of your AI governance approach to customers, partners, regulators, and other stakeholders.
Quickest first step: Publish an AI transparency statement on your website covering: how you use AI, your governance approach, ethical commitments, and how to raise concerns. ISO 42001 certification itself is a strong signal — add it to your trust page.
Low effort

How Long Does It Take?

ISO 42001 Certification Timeline

Most organizations take 6–12 months to achieve ISO 42001 certification. If you already have ISO 27001, expect 3–6 months due to shared structure and overlapping controls. Here's a realistic phase-by-phase breakdown.

Phase 1 β€” Months 1–2
Foundation & Scoping
  • Appoint AI governance lead (CTO, Head of AI, or fractional)
  • Define AIMS scope — which AI systems are in scope?
  • Inventory all AI systems, models, and datasets
  • Run gap assessment against Clauses 4–10 + Annex A
  • Write AI Policy and assign governance roles
  • Establish AI concern reporting channel
⏱ 4–8 weeks · Documentation-heavy phase
Phase 2 β€” Months 2–6
Risk Assessment & Controls
  • Complete AI risk assessment for each in-scope system
  • Complete impact assessments for high-risk AI systems
  • Implement data governance controls (provenance, quality)
  • Build model cards and system documentation
  • Implement human oversight mechanisms
  • Write Statement of Applicability
⏱ 8–16 weeks · Most intensive phase
Phase 3 β€” Months 5–8
Internal Audit & Evidence
  • Deploy GRC platform or evidence management system
  • Conduct internal audit against all clauses and controls
  • Remediate audit findings
  • Management review of AIMS effectiveness
  • Assemble evidence package
  • Select and engage certification body
⏱ 4–8 weeks · Book auditor early
Phase 4 β€” Months 8–12
External Audit & Certification
  • Stage 1 audit (documentation review, 1–2 days)
  • Remediate Stage 1 findings
  • Stage 2 audit (implementation testing, 2–4 days)
  • Address non-conformities
  • Receive certificate (valid 3 years)
  • Publish AI governance transparency statement
⏱ 8–14 weeks · Certificate valid 3 years
Already have ISO 27001? Your timeline is shorter.
ISO 42001 uses the same High-Level Structure as ISO 27001 — the same clause framework, the same PDCA cycle, and overlapping documentation requirements. Organizations with ISO 27001 can typically achieve ISO 42001 in 3–6 months by reusing existing governance structures, risk management processes, and supplier management frameworks. The main additional work is AI-specific: model governance, impact assessments, data provenance, and bias monitoring.

Common Questions

ISO 42001 FAQ

The most common questions from organizations pursuing ISO 42001 certification for the first time.

What is ISO 42001 and who needs it?
ISO/IEC 42001:2023 is the world's first internationally recognized standard for an Artificial Intelligence Management System (AIMS). It provides a framework for organizations to responsibly develop, deploy, and use AI systems — covering governance, risk management, data quality, human oversight, and transparency. Any organization that builds AI products, deploys AI in decision-making, or uses AI services from third parties can benefit. It's increasingly requested by enterprise customers and regulators, particularly for organizations selling into EU markets under the EU AI Act.
How is ISO 42001 different from ISO 27001?
ISO 27001 manages information security risks — data breaches, unauthorized access, system vulnerabilities. ISO 42001 manages AI-specific risks — bias, model drift, lack of transparency, unintended consequences, and ethical impact. Both use the same High-Level Structure (Clauses 4–10, PDCA cycle), so they're designed to work together. Having ISO 27001 doesn't mean you have ISO 42001 — but it gives you a strong head start on structure and documentation. Many organizations pursue both simultaneously to reduce overall effort.
Do I have to implement all 38 Annex A controls?
No — you select controls based on your AI risk assessment and the nature of your AI systems. However, you must document your selection and exclusions in a Statement of Applicability (SoA), with justified reasons for any exclusions. Auditors use your SoA as their primary reference. Some controls will be clearly inapplicable — for example, a company that only uses AI for internal tools may exclude certain customer-facing transparency controls. But exclusions must be defensible.
How long does ISO 42001 certification take?
For most organizations, 6–12 months from start to certification. If you already have ISO 27001, expect 3–6 months due to the shared governance structure. The timeline depends on: number and complexity of AI systems in scope, current maturity of AI governance, internal resource availability, and how quickly you can assemble an evidence package. The external audit itself (Stage 1 + Stage 2) typically takes 3–6 days of auditor time depending on scope.
What is an AI Impact Assessment under ISO 42001?
An AI Impact Assessment evaluates the potential effects of an AI system on individuals, groups, and society — covering ethical risks, bias, privacy implications, and unintended consequences. ISO 42001 requires impact assessments to be conducted before deploying AI systems (especially high-risk ones) and reviewed when systems change significantly. It's similar in concept to a Data Protection Impact Assessment (DPIA) under GDPR, but specifically focused on AI risks rather than personal data risks.
How does ISO 42001 relate to the EU AI Act?
The EU AI Act is regulation — it creates legal obligations for AI systems used in the EU, with strict requirements for "high-risk" AI systems. ISO 42001 is a voluntary standard — it provides a governance framework but doesn't create legal obligations on its own. However, ISO 42001 certification is increasingly recognized as evidence of compliance with AI governance requirements, and the EU AI Act's requirements for high-risk AI systems overlap significantly with ISO 42001 controls. Organizations selling to EU customers or regulated by the EU AI Act should treat ISO 42001 as a strong foundation for compliance.
What documentation does ISO 42001 require?
Mandatory documentation includes: AI Policy, AIMS scope document, AI risk assessment methodology and results, AI risk treatment plan, Statement of Applicability, AI impact assessments for each in-scope system, model cards or system documentation, data provenance records, human oversight procedures, internal audit program and results, management review records, and nonconformity and corrective action records. If you have ISO 27001, much of the management system documentation can be shared or extended rather than created from scratch.
How often must ISO 42001 be renewed?
ISO 42001 follows the same 3-year certification cycle as ISO 27001. Annual surveillance audits are required in years 1 and 2 to confirm the AIMS is being maintained and improved. A full recertification audit occurs in year 3. Given the rapid pace of AI development, many organizations conduct more frequent internal reviews β€” particularly after significant model changes, new AI system deployments, or AI-related incidents β€” to stay ahead of surveillance requirements.

Not sure where you stand on ISO 42001?

Run a free gap assessment and get your readiness score across ISO 42001, SOC 2, ISO 27001, and 30+ other frameworks in 10 minutes.

Start Your Free Gap Assessment →

Free forever · No sales calls · Instant results

© 2025 Gap Assessment · freegapassessment.com
For informational purposes only. Not legal or audit advice.