ISO 42001 is a voluntary standard for AI governance; the EU AI Act is binding law. Both require risk management and documentation, but the Act adds prohibitions and conformity assessment. Here's how to build a parallel compliance programme.
ISO 42001 and EU AI Act: Overlapping Governance and Compliance Roadmap
ISO 42001 and the EU AI Act overlap by roughly 40–50% on risk management, data governance, documentation, and human oversight, but the Act adds binding legal prohibitions on certain AI systems, mandatory conformity assessment for high-risk systems, and regulatory incident reporting that ISO 42001 does not address. Both frameworks are essential for organisations deploying AI in the EU: ISO 42001 provides the governance foundation, and the EU AI Act provides the legal compliance map. Implementing them in parallel — using ISO controls to structure Act compliance — is the most efficient path.
The biggest compliance gap is conformity assessment: the Act requires high-risk AI systems to undergo either internal assessment (Annex VI) or third-party evaluation by a notified body. This is unique to the Act and has no equivalent in ISO 42001. A second major gap is AI literacy: the Act (Article 4) requires providers and deployers to ensure that staff dealing with AI systems have a sufficient level of AI literacy, including competency specific to Act obligations — not just ISO training.
This guide maps both frameworks, identifies where they diverge, and provides a 5-phase implementation roadmap so you can build compliance once and satisfy both frameworks without duplication.
Key Takeaways
40–50%
ISO 42001 controls satisfy roughly 40–50% of EU AI Act requirements
8 prohibited
EU AI Act explicitly bans eight AI practices (including social scoring, real-time remote biometric identification in public spaces, and emotion recognition in workplaces and schools)
Annex VI
High-risk systems require conformity assessment (internal or notified body) — not required by ISO 42001
€35M fine
Non-compliance can result in fines up to €35 million or 7% of worldwide annual turnover, whichever is higher
Definitions: What Is ISO 42001 and What Is the EU AI Act?
ISO/IEC 42001:2023 is a voluntary international standard for AI management systems; the EU AI Act (Regulation (EU) 2024/1689) is binding EU law that applies to AI systems placed on the EU market or used in the EU, wherever the provider is established. ISO 42001 provides a framework for governance, risk management, and controls. The EU AI Act defines legal obligations for providers, deployers, and importers, with specific requirements for high-risk systems and explicit prohibitions on certain AI practices.
ISO 42001 in Context
ISO 42001 applies the Plan-Do-Check-Act (PDCA) cycle to AI governance and risk management across your entire AI portfolio. It covers strategy, risk assessment, controls, training, incident management, and documented decision-making. Certification demonstrates that you have implemented a systematic approach to AI governance.
EU AI Act in Context
The EU AI Act establishes a risk-based legal framework with four categories: prohibited systems (banned entirely), high-risk systems (subject to strict requirements), limited-risk systems (transparency required), and minimal-risk systems (no specific requirements). It applies to all AI systems used in the EU, regardless of where they are developed, and carries fines of up to €35 million or 7% of worldwide annual turnover for the most serious violations.
Areas of Overlap: Risk, Data, Oversight, Documentation
Both frameworks require systematic risk assessment, data governance, human oversight, and extensive documentation — the core foundation of responsible AI. If you implement ISO 42001 rigorously in these four areas, you satisfy roughly 40–50% of the EU AI Act's compliance requirements without additional work.
Risk Assessment and Treatment
Both require identification of AI risks and documented mitigation. ISO 42001 (Clauses 6.1 and 8.2–8.3) uses a standard risk assessment and treatment cycle. The EU AI Act goes further by defining eight areas of high-risk use cases in Annex III (hiring, credit decisions, law enforcement, etc.) and requiring a risk management system (Article 9) for each. If your ISO 42001 risk register explicitly covers Annex III use cases, you're already partway compliant with the Act.
Data Governance and Quality
ISO 42001 control area A.7 (data for AI systems) requires data governance, quality, and provenance controls. The EU AI Act (Article 10) requires training, validation, and test data sets to meet quality criteria and be subject to documented governance. Both expect documented data provenance, bias measurement, and traceability.
Human Oversight
ISO 42001 control area A.9 (use of AI systems) requires responsible use with meaningful human involvement: humans with authority to override AI decisions, monitor accuracy, and act on concerns. The EU AI Act (Article 14) requires the same for high-risk systems. Both frameworks expect humans to understand what AI systems are doing and be able to intervene.
Documentation and Record-Keeping
Both frameworks are heavily documentary. ISO 42001 (Clause 7.5) requires documented information for all significant decisions and controls. The EU AI Act (Articles 11–12 and 18–19) requires technical documentation, automatically generated logs, test results, and record-keeping. A single documentation repository covering both sets of requirements is the most efficient approach.
ISO 42001 is a voluntary standard with no legal penalties; the EU AI Act is binding law with fines up to €35 million or 7% of worldwide annual turnover. The Act explicitly prohibits eight categories of AI practice, requires conformity assessment for high-risk systems (neither of which ISO 42001 addresses), and applies to all organisations using AI in the EU regardless of size or sector.
The Act's Eight Prohibited AI Practices
The EU AI Act (Article 5) explicitly bans eight practices: social scoring; manipulative or deceptive subliminal techniques; exploitation of vulnerabilities due to age, disability, or social situation; predictive policing based solely on profiling; untargeted scraping of facial images to build recognition databases; emotion recognition in workplaces and schools; biometric categorisation to infer sensitive attributes; and real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions). These prohibitions are absolute — no risk controls or ISO certification can override them. If your organisation uses any of these systems, you must cease use immediately to comply with the Act.
Conformity Assessment (Unique to the Act)
High-risk systems must undergo conformity assessment (Article 43) — either internal control (the Annex VI procedure) or third-party evaluation by a notified body. This is a unique Act obligation with no equivalent in ISO 42001. Internal assessment is cost-effective but may carry less weight with regulators; notified body assessment provides third-party validation but can cost €20,000–€100,000+ per system.
Post-Market Surveillance and Incident Reporting
The Act (Articles 72–73) requires providers of high-risk systems to implement post-market surveillance, detect and analyse incidents, and report serious incidents (causing death, serious harm to health or property, or serious and irreversible disruption of critical infrastructure) to the relevant market surveillance authorities. ISO 42001 includes incident management but not regulatory reporting to a supervisory authority.
Build Both Frameworks in Parallel
ISO 42001 Implementation Services
Don't build two separate programmes. reconn delivers complete implementation: AI system audit, ISO 42001 deployment with Act-aware controls, conformity assessment setup, and post-market surveillance planning. Tailored to your organisation size and AI portfolio.
The largest gap is conformity assessment and prohibited systems: the Act requires conformity assessment of high-risk systems (internal or via a notified body) and mandates cessation of prohibited AI practices, neither of which ISO 42001 addresses. A second major gap is AI literacy: the Act (Article 4) requires providers and deployers to ensure that staff dealing with AI systems have sufficient AI literacy, including understanding of Act obligations — a training requirement that goes beyond ISO 42001.
A third gap is enforcement: the EU AI Office, notified bodies, and national market surveillance authorities assess compliance using Act-specific criteria. The evidence you compile for an ISO 42001 audit will be useful, but Act regulators will ask for Act-specific documentation (technical files, conformity reports, post-market surveillance plans).
A fourth gap is transparency for limited-risk systems: the Act (Article 50) requires transparency disclosures when AI systems interact with people or generate synthetic content. ISO 42001 does not address transparency to end-users; this obligation must be built into product design and communications separately.
Building Parallel Governance: 5-Phase Roadmap
The most efficient approach is to run ISO 42001 and EU AI Act compliance programmes in parallel, using ISO controls to structure Act compliance and reducing duplication. Here's a practical 5-phase roadmap.
Phase 1: Audit Your AI Systems (0–4 weeks)
Inventory all AI systems. For each: (1) Does it fall under any EU AI Act prohibition? (2) Is it high-risk per Annex III? (3) Does it generate, process, or interact with synthetic media (limited-risk transparency)? Document findings in a single registry. This audit is independent of ISO 42001 but informs implementation scope.
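The Phase 1 screening can be sketched as a simple classification pass over the inventory. The category labels, field names, and use-case lists below are illustrative assumptions for the sketch, not Act terminology or a complete Annex III mapping:

```python
# Illustrative Phase 1 registry sketch: screen each inventoried system in
# order — prohibited first, then Annex III high-risk, then limited-risk
# transparency, else minimal-risk. All names here are hypothetical.

PROHIBITED_USES = {"social_scoring", "emotion_recognition_workplace"}
ANNEX_III_AREAS = {"hiring", "credit_scoring", "law_enforcement", "education"}

def classify(system: dict) -> str:
    """Return a coarse EU AI Act risk tier for one inventoried system."""
    if system.get("use_case") in PROHIBITED_USES:
        return "prohibited"    # Phase 3: cease use, document the decision
    if system.get("use_case") in ANNEX_III_AREAS:
        return "high-risk"     # Phase 3: conformity assessment workflow
    if system.get("interacts_with_people") or system.get("generates_synthetic_media"):
        return "limited-risk"  # Article 50 transparency disclosures
    return "minimal-risk"

inventory = [
    {"name": "CV screener", "use_case": "hiring"},
    {"name": "Support chatbot", "interacts_with_people": True},
    {"name": "Internal log parser"},
]
registry = {s["name"]: classify(s) for s in inventory}
print(registry)
# → {'CV screener': 'high-risk', 'Support chatbot': 'limited-risk',
#    'Internal log parser': 'minimal-risk'}
```

A real audit would, of course, test each system against the full Article 5 and Annex III texts rather than keyword sets, but a single registry in this shape feeds Phases 2–3 directly.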
Phase 2: Establish ISO 42001 with Act-Aware Controls (4–16 weeks)
Implement ISO 42001 with explicit Annex III coverage. Ensure risk assessments include all high-risk use cases. Design controls (Statement of Applicability) that document how ISO controls satisfy Act requirements. Build a single documentation repository that covers both frameworks.
Phase 3: Address Act-Specific Gaps (parallel with Phase 2)
Prohibited systems: Cease any systems from Phase 1 audit that fall under prohibited categories. Document the decision in your AI governance records.
Conformity assessment: For high-risk systems, establish a workflow for internal assessment (Annex VI) or notified body engagement. Create a conformity assessment template.
Post-market surveillance and incident reporting: Design a post-market surveillance plan and a serious-incident reporting workflow to the relevant market surveillance authorities.
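As a minimal sketch of the conformity assessment template mentioned above, each high-risk system from Phase 1 could seed a tracking record like the following. The field names and the routing rule are assumptions for illustration, not Act text:

```python
# Hypothetical Phase 3 sketch: open a conformity assessment record per
# high-risk system, choosing internal control (Annex VI procedure) by
# default and a notified body where independent validation is wanted.

from datetime import date

def conformity_record(system_name: str, needs_third_party: bool) -> dict:
    """Seed one conformity assessment record for tracking through Phase 5."""
    return {
        "system": system_name,
        "route": "notified-body" if needs_third_party else "internal-annex-vi",
        "opened": date.today().isoformat(),
        "evidence": [],   # technical file, test results, oversight design, ...
        "status": "open",
    }

rec = conformity_record("CV screener", needs_third_party=False)
print(rec["route"])  # → internal-annex-vi
```

Keeping these records in the same repository as the ISO 42001 documentation means the conformity trail is audit-ready for both frameworks.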
Phase 4: AI Competency & Training (4–12 weeks)
Expand training programmes to cover Act-specific competencies: recognising Annex III use cases, identifying prohibited systems, understanding conformity assessment workflows. Personnel involved with high-risk systems must demonstrate documented understanding of both ISO 42001 and Act obligations.
Phase 5: Alignment & Review (ongoing)
Use a single source of truth for documentation. Tag every document with both ISO 42001 and EU AI Act relevance. When audited (ISO or by regulators), you can respond quickly without scrambling to reconstruct evidence.
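The dual-tagging idea above can be sketched as a tiny documentation index; the tag scheme and document titles are illustrative assumptions:

```python
# Minimal sketch of a single-source-of-truth documentation index: each
# record carries tags for both ISO 42001 and EU AI Act relevance, so one
# repository can answer either audit. All names here are hypothetical.

docs = [
    {"title": "AI risk register",      "tags": {"iso:6.1", "act:art9"}},
    {"title": "Training data lineage", "tags": {"iso:A.7", "act:art10"}},
    {"title": "Conformity report v1",  "tags": {"act:annexVI"}},
]

def evidence_for(framework: str) -> list[str]:
    """Return document titles tagged as relevant to 'iso' or 'act'."""
    return [d["title"] for d in docs
            if any(t.startswith(framework + ":") for t in d["tags"])]

print(evidence_for("iso"))  # → ['AI risk register', 'Training data lineage']
print(evidence_for("act"))  # → all three documents
```

The same query answers an ISO certification audit or a regulator's request without reconstructing evidence twice.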
Master ISO 42001 with Act Alignment
ISO 42001 Lead Implementer Course
Learn how to design risk-based controls, document your AI management system, and map ISO 42001 controls to EU AI Act requirements. 4-day course, self-study from $799, eLearning from $899. Includes 1-on-1 session with Shenoy Sandeep.
EU AI Act Obligations Mapped to ISO 42001 Controls
| EU AI Act Obligation (High-Risk) | ISO 42001 Control | Coverage | Gap / Extension Needed |
| --- | --- | --- | --- |
| Risk Management (Art. 9) | Clauses 6.1, 8.2–8.3; A.5 Impact Assessment | Fully covered | Must explicitly identify Annex III use cases & risks |
| Data & Data Governance (Art. 10) | A.7 Data for AI Systems | Largely covered | Add detail on training/validation/test set governance & bias measurement |
| Human Oversight (Art. 14) | A.9 Use of AI Systems | Fully covered | None — A.9 controls directly support Act requirements |
| Accuracy & Robustness (Art. 15) | Clause 9.1 Monitoring; A.6.2.6 Operation & Monitoring | Largely covered | Add performance benchmarking & drift detection |
| Transparency & Instructions for Use (Art. 13) | Governance — not in Annex A | Not covered | Build separate transparency protocol; inform users that AI is being used |
| Post-Market Surveillance (Art. 72) | A.6.2.6 Monitoring; Clause 10 Corrective Action | Partially covered | Add serious-incident reporting to market surveillance authorities; track serious incidents separately |
| Conformity Assessment (Art. 43 & Annex VI) | Documented controls — not a direct requirement | Not directly covered | Add Act-specific conformity workflow (internal assessment or notified body) |
Frequently Asked Questions
Is ISO 42001 certification sufficient to comply with the EU AI Act?
No — ISO 42001 satisfies roughly 40–50% of the Act's obligations (risk management, documentation, human oversight, data governance). However, it does not address the Act's prohibited practices, does not mandate conformity assessment, does not require serious-incident reporting to market surveillance authorities, and does not include transparency disclosures to end-users. Both frameworks must be implemented in parallel: ISO 42001 as the governance foundation, plus Act-specific workflows for conformity, incident reporting, and prohibited system management.
What happens if we use a prohibited AI system?
You must cease use immediately. The EU AI Act (Article 5) explicitly bans eight categories of AI practice (social scoring, emotion recognition in workplaces and schools, real-time remote biometric identification in public spaces with only narrow law-enforcement exceptions, etc.). No risk controls, ISO certification, or willingness to pay fines will excuse continued use, and the prohibitions have applied since 2 February 2025 with no transition period for existing systems. If your organisation uses a prohibited system, removing it is the first step towards Act compliance.
Can we use an ISO 42001 audit as evidence for EU AI Act compliance?
Yes, with qualifications. An ISO 42001 certificate and your Statement of Applicability demonstrate that you have implemented risk management, documentation, human oversight, and data governance — all expected by the Act. However, EU regulators will also ask for Act-specific evidence: a conformity assessment report, a technical file for high-risk systems, a post-market surveillance plan, incident tracking, and proof that no prohibited systems are in use. The ISO audit is supporting evidence, not a substitute for Act-specific documentation.
What does conformity assessment mean, and how is it different from ISO certification?
Conformity assessment (EU AI Act Article 43 & Annex VI) is the process of verifying that a high-risk AI system meets the Act's requirements — data quality, robustness, human oversight, transparency. It can be done two ways: (1) internal control — your organisation conducts the Annex VI procedure itself and documents it; or (2) notified body assessment — an EU-designated third party conducts it and issues a certificate. ISO 42001 certification is an audit of your management system by an accredited certification body — it certifies that your governance framework exists, not that individual AI systems meet Act requirements. Conformity assessment is system-specific; ISO certification is organisation-wide.
What is the timeline and cost to become EU AI Act compliant?
For a small organisation (1–10 AI systems): ISO 42001 implementation costs €15,000–€50,000 and takes 8–16 weeks. Act-specific work (audit, conformity assessment, post-market surveillance setup) adds €10,000–€30,000 and 4–8 weeks. For a large enterprise (50+ systems): implementation costs €100,000–€400,000+ and takes 16–32 weeks. Notified body assessment for each system adds €20,000–€100,000. Total investment varies widely — contact reconn for a custom roadmap based on your portfolio size.
What fines apply for non-compliance, and does ISO 42001 certification protect us?
The EU AI Act carries significant penalties: up to €35 million or 7% of worldwide annual turnover for prohibited practices; up to €15 million or 3% for most other obligations, including high-risk system requirements; and up to €7.5 million or 1% for supplying incorrect information to authorities. ISO 42001 certification demonstrates good-faith governance and can mitigate penalties by showing compliance effort, but it is not a legal shield. If you use a prohibited system or knowingly deploy non-compliant high-risk systems, no certification will excuse the violation.
Does reconn offer live training on EU AI Act alignment with ISO 42001?
Yes. reconn conducts live online training programmes in small cohorts (3–5 attendees) and 1-on-1 mentorship programmes where we cover the ISO 42001 Lead Implementer certification while directly mapping controls to EU AI Act obligations. This is unique: most training providers teach the standard and the Act in isolation. We align them so you understand exactly how ISO 42001 controls satisfy Act requirements and where gaps remain. Programmes are conducted live online by Shenoy Sandeep, typically in the evenings (18:00–20:00 Central European Time) to accommodate working professionals. This means you learn from someone with 20+ years of cybersecurity and AI governance experience who can answer questions specific to your organisation's AI portfolio. Contact us at hello@reconn.io or WhatsApp +971-585-726-270 to discuss a custom programme.
Are there exemptions for small organisations or startups?
No size exemptions exist. However, the Act acknowledges different capacities: SMEs and startups get priority access to AI regulatory sandboxes (Articles 57–61), where they can develop and test AI systems under regulatory supervision. Additionally, high-risk system requirements scale with complexity — a simple chatbot has fewer documentation requirements than a hiring system. For SMEs with 1–10 AI systems, the 5-phase roadmap described in this article can be completed in 8–12 weeks with modest investment. The European Commission has published SME guidance on the Act, available via the AI Act Service Desk.
Advanced Auditing & Act Compliance
ISO 42001 Lead Auditor Course
Deepen your audit skills and learn how to assess ISO 42001 controls for EU AI Act compliance. 4-day course covers control mapping, high-risk system assessment, conformity readiness, and audit techniques specific to AI governance. Self-study from $899, eLearning from $999. Includes 1-on-1 with Shenoy Sandeep.
End-to-end governance. reconn delivers a complete implementation: AI system audit, ISO 42001 deployment with Act-aware controls, conformity assessment setup, post-market surveillance planning, and competency training. Tailored to your organisation size and AI portfolio. Direct engagement with Shenoy Sandeep.
Shenoy Sandeep is the Founder of reconn, an AI-first cybersecurity firm based in Dubai, UAE — assisting startups and enterprises scale across the Middle East and African region. With 20+ years across offensive security, threat intelligence, and enterprise risk, and over 10 years in Enterprise AI, AI governance, and Business Continuity, he brings a practical, execution-driven approach to AI governance and information security.
He is a PECB-certified trainer and one of the world's early PECB-certified AI professionals, specialising in ISO/IEC 27001, ISO/IEC 42001, ISO 22301, and ISO 9001.