Understanding the EU AI Act and ISO 42001: What organisations need to know about compatibility and implementation

The EU AI Act, which came into force on 1 August 2024, introduced the first comprehensive, harmonised regulatory framework for managing AI systems ethically and responsibly. Before the Act, the closest set of robust guidelines in existence was ISO 42001, which has a similar overarching goal.

If your organisation has already implemented ISO 42001, you might have a head start in achieving EU AI Act compliance. In this article, we explain why this is the case by covering:

  • The purpose and scope of the EU AI Act and ISO 42001
  • The complementary and harmonious relationship between the two frameworks
  • Steps and strategies to approach compliance with both standards

EU AI Act and ISO 42001: Similarities and differences

The EU AI Act and ISO 42001 aim to ensure safe and responsible development, implementation, and use of AI systems. Still, they approach this goal differently.

The EU AI Act is a mandatory regulation that applies to all EU-based organisations and to those outside the EU that provide AI systems or services within it. Meanwhile, ISO 42001 is an international, voluntary standard with recommended best practices for building a comprehensive AI management system (AIMS).

Another considerable difference is the certification type:

  • ISO 42001 is a certifiable standard, and an obtained certificate is valid for three years
  • The EU AI Act requires only self-attestation, with re-attestation needed only if significant changes are made to the AI system

Even though ISO 42001 is a certifiable standard, certification is voluntary, and organisations are not mandated to achieve it. By contrast, the EU AI Act carries considerable legal weight, so non-compliance can lead to substantial fines and penalties.

Despite these differences, the shared goal of the EU AI Act and ISO 42001 results in notable overlaps between these frameworks.

The relationship between the EU AI Act and ISO 42001

The EU AI Act and ISO 42001 have around 40%–50% overlap in high-level requirements. Both frameworks cover several important aspects of responsible AI system development and implementation, such as:

Data governance: Article 10 of the EU AI Act outlines various data governance requirements regarding data categorisation and bias detection. Similarly, ISO 42001 focuses on bias detection and mitigation and calls for clearly defined roles responsible for AIMS oversight, which should encompass effective data governance.

Risk management: The main pillar of the EU AI Act is the classification of risks into four categories (unacceptable, high, limited, and minimal) and the different treatment of AI systems depending on their risk level. ISO 42001 offers a clear framework for effective risk assessment, which helps categorise different AI system risks and manage them accordingly.
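As an illustration only, the four-tier classification can be sketched as a simple mapping. The use-case names and their assigned tiers below are hypothetical examples; a real classification must follow the Act's own definitions and annexes, not a lookup table:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations and conformity assessment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Hypothetical mapping of internal use cases to risk tiers.
USE_CASE_TIERS = {
    "social-scoring": RiskTier.UNACCEPTABLE,
    "cv-screening": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

def required_treatment(use_case: str) -> str:
    """Return the broad treatment each tier implies (simplified sketch)."""
    # Default conservatively to HIGH for unclassified systems.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return {
        RiskTier.UNACCEPTABLE: "prohibited: discontinue the system",
        RiskTier.HIGH: "conformity assessment before deployment",
        RiskTier.LIMITED: "meet transparency obligations",
        RiskTier.MINIMAL: "no extra obligations",
    }[tier]
```

Defaulting unknown systems to the high-risk tier reflects the cautious stance a compliance programme would typically take until a system has been formally classified.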

Human oversight: As per Article 14 of the EU AI Act, AI systems should be developed to enable ongoing human oversight, with specific measures corresponding with the risk level. ISO 42001 aligns with this requirement, mainly by recommending the detailed documentation of AI processes for increased transparency and easier oversight.

Ethical implications: Both the EU AI Act and ISO 42001 emphasise the importance of ethical use of AI systems, which includes fairness in decision-making, bias mitigation, and other measures that prevent harmful effects of AI implementation.

High-risk AI systems: ISO 42001 provides practical guidelines for detecting and discontinuing AI systems that breach EU AI Act prohibitions, including untargeted facial recognition or biased decision-making algorithms.

These overlaps mean your team can reuse controls put in place while pursuing ISO 42001 certification, simplifying compliance with the EU AI Act.

How to approach compliance with ISO 42001 and the EU AI Act

If you’ve already obtained an ISO 42001 certificate, the first step toward EU AI Act compliance is to cross-reference your existing controls with the Act’s requirements. You can then identify all compliance gaps that require remediation to ensure adherence to the Act.
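The cross-referencing step can be sketched as a simple set comparison between your implemented controls and the Act's requirements. The control identifiers below are hypothetical placeholders, not the actual ISO 42001 Annex A controls or the Act's articles:

```python
# Hypothetical identifiers for controls already implemented under ISO 42001.
iso_42001_controls = {
    "data-governance", "bias-monitoring", "risk-assessment",
    "human-oversight", "incident-logging",
}

# Hypothetical identifiers for what the EU AI Act requires of your system.
eu_ai_act_requirements = {
    "data-governance", "bias-monitoring", "risk-assessment",
    "human-oversight", "post-market-monitoring", "conformity-assessment",
}

# Controls you can reuse as-is, and gaps that need remediation.
reusable = iso_42001_controls & eu_ai_act_requirements
gaps = eu_ai_act_requirements - iso_42001_controls

print(sorted(gaps))  # → ['conformity-assessment', 'post-market-monitoring']
```

In practice this mapping lives in a GRC tool or spreadsheet rather than code, but the logic is the same: intersect to find reusable controls, subtract to find the remediation backlog.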

If you haven’t achieved ISO 42001 compliance, you can choose whether to implement it first or focus on the EU AI Act directly. Since the Act is comprehensive and mandatory, prioritising it might be the more practical option.

This doesn’t mean you should skip ISO 42001 compliance altogether: becoming certified lets you build a robust AIMS that helps future-proof your AI-related operations. It can also give you a notable competitive advantage because it shows commitment to responsible AI use beyond the mandatory regulations.

Keeping this in mind, combining ISO 42001 certification with EU AI Act compliance is the most comprehensive way to develop and implement AI responsibly. To help, we’ll go over the high-level processes of complying with both standards.

How to obtain an ISO 42001 certificate

To become ISO 42001-certified, organisations are advised to undertake the following steps:

Understand the principles and requirements: ISO 42001 has 10 clauses, seven of which (Clauses 4–10) outline the specific requirements you must meet to get certified. It also includes four annexes with detailed guidance you can use to implement the necessary controls.

Conduct a gap analysis: Analyse your current or prospective AI system to see how it aligns with ISO 42001 requirements. Some of the key aspects you’ll need to review include roles and responsibilities, data and resources used to build the system, and the impact of AI systems on stakeholders and your broader environment. Use the findings to develop a strategy for closing the gaps and achieving compliance.

Build your AIMS: Go through the ISO requirements to develop the policies, procedures, and practices that will be encompassed by your AIMS to ensure ongoing compliance with the prescribed standards.

Document your processes: Document the implementation of the relevant controls to ensure transparency and clear oversight of your AI processes.

Continuously monitor and improve: Continuously monitor and review your AIMS to identify opportunities for the improvement of its suitability, adequacy, and effectiveness.

How to achieve EU AI Act compliance

While the specific steps to achieving EU AI Act compliance depend on the current state of your AI systems, the general process consists of the following steps:

Assess the Act’s impact on your organisation: Use an EU AI Act compliance checker or a specialist GRC partner to determine precisely how the Act affects your organisation.

Review and document your AI practices: Perform a comprehensive assessment of your current AI systems, documenting the related policies and practices to make the relevant information readily available to auditing bodies.

Perform a conformity assessment: If your AI system is classified as high-risk, conduct a conformity assessment to bridge any compliance gaps related to transparency, risk management, record-keeping, and other relevant requirements.

Submit your EU Declaration of Conformity: After ensuring EU AI Act compliance, submit an EU Declaration of Conformity in physical or electronic form.

Conduct post-market monitoring and reassessment: Develop a system for continuously monitoring and reporting your AI system’s performance and adherence to the EU AI Act.
