
ISO/IEC 42001 and the EU AI Act: A Compliance Guide

Introduction to ISO/IEC 42001 and the EU AI Act

In the rapidly evolving landscape of Artificial Intelligence (AI), regulatory compliance has become a critical concern for companies that design, implement, train, offer, and use AI models in their core business processes. The European Union's AI Act represents a significant step towards ensuring that AI technologies are developed and deployed responsibly, focusing on safety, transparency, and accountability. However, navigating the complexities of this legislation can be challenging for the organizations that employ AI models in those processes.

This article explores the interrelation between the EU AI Act and ISO/IEC 42001, illustrating how the latter can serve as a valuable tool for achieving regulatory compliance with the former. We will not delve into the key provisions and core structures of both; the objective is rather to highlight how the standard's framework supports the legislative goals. Whether you are an AI developer, a business leader, or a compliance officer, understanding this synergy is essential for leveraging AI technologies while adhering to legal and ethical standards.

Understanding the EU Artificial Intelligence Act

The AI Act is a European regulation on Artificial Intelligence (AI) – the first comprehensive regulation of AI by a major regulator anywhere. The Act assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned in the EU. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated. 
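To make this tiering tangible for a technical audience, the short Python sketch below models the three categories and the treatment attached to each. It is an illustrative simplification only: the tier names, example use cases, and the regulatory_treatment helper are assumptions made for this sketch, not a legal classification under the Act.

```python
from enum import Enum

# Simplified sketch of the AI Act's risk tiers described above; the tier
# labels and example use cases are illustrative, not a legal classification.
class RiskTier(Enum):
    UNACCEPTABLE = "banned in the EU"
    HIGH = "subject to specific legal requirements"
    MINIMAL = "largely unregulated"

# Hypothetical lookup of example applications to tiers, for illustration only.
EXAMPLE_USE_CASES = {
    "government-run social scoring": RiskTier.UNACCEPTABLE,
    "CV-scanning tool ranking job applicants": RiskTier.HIGH,
    "spam filter": RiskTier.MINIMAL,
}

def regulatory_treatment(use_case: str) -> str:
    """Return the (illustrative) regulatory treatment for a known use case."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(regulatory_treatment(case))
```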

AI applications influence what information you see online by predicting which content engages you, capture and analyze facial data to enforce laws or personalize advertisements, and are used to diagnose and treat cancer. In other words, AI affects many parts of your life and is present almost everywhere today. Like the General Data Protection Regulation (GDPR), the AI Act could become a global standard, determining to what extent AI has a positive rather than negative effect on your life wherever you are.

Overview of ISO/IEC 42001:2023

ISO/IEC 42001 is an international standard that provides a framework for AI Management Systems (AIMS). The standard is designed to help organizations streamline their compliance efforts with the EU AI Act: it specifies requirements for establishing, implementing, maintaining, and continually improving an AIMS within organizations, and it is intended for entities providing or utilizing AI-based products or services, ensuring the responsible development and use of AI systems. By adopting ISO/IEC 42001, companies can establish robust processes and controls that align with the regulatory requirements, thereby simplifying the path to compliance.

It is the world’s first AI Management System (AIMS) standard, providing valuable requirements for this rapidly changing technology field. It addresses the unique challenges AI poses, such as ethical considerations, transparency, and continuous learning. For organizations, it sets out a structured way to manage risks and opportunities associated with AI, balancing innovation with governance. 

Interrelation Between the Act and the Standard

The interrelation between the EU AI Act and the international standard ISO/IEC 42001 lies in their complementary structures, where the legislative requirements of the AI Act are supported by the detailed requirements provided by ISO/IEC 42001. This synergy helps organizations achieve and demonstrate compliance with the AI Act through a structured approach to managing AI systems. The AI Act is structured to establish a legal framework for the development, deployment, and use of AI systems within the European Union.

ISO/IEC 42001, in turn, provides a framework for AI management systems, focusing on the lifecycle of AI systems from design and development, through training and testing, to deployment and use. Many of its requirements originate from the respective articles and clauses of the AI Act. However, grasping those requirements quickly from the text of the Act itself calls for a solid legal background. A technology expert who is familiar with integrated Management Systems (iMS), such as ISO 9001 or ISO/IEC 27001, may extract such requirements more efficiently from a structured iMS standard.

Table 1 below summarizes a selection of “overlaps” between the AI Act requirements and ISO/IEC 42001 controls and illustrates how the latter guides and supports the organization's AI officer toward the objective of compliance with the former.

Key Overlaps Between EU AI Act Requirements and ISO/IEC 42001

By aligning their AI management practices with ISO/IEC 42001, organizations can create a robust framework that meets the requirements of the EU AI Act and enhances the overall quality and reliability of their AI systems. This alignment simplifies the compliance process and provides clear evidence of adherence to regulatory standards.
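As an illustration of how such an alignment can be operationalized, the following Python sketch pairs a few requirement themes commonly associated with the AI Act with the kind of AIMS evidence an ISO/IEC 42001-style system would hold. The themes, artifact names, and the compliance_evidence helper are hypothetical examples for this article; they do not reproduce the official text or control numbering of either document.

```python
# Illustrative sketch only: a hypothetical mapping of AI Act requirement themes
# to the kinds of AIMS artifacts an ISO/IEC 42001-style system would hold.
# Neither side reproduces the official text or control numbering.
REQUIREMENT_TO_AIMS_ARTIFACT = {
    "risk management": "documented AI risk assessment and treatment plan",
    "data governance": "training-data provenance and quality records",
    "technical documentation": "AI system description and design records",
    "transparency": "user-facing information and instructions for use",
    "human oversight": "defined oversight roles and escalation procedures",
    "accuracy and robustness": "test, validation, and monitoring reports",
}

def compliance_evidence(theme: str) -> str:
    """Point an auditor or AI officer at the AIMS artifact covering a theme."""
    artifact = REQUIREMENT_TO_AIMS_ARTIFACT.get(theme)
    if artifact is None:
        return f"No AIMS artifact registered for '{theme}' - gap to address."
    return f"'{theme}' is evidenced by: {artifact}"

if __name__ == "__main__":
    print(compliance_evidence("human oversight"))
    print(compliance_evidence("post-market monitoring"))
```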

The Pharmaceutical Cabinet Analogy for AI Compliance

Take a pharmaceutical cabinet with many drawers (cf. Figure 1) as an analogy: ISO/IEC 42001 can be seen as a systematically ordered location for holding the AI-related artifacts that the EU AI Act requires to be present. Drawing this analogy between the AI Act requirements, ISO/IEC 42001, and a pharmaceutical cabinet helps illustrate the tandem-like concept more clearly.

When the AI Act sets forth requirements for AI systems to ensure they are safe, transparent, and accountable, these requirements are systematically organized according to the structure proposed by ISO/IEC 42001, which provides a framework for managing AI systems effectively. Imagine a pharmaceutical cabinet in a hospital designed to store various medications in a safe, organized, and accessible manner. Each drawer in the cabinet is labeled and contains specific types of medications, ensuring that healthcare professionals can quickly find and retrieve the necessary drugs. Similarly, by organizing AI artifacts according to the structure of the integrated management system outlined in ISO/IEC 42001, the AI officer in an organization can swiftly identify and retrieve the required artifacts. It also gives a full and detailed overview of which artifacts are well done, require improvement, are missing, or might not be applicable.
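For readers who prefer a concrete sketch of this "labeled drawers" idea, the minimal Python example below files each AI artifact under a management-system topic and tracks whether it is well done, requires improvement, is missing, or is not applicable. The topic names, artifact names, and overview helper are hypothetical illustrations, not quotations from ISO/IEC 42001.

```python
from dataclasses import dataclass
from enum import Enum

# Sketch of the "labeled drawers" idea: each AI artifact is filed under a
# management-system topic and carries a status, so the AI officer can see at
# a glance what is well done, needs improvement, is missing, or does not
# apply. Topics and artifacts are hypothetical examples, not ISO/IEC 42001 text.
class Status(Enum):
    COMPLETE = "well done"
    NEEDS_IMPROVEMENT = "requires improvement"
    MISSING = "missing"
    NOT_APPLICABLE = "not applicable"

@dataclass
class Artifact:
    topic: str      # the "drawer" label, e.g. an AIMS topic
    name: str       # the artifact stored in that drawer
    status: Status

cabinet = [
    Artifact("AI policy", "approved AI policy document", Status.COMPLETE),
    Artifact("Impact assessment", "AI system impact assessment", Status.NEEDS_IMPROVEMENT),
    Artifact("Data governance", "training-data provenance log", Status.MISSING),
    Artifact("Third-party AI", "supplier AI due-diligence file", Status.NOT_APPLICABLE),
]

def overview(artifacts: list[Artifact]) -> None:
    """Print a drawer-by-drawer overview of artifact readiness."""
    for a in artifacts:
        print(f"[{a.topic}] {a.name}: {a.status.value}")

if __name__ == "__main__":
    overview(cabinet)
```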

In summary, just as a pharmaceutical cabinet with systematically organized drawers ensures the safe, transparent, and efficient management of medications, the management of AI compliance, structured according to ISO/IEC 42001, ensures that AI systems can be managed in a safe, transparent and accountable manner, as required by the EU AI Act. 

Figure 1: Pharmaceutical cabinet illustrating the analogy of ISO/IEC 42001 systemizing the AI artifacts required by the AI Act so that they can be effectively managed in order to demonstrate compliance. (Source: Wikipedia)

However, it should be noted that, just as an empty or improperly filled cabinet will not be sufficient for the proper operation of a pharmacy or a hospital, an AI Management System populated with improperly compiled AI artifacts will not lead to compliance with the EU AI Act. In that tandem, both parts are strongly related.

Conclusion: The Synergy of ISO/IEC 42001 and the EU AI Act

ISO/IEC 42001 provides a structured framework that aids companies in achieving compliance with the EU AI Act. By aligning with ISO/IEC 42001, organizations can effectively manage the lifecycle of AI systems, ensuring adherence to regulatory requirements. This synergetic tandem simplifies the compliance process for companies involved in designing, implementing, training, offering, and using AI models, thereby fostering responsible and ethical management of AI systems employed in the organization’s core business processes, as required by the EU AI Act. 

The AI Act, with its robust regulatory framework, aims to mitigate risks associated with AI while encouraging trustworthy AI practices. Complementing this, ISO/IEC 42001 will provide a standardized approach to AI Management Systems, ensuring consistency, quality, and safety across AI applications. Together, these frameworks will drive the future development of AI technology by (i) ensuring ethical compliance and trustworthiness, (ii) promoting innovation and competitiveness, (iii) enhancing global collaboration and interoperability, (iv) strengthening risk management and security and (v) empowering stakeholders through education and awareness. The synergy between the EU AI Act and ISO/IEC 42001 will shape the future trajectory of AI development and ensure that it is aligned with societal values, ethical principles, and global standards. 

About the Author

""

Roman Krepki

Roman Krepki graduated in Computer Science in 2000 from the Technical University of Berlin, specializing in AI algorithm development and neural networks. He lectured in several practical AI courses and worked as a research associate on neural algorithmics. During the design and development of a Brain-Computer Interface (BCI) at Fraunhofer Institute FIRST from 2000 to 2004, the term "Brain-Gaming" was coined in his doctoral thesis, with a focus on the evaluation of different computer game control strategies, VR/AR-based scenarios, and neurophysiological paradigms. Joining Accenture in 2005 as a Technology Consultant, Roman built up his experience in the fields of cyber-security, risk management, BCM, and IT audit, and was an Executive Manager for the BCM practice in ASG. From 2011 on, he held the role of Information Security and Data Protection Officer at Robert Bosch Automotive Electronics division worldwide. Today, he is associated with Forvis Mazars, the world's 8th largest audit and consulting company, located in Stuttgart, and leads its Cyber-Security Practice in Germany, specializing inter alia in customers from the healthcare industry sector. Recent advancements in AI management legislation sparked Roman's interest and called for the employment of his expertise in integrated Management Systems (iMS), such as ISO/IEC 42001 in interrelation with ISO/IEC 27001 and ISO 22301. He holds several cybersecurity, AI, and healthcare-relevant certifications.
