The impact of the EU’s AI Act on the medical device sector


The use of artificial intelligence (AI) in medical devices has grown rapidly in recent years and has become an integral part of MedTech development. AI systems can analyse large quantities of data and recognise patterns, improving diagnostics and patient treatment. However, the use of AI also raises legal and ethical questions. For this reason, the European Union has adopted the first comprehensive legal framework for artificial intelligence (the AI Act), which has significant implications for the medical device industry.

The regulation is not limited to medical devices but in principle covers any product which falls under the definition of an AI system. Its overarching aim is to protect the safety, environmental sustainability and fundamental rights of individuals. Based on this guiding principle, the AI Act divides its scope of application into four risk categories: the greater the risk posed by a system, the higher its category.

The EU places medical devices in the third category, ‘high risk’. In line with the established Medical Device Regulation (MDR), this category includes all products in MDR risk class IIa or higher, as well as some medical devices in MDR risk class I. ‘High risk’ is the highest category of AI system the Act permits; only prohibited AI systems rank above it. The rules adopted are correspondingly stringent: providers must ensure compliance with strict standards for risk management, data quality, transparency, human oversight and robustness. For medical devices, this means that manufacturers will have to comply with further requirements in addition to those of the MDR and, where applicable, the In-Vitro Diagnostic Regulation (IVDR). These obligations begin as early as the development process and extend through the conformity assessment procedure to post-market safety.

At first glance, the requirements of the AI Act appear to coincide with the catalogue of obligations under the MDR, but in some cases their content differs considerably. In addition to the familiar obligations, manufacturers of medical devices must ensure that the AI-based components of their products are robust and protected against external cyberattacks, and that any errors which do occur can be detected immediately, brought under human control and traced within the system.

Manufacturers of high-risk AI systems must also subject their products to conformity assessment procedures under the AI Act. Extensive conformity assessments by notified bodies are already common in medical technology, and even before the AI Act was passed, notified bodies were under considerable pressure: the MDR alone involves a substantial control effort, which is now supplemented by further administrative work. Medical device manufacturers must therefore ensure that, when seeking approval for an AI-equipped product, they engage a notified body with the appropriate designation. In future, notified bodies will need expertise in AI systems that is not yet widely available, so significant delays and cost increases are to be expected from the even more extensive conformity assessments. In return, the EU-wide harmonisation brought about by the AI Act should give ‘Made in Europe’ a unique standing in AI technology.

To ensure comprehensive error control, the EU imposes technical documentation and logging obligations on manufacturers. A quality management system is also required. Although the MDR already prescribes such a system for medical device manufacturers, the AI Act requires it to be adapted when artificial intelligence is used. This applies in particular to new, comprehensive requirements on data processes carried out in advance and for the purposes of placing on the market. Risk management must also be reorganised in light of the new objectives: while the MDR has so far focussed on safety-related risks, the AI Act primarily aims at more extensive protection of the fundamental rights of individuals.

Small and medium-sized companies in the MedTech sector in particular now face additional control and quality assurance tasks on top of the MDR requirements. The standards to be met will not be lowered, in order to guarantee comprehensive protection where high-risk AI systems are concerned. The EU Commission therefore provides for other forms of relief in the development and approval process for these systems: digital and physical facilities are to be established in the EU Member States where companies can develop and test technologies under official supervision. Further support is to be provided through reduced fees for conformity assessments and through specialised training.

The first provisions of the AI Act will apply in stages, beginning six months after promulgation, with the final provisions taking effect 30 months later. It remains to be seen to what extent the EU will publish further guidance on implementation and how the assessment bodies will respond to the additional workload. Further rules on liability for the use of AI have already been announced. What is certain, however, is that AI systems will now be comprehensively regulated for the first time; the impact on the market is less clear. In the field of medical technology in particular, the Act could foster widespread innovation and increase patient confidence in AI-based MedTech. In any case, MedTech companies must prepare for the new compliance requirements.

Dr. Christoph von Burgsdorff, LL.M. (Essex)
+49 40 18067 12179