Medical device manufacturers should use a risk pyramid to determine whether their products are classified as high-risk and require conformity assessments by notified bodies under the EU’s Artificial Intelligence Act, according to Sebastian Fischer, regulatory strategy principal at TÜV SÜD Product Service GmbH.
Fischer and other experts discussed what manufacturers must do to comply with upcoming requirements under the act at DIA Europe 2025 on Wednesday.
The AI Act entered into force on 1 August 2024. Most of its provisions apply from 2 August 2026, while the rules for high-risk AI systems embedded in products covered by EU harmonisation legislation, which includes medical devices in Class IIa or higher, apply from 2 August 2027. Full implementation is anticipated by 31 December 2030.
Fischer stated that the EU AI Act introduces a risk-based system for classifying AI applications, ranging from minimal-risk devices at the bottom of the pyramid to unacceptable-risk systems at the top. He said any system that uses harmful AI-based manipulation or deception, or that performs social scoring, falls into the unacceptable-risk category and is prohibited.
Medical devices are classified as high risk if they incorporate AI-enabled software used to diagnose or detect abnormalities; such devices require assessment by a notified body.
Limited-risk devices include those not used to diagnose or detect abnormalities; these are subject to specific transparency obligations under the AI Act. At the bottom of the pyramid are minimal-risk devices, which are unregulated and not explicitly mentioned in the AI regulations.
Fischer noted that while AI regulation may be new, the use of AI in medical devices is not.
Fischer recommended that manufacturers review a position paper from Team-NB and the German Notified Bodies Alliance for Medical Devices entitled Questionnaire: Artificial Intelligence in Medical Devices.
Thorsten Stumpf, project lead for regulatory affairs at Metecon GmbH, elaborated on the requirements for providers of high-risk AI systems beyond the conformity assessment that must be completed before a product is placed on the market or put into service.
Manufacturers must adhere to labeling requirements, which include providing identification on the packaging: the manufacturer’s name, registered trade name, trademarks, contact address, and CE marking.
In addition, risk management teams should be in place to assess the AI system. Lastly, manufacturers must maintain technical documentation, including record-keeping logs over the lifetime of the AI system.
Fischer said that implementing the AI regulations is mostly an administrative exercise in getting the documentation in order.
Rajarshi Banerjee, CEO of Perspectum Ltd, emphasized the need to prepare for a new wave of applications utilizing advanced AI tools in medical devices. Many of these devices herald a future focused on the use of non-invasive technology to diagnose diseases.
His company has received FDA clearance and EU CE marking for its LiverMultiScan software application, a non-invasive test used to detect liver disease. He explained how the technology’s algorithms can replace liver biopsies, which he said are often inaccurate.
The technology can determine through an MRI scan whether the whole liver or only part of it is diseased. Scans from the MRI are sent to a laboratory that employs a proprietary algorithm, and the result is a summary of images detailing the health of the liver.

RAPS.org