In the U.S., the Food and Drug Administration (FDA) has been proactive in establishing guidelines for artificial intelligence (AI) and machine learning (ML) in medical devices. In January 2021, the FDA released its “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan,” the first federal framework providing a clear regulatory approach for AI/ML-based SaMDs. Shortly after, in October 2021, the FDA published a joint paper detailing guiding principles for developing safe and effective medical devices that incorporate AI/ML. Then, in June 2024, the FDA announced “Transparency for Machine Learning-Enabled Medical Devices (MLMDs): Guiding Principles,” offering additional guidance on transparency in MLMDs.
While the FDA’s full AI/ML regulations are still in development, key principles are emerging. A primary focus is ensuring that AI technologies provide real value to patients, distinguishing genuine advancement from hype. These principles stress AI’s role in supporting safe and effective patient care while upholding “human-in-the-loop” oversight, keeping humans responsible for reviewing and confirming AI outputs.
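To make the “human-in-the-loop” principle concrete, the sketch below shows one way a diagnostic system might gate its own outputs so that a clinician always remains in the loop. It is a minimal illustration, not a pattern prescribed by any FDA guidance; the names and the confidence threshold are hypothetical choices.

```python
from dataclasses import dataclass

# Illustrative only: the names and the threshold below are hypothetical,
# not taken from FDA guidance.
REVIEW_THRESHOLD = 0.90  # below this, the finding must go to a clinician

@dataclass
class Finding:
    label: str         # e.g. "stenosis detected"
    confidence: float  # model's estimated probability, 0.0 to 1.0

def route_finding(finding: Finding) -> str:
    """Route an AI finding so a human always stays in the loop:
    high-confidence results still require clinician sign-off, and
    low-confidence results are queued for full human review."""
    if finding.confidence >= REVIEW_THRESHOLD:
        return "report-with-clinician-signoff"
    return "queue-for-human-review"

print(route_finding(Finding("stenosis detected", 0.97)))  # report-with-clinician-signoff
print(route_finding(Finding("stenosis detected", 0.62)))  # queue-for-human-review
```

Note that even the high-confidence path ends in a clinician sign-off; under a human-in-the-loop design, the model never closes the diagnostic loop on its own.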
For organizations working with AI/ML in medical devices, it’s essential to recognize that AI regulations complement existing medical device legislation. In addition to complying with traditional medical device regulations, quality assurance and regulatory affairs professionals must now navigate the additional layer of AI/ML-related guidelines, making regulatory oversight even more critical in the development of advanced medical device technologies.
The FDA’s approach to regulating AI in medical devices is heavily influenced by existing frameworks such as the 510(k) and Premarket Approval (PMA) pathways. The 510(k) process allows faster clearance for devices that are substantially equivalent to existing predicate devices. In contrast, the PMA pathway is reserved for high-risk or groundbreaking devices, particularly those introducing new technology, and requires more extensive testing and clinical data to ensure safety and efficacy. For AI-driven technologies, particularly those introducing novel concepts, the PMA process demands comprehensive evidence to build confidence in their performance and safety.
For AI-powered medical software, the choice between the 510(k) and PMA pathways depends largely on the product’s degree of innovation and associated risk. AI systems, such as those designed to analyze angiograms or diagnose cardiac conditions, must demonstrate clinical efficacy; even when these systems achieve high diagnostic accuracy, regulatory scrutiny often focuses on the potential consequences of errors, such as misdiagnosis. Identifying the right regulatory pathway for an AI device can be challenging, adding complexity to the approval process and increasing uncertainty for developers.
One interesting development is the FDA’s exploration of real-world evidence (RWE) in regulatory submissions. In December 2023, the FDA published the draft guidance “Use of Real-World Evidence to Support Regulatory Decision-Making for Medical Devices,” which clarifies how to evaluate whether real-world data (RWD) is “of sufficient quality” for generating RWE. These guidelines address the wealth of data available to researchers and developers and allow regulatory bodies to weigh its value. Regulations supporting RWE are directly tied to AI, since these systems and algorithms inherently rely on RWD to function. The FDA’s approach here is highly relevant to the development of standards for AI use in medical devices.
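The draft guidance frames “sufficient quality” in regulatory terms (relevance and reliability), but developers can operationalize part of that assessment before a submission. The sketch below is a hypothetical pre-screen of an RWD extract; the required fields, the checks and the thresholds are illustrative assumptions, not the FDA’s criteria.

```python
import pandas as pd

# Hypothetical pre-screen of real-world data (RWD) quality; the fields and
# checks below are illustrative, not drawn from the FDA draft guidance.
REQUIRED_FIELDS = ["patient_id", "age", "sex", "diagnosis_code", "outcome"]

def rwd_quality_report(df: pd.DataFrame) -> dict:
    """Return simple completeness and plausibility signals for an RWD extract."""
    report = {}
    # Completeness: share of missing values in each required field
    for field in REQUIRED_FIELDS:
        if field not in df.columns:
            report[field] = "missing column"
        else:
            report[field] = f"{df[field].isna().mean():.1%} missing"
    # Plausibility: flag out-of-range ages as a basic reliability check
    if "age" in df.columns:
        report["implausible_age_rows"] = int(((df["age"] < 0) | (df["age"] > 120)).sum())
    return report

sample = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "age": [54, None, 130],
    "sex": ["F", "M", "F"],
    "diagnosis_code": ["I25.1", "I25.1", None],
    "outcome": ["improved", "stable", "improved"],
})
print(rwd_quality_report(sample))
```

Automated checks like these cannot establish that RWD is “of sufficient quality” on their own, but they give quality assurance teams an auditable starting point for the evaluation the guidance describes.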
The FDA’s draft guidelines also align with the principles outlined in the October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The executive order emphasizes reducing bias by ensuring the use of diverse data, clear labeling and risk assessments. Aligning regulations around these factors will allow technologies to perform well while maintaining safety and equity.
Internationally, other countries are also introducing regulations and guidelines addressing AI in medical devices. For example, implementation of the EU AI Act presents the challenge of reconciling its requirements with Europe’s existing medical device regulations. This growing patchwork also calls for increased collaboration between countries. In recognition of that challenge, the FDA, the UK’s Medicines and Healthcare products Regulatory Agency and Health Canada jointly released the updated transparency guiding principles for MLMDs noted above. Specifically, these guidelines call attention to the transparency and usability of MLMDs and to the practice of “human-centered design.” To improve harmonization and make expectations around AI more predictable, international agencies will need to continue this level of collaboration.
In the future, AI will play a significant role in powering the next generation of medical devices, so the ability to ask the right questions during design and quality assurance is critical. Ensuring that medical devices are safe and effective and, in the case of AI devices, developed in a way that minimizes the impact of data and human bias is paramount. That means training on diverse datasets that reflect a broad range of demographics and ensuring that devices operate in a transparent and verifiable manner. A key open question is whether self-learning, continuously evolving algorithms can be trusted to diagnose patients. The challenge lies in validating such systems, which is why emerging legislation focuses on balancing the capabilities and benefits of AI against the associated risks.
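As one concrete example of what a diverse-dataset check can look like in practice, the sketch below flags demographic groups that are under-represented in a training set before model development begins. The function name, the attribute and the 10% floor are hypothetical illustrations, not requirements from any regulation or guidance.

```python
from collections import Counter

# Illustrative sketch of a training-data demographic audit; the 10% floor
# is a hypothetical choice, not a regulatory requirement.
def coverage_gaps(records: list[dict], attribute: str, min_share: float = 0.10) -> list[str]:
    """Flag demographic groups whose share of the training data falls
    below min_share, so under-representation surfaces before training."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return [group for group, n in counts.items() if n / total < min_share]

dataset = [{"sex": "F"}] * 80 + [{"sex": "M"}] * 15 + [{"sex": "other"}] * 5
print(coverage_gaps(dataset, "sex"))  # ['other'] is flagged for follow-up
```

A flagged group does not automatically mean the dataset is unusable, but documenting and acting on such gaps is one way developers can evidence the bias-minimization and transparency expectations described above.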