On October 14, 2021, FDA’s Digital Health Center of Excellence (DHCoE) held a public workshop on the transparency of artificial intelligence/machine learning (AI/ML)-enabled medical devices. The workshop followed the recently published list of nearly 350 AI/ML-enabled medical devices that have received regulatory approval since 1997, and aimed to advance the DHCoE’s objective to “empower stakeholders to advance healthcare by fostering responsible and high-quality digital health innovation.” The DHCoE was established in 2020 within FDA’s Center for Devices and Radiological Health (CDRH) under the leadership of Bakul Patel.
The October workshop focused on “transparency” as the overarching theme, guiding both the development of a suitable regulatory process and the wide adoption of such devices by healthcare professionals and patients. In this context, the term transparency covered critical considerations underlying risk management of AI/ML-enabled medical devices, as well as other factors impacting future adoption.
The workshop incorporated perspectives from the agency itself, industry, healthcare providers/systems and patients, and attracted nearly 4,000 registrants. The diversity of opinions and presentations highlighted the complexity and challenges associated with risk management and with developing a suitable approval process for devices incorporating self-learning algorithms.
The summary below highlights select high-level findings from the analysis of the published list of currently approved AI/ML-enabled medical devices, and the viewpoints shared as part of the FDA workshop, along with select implications for medical devices and diagnostic manufacturers. (For a more thorough understanding of the topics of discussion and assessment of implications, please access the FDA recording and presentations directly.)
The recently published list of nearly 350 AI/ML-enabled medical devices approved by FDA shows that imaging/diagnostic technologies lead the integration of algorithms into clinical decision-making. As of mid-2021, 70 percent of listed approvals to date are in radiology, followed by cardiology with 12 percent, and 3 percent each for hematology and neurology applications (see figure).
Source: Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices | FDA
Also unsurprising is the acceleration of approvals in recent years. Following fewer than 30 approvals between 1997 and 2015, submissions and approvals have accelerated, reaching approximately 100 individual approvals in 2020 alone. This number is expected to continue to increase in parallel with further development of AI/ML-based applications and opportunities across a broader set of disease areas and device categories. From an industry perspective, the data show the broader GE and Siemens organizations leading the way, with 22 and 18 approvals to date, respectively.
FDA recognizes “the need for careful oversight to ensure the benefits of these advanced technologies outweigh the risks to patients,” while collaborating with stakeholders and building “partnerships to accelerate digital health advances.” FDA references four key considerations underlying these advancements: usability, trust, equity and accountability — themes that were raised consistently throughout the workshop by various stakeholders.
Usability: Recommendations were made that guidelines and approval processes be adjusted based on the intended use case of the enabled device. Reimbursement for the associated data analysis also remains a key factor influencing usability for healthcare professionals (HCPs), along with out-of-pocket cost considerations for patients.
Trust: Healthcare professionals and patients need to “trust” the device, requiring insights and sharing of the “right information” with the “right stakeholder”:
Regulatory considerations: the logic behind an algorithm’s decision-making, an understanding of the data used to train and test it, and the metrics behind its development, among others.
Provider considerations: validation of performance (safety and efficacy data), explainable logic, among others.
Patient considerations: transparency about adverse effects and benefits, and logic that is explainable with minimal complexity, among others.
Equity: Transparency was promoted as an enabler of health equity, allowing stakeholders (patients and HCPs) to make informed decisions. The discussion around equity also raised the risk of bias when homogenous patient data are used to train algorithms, and a lack of diversity (gender, race, age, etc.) in those data. Access to trial participation for low-income and minority patient populations was highlighted as critical to future success and the elimination of bias.
Accountability: Concerns remain regarding access, accountability, and education on usability and logic of the device to ensure proper HCP-patient communication and engagement with AI/ML-enabled medical devices. Ongoing monitoring of performance post-launch is expected to become a more stringent requirement going forward.
All stakeholders involved in the process of striving to improve patient care via the development of AI/ML-enabled medical devices can benefit from reviewing the details of the FDA recording and presentations. Insights and findings were shared across a broad spectrum of initial development and application attempts, patient responses, perceptions, and other considerations. Select high-level industry recommendations stemming from the discussion can be summarized as follows:
Manufacturers will benefit from thoroughly dissecting, understanding and addressing the “transparency” needs of each stakeholder (FDA, payor, provider, patient), depending on the device, application and intended use case. Patients are increasingly engaging with their own healthcare in new ways; therefore, understanding and addressing patient concerns regarding the use of AI/ML-enabled medical devices will be essential to driving future adoption. Listening to the powerful voices of patients is critical for future innovation and adoption.
To access real-world evidence and data, and to develop an RWE-based strategy — including post-launch performance monitoring, development and application of AI/ML-driven data analysis, and patient trial execution — contact us at IQVIA MedTech.