News

FDA’s AI/ML-Based Software as a Medical Device Action Plan

With this article, we offer a summary of the AI/ML-Based Software as a Medical Device Action Plan recently published by the Food and Drug Administration (FDA, or the Agency).

Introduction

Artificial intelligence (AI) and machine learning (ML) technologies have the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day. One of the greatest benefits of AI/ML in software is its ability to learn from real-world use and experience and to improve its performance over time.
FDA’s vision is that, with appropriately tailored total product lifecycle-based regulatory oversight, AI/ML-based Software as a Medical Device (SaMD) will deliver safe and effective software functionality that improves the quality of care that patients receive.
In April of 2019, FDA published the “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback.” This paper described the FDA’s foundation for a potential approach to premarket review of artificial intelligence and machine learning-driven software modifications: a framework that would enable FDA to provide a reasonable assurance of safety and effectiveness while embracing the iterative improvement power of AI/ML-based software as a medical device.
In response to stakeholder feedback on the discussion paper, and in light of the public health need to facilitate innovation through AI/ML-based medical software while providing appropriate oversight for it, the Agency recently published a five-part Action Plan.

1. Update the framework for AI/ML-based SaMD

The discussion paper proposed a framework for modifications to AI/ML-based SaMD that relies on the principle of a “Predetermined Change Control Plan”. The SaMD Pre-Specifications (SPS) describe "what" aspects the manufacturer intends to change through learning, and the Algorithm Change Protocol (ACP) explains "how" the algorithm will learn and change while remaining safe and effective.
Based on the strong community interest in the Predetermined Change Control Plan, the Agency intends to issue draft guidance for public comment in this area in 2021. This draft guidance will include a proposal of what should be included in an SPS and ACP to support the safety and effectiveness of AI/ML SaMD algorithms. The Agency will leverage docket input received on the AI/ML-based SaMD discussion paper as well as recent submission experience.
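As a purely illustrative sketch, the SPS/ACP pair can be thought of as structured documentation that travels with the device: the SPS records what is expected to change, and the ACP records how those changes will be made and verified. The Python dataclasses below are an assumption on our part, not an FDA-defined schema; they simply show the kind of "what" and "how" information a Predetermined Change Control Plan might capture.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: a simplified, hypothetical representation of a
# Predetermined Change Control Plan. Field names and structure are our own
# assumptions, not an FDA-defined schema.

@dataclass
class SaMDPreSpecifications:
    """SPS: 'what' the manufacturer intends to change through learning."""
    performance_changes: List[str] = field(default_factory=list)   # e.g., improved sensitivity
    input_changes: List[str] = field(default_factory=list)         # e.g., new data sources
    intended_use_changes: List[str] = field(default_factory=list)  # e.g., expanded population

@dataclass
class AlgorithmChangeProtocol:
    """ACP: 'how' the algorithm will learn and change while remaining safe and effective."""
    data_management: str = ""        # how new training data are collected and curated
    retraining_procedure: str = ""   # how and when the model is retrained
    performance_evaluation: str = "" # acceptance criteria an update must meet before release
    update_procedure: str = ""       # how the update is deployed and communicated to users

@dataclass
class PredeterminedChangeControlPlan:
    sps: SaMDPreSpecifications
    acp: AlgorithmChangeProtocol

plan = PredeterminedChangeControlPlan(
    sps=SaMDPreSpecifications(
        performance_changes=["Improve sensitivity within the current intended use"],
    ),
    acp=AlgorithmChangeProtocol(
        data_management="Curated, consented clinical data with defined inclusion criteria",
        retraining_procedure="Quarterly retraining on a locked, validated pipeline",
        performance_evaluation="Sensitivity and specificity must meet pre-specified thresholds on a held-out test set",
        update_procedure="Versioned release with user notification and updated labeling",
    ),
)
print(plan.sps.performance_changes[0])
```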

2. Encourage harmonization of Good Machine Learning Practice development

The discussion paper used the term Good Machine Learning Practice (GMLP) to describe a set of AI/ML best practices (e.g., data management, feature extraction, training, interpretability, evaluation, and documentation) that are important to develop and adopt, not only for guiding industry and product development but also for facilitating oversight of these complex products.
Given the need for GMLP, the Agency has been an active participant in numerous efforts related to GMLP development, including standardization efforts and collaborative communities.
As part of the Action Plan, FDA is committing to deepening these efforts, which will be pursued in close collaboration with the Agency’s Medical Device Cybersecurity Program, in keeping with FDA’s longstanding commitment to a robust approach to cybersecurity for medical devices.
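To make the GMLP practice areas listed above more concrete, the sketch below shows one way a manufacturer might document a single training run against them. The record format and field names are our own illustrative assumptions, not part of any standard or FDA requirement.

```python
import json
from datetime import datetime, timezone

# Illustrative only: a minimal record of a single training run organized around the
# GMLP practice areas named above. The format and field names are assumptions on
# our part, not a standardized or FDA-required structure.

def document_training_run(dataset_version, feature_set, hyperparameters, metrics):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_management": {"dataset_version": dataset_version},
        "feature_extraction": {"feature_set": feature_set},
        "training": {"hyperparameters": hyperparameters},
        "evaluation": {"metrics": metrics},
    }
    return json.dumps(record, indent=2)

print(document_training_run(
    dataset_version="clinical-imaging-v3.2",
    feature_set=["lesion_area_mm2", "mean_intensity"],
    hyperparameters={"learning_rate": 1e-3, "epochs": 20},
    metrics={"sensitivity": 0.91, "specificity": 0.88},
))
```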

3. Hold a public workshop on how device labeling supports transparency to users and enhances trust in AI/ML-based devices

The Agency acknowledges that AI/ML-based devices have unique considerations that necessitate a proactive patient-centered approach to their development and utilization that takes into account issues including usability, equity, trust, and accountability. Promoting transparency is a key aspect of a patient-centered approach, especially for AI/ML-based medical devices, which may learn and change over time, and which may incorporate algorithms exhibiting a degree of opacity.
Numerous stakeholders have highlighted the unique challenges of labeling for AI/ML-based devices and the need for manufacturers to clearly describe the data that were used to train the algorithm, the relevance of its inputs, the logic it employs (when possible), the role intended to be served by its output, and the evidence of the device’s performance.
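As a hypothetical illustration, those labeling elements could be expressed as a simple "model card"-style record, as sketched below. The device, field names, and figures are invented for illustration only and do not come from the Action Plan.

```python
# Hypothetical example: the labeling elements listed above expressed as a simple
# "model card"-style record. The device, field names, and figures are invented
# for illustration and do not come from the Action Plan.
device_labeling = {
    "training_data": "De-identified chest X-rays from 12 hospitals, collected 2015-2019",
    "inputs": "Frontal chest X-ray (DICOM); automated quality checks applied before inference",
    "logic": "Convolutional neural network producing a suspicion score for pneumothorax",
    "role_of_output": "Triage aid for the reading radiologist; not a standalone diagnosis",
    "performance_evidence": "Sensitivity 0.93 / specificity 0.89 on an independent multi-site test set",
}

for element, description in device_labeling.items():
    print(f"{element}: {description}")
```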
Building upon the October 2020 Patient Engagement Advisory Committee (PEAC) Meeting focused on patient trust in AI/ML technologies, the Agency will be holding a public workshop on medical device labeling to support the transparency of and trust in AI/ML-based technologies.

4. Develop methods to evaluate and address algorithmic bias and to promote algorithm robustness

Because AI/ML systems are developed and trained using data from historical datasets, they are vulnerable to bias. Health care delivery is known to vary by factors such as race, ethnicity, and socio-economic status; therefore, it is possible that biases present in our health care system may be inadvertently introduced into the algorithms. The Agency recognizes the crucial importance of medical devices being well suited for a racially and ethnically diverse intended patient population, and the need for improved methodologies for the evaluation and improvement of machine learning algorithms. This includes methods for the identification and elimination of bias, and for ensuring the robustness and resilience of these algorithms to withstand changing clinical inputs and conditions.
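As a minimal sketch of what bias-identification methods might look like in practice, the snippet below compares a model's sensitivity (true positive rate) across demographic subgroups on synthetic data. The subgroups, data, and the simple side-by-side comparison are illustrative assumptions, not an FDA-endorsed methodology.

```python
from collections import defaultdict

# Purely illustrative sketch: comparing a model's sensitivity (true positive rate)
# across demographic subgroups to surface potential bias. Subgroups and data are
# synthetic; this is not an FDA-endorsed methodology.

def sensitivity_by_subgroup(records):
    """records: iterable of (subgroup, true_label, predicted_label) with labels in {0, 1}."""
    tp = defaultdict(int)  # true positives per subgroup
    fn = defaultdict(int)  # false negatives per subgroup
    for subgroup, y_true, y_pred in records:
        if y_true == 1:
            if y_pred == 1:
                tp[subgroup] += 1
            else:
                fn[subgroup] += 1
    return {g: round(tp[g] / (tp[g] + fn[g]), 2)
            for g in tp.keys() | fn.keys() if tp[g] + fn[g] > 0}

synthetic_records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]
# A large gap between subgroups (here 0.67 vs. 0.33) would prompt further investigation.
print(sensitivity_by_subgroup(synthetic_records))
```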
To develop these methods to evaluate AI/ML-based medical software, FDA will continue to support and expand its regulatory science research efforts, currently conducted through collaborations with leading researchers at the Centers of Excellence in Regulatory Science and Innovation (CERSIs) at the University of California San Francisco (UCSF), Stanford University, and Johns Hopkins University.

5. Work with stakeholders who are piloting the Real-World Performance (RWP) process for AI/ML-based SaMD

Gathering performance data on the real-world use of the SaMD may allow manufacturers to understand how their products are being used, identify opportunities for improvements, and respond proactively to safety or usability concerns. Real-world data collection and monitoring is an important mechanism that manufacturers can leverage to mitigate the risk involved with AI/ML-based SaMD modifications, in support of the benefit-risk profile in the assessment of a particular marketing submission.
To help FDA develop a framework for the seamless gathering and validation of relevant RWP parameters and metrics for AI/ML-based SaMD in the real world, the Agency will support the piloting of real-world performance monitoring by working with stakeholders on a voluntary basis.
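As a rough, purely illustrative sketch of real-world performance monitoring, the snippet below tracks a rolling agreement rate between device output and a reference result (for example, a clinician read) and flags when it falls below a pre-specified threshold. The class name, window size, and threshold are assumptions for illustration; the Action Plan does not prescribe any particular implementation.

```python
from collections import deque

# Illustrative sketch only: a rolling real-world performance monitor that flags when
# agreement between the SaMD output and a reference result (e.g., a clinician read)
# drops below a pre-specified threshold. The class, window size, and threshold are
# assumptions for illustration; the Action Plan does not prescribe an implementation.

class RealWorldPerformanceMonitor:
    def __init__(self, window_size=500, agreement_threshold=0.85):
        self.outcomes = deque(maxlen=window_size)  # 1 = device output agreed with reference
        self.agreement_threshold = agreement_threshold

    def record(self, device_output, reference_result):
        self.outcomes.append(1 if device_output == reference_result else 0)

    def agreement_rate(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else None

    def needs_review(self):
        rate = self.agreement_rate()
        return rate is not None and rate < self.agreement_threshold

monitor = RealWorldPerformanceMonitor(window_size=4, agreement_threshold=0.75)
for device_output, reference in [(1, 1), (0, 0), (1, 0), (1, 1)]:
    monitor.record(device_output, reference)
print(monitor.agreement_rate(), monitor.needs_review())  # 0.75 False
```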