
FDA seeks public comment on monitoring strategies for AI-enabled devices

DigiCure: Legal insights at the intersection of Technology and Life Sciences & Health Care

""
""

The U.S. Food and Drug Administration (FDA) recently announced a Request for Public Comment on the evaluation of devices that integrate artificial intelligence (AI), including generative AI (GenAI)-enabled technology. FDA's request emphasizes the need for robust, real-world monitoring to ensure safe and effective AI use in clinical settings. Specifically, FDA seeks information on current, practical approaches to measuring and evaluating the performance of AI-enabled medical devices in the field, including strategies for identifying and managing “data drift.” FDA invites comments through December 1.

On September 30, FDA’s Center for Devices and Radiological Health (CDRH) issued a request for public comment on “Measuring and Evaluating AI-enabled Medical Device Performance in the Real-World,” intended to obtain feedback on current, practical approaches to measuring and evaluating the performance of AI-enabled medical devices in the real world. This includes methods to detect, assess, and mitigate performance changes over time, ensuring ongoing safety and effectiveness throughout the device lifecycle. Importantly, FDA points out that AI system performance can be influenced by changes in clinical practice, patient demographics, data inputs, health care infrastructure, user behavior, workflow integration, or clinical guidelines.

FDA writes that these changes – called data drift, concept drift, or model drift – may call into question the performance, safety, and reliability of AI-enabled devices after deployment in real-world settings. FDA’s focus on data drift reflects growing recognition that AI models may degrade in performance as clinical practices, patient populations, or workflows evolve. Sponsors should anticipate the need for robust postmarket surveillance and adaptive quality systems capable of detecting and responding to drift events. This may require new infrastructure for ongoing data collection, model revalidation, and transparent reporting to FDA.
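To make the drift concept concrete, below is a minimal sketch, assuming a simplified monitoring pipeline, of how a sponsor’s quality system might flag input data drift by comparing a device input feature’s recent real-world distribution against its premarket validation baseline using a two-sample Kolmogorov–Smirnov test. The data, window sizes, and alert threshold are illustrative assumptions, not FDA-specified parameters.

  # Illustrative data drift check (Python). All data, names, and thresholds
  # here are simulated assumptions for demonstration, not FDA methodology.
  import numpy as np
  from scipy.stats import ks_2samp

  rng = np.random.default_rng(0)

  # Baseline: feature values captured during premarket validation (simulated).
  baseline = rng.normal(loc=100.0, scale=15.0, size=5000)

  # Recent production window: the same feature after the patient population
  # has shifted (simulated), e.g., following deployment at a new site.
  recent = rng.normal(loc=108.0, scale=18.0, size=1000)

  # Two-sample Kolmogorov-Smirnov test: a small p-value indicates the two
  # samples are unlikely to come from the same distribution.
  statistic, p_value = ks_2samp(baseline, recent)

  ALPHA = 0.01  # assumed alert threshold; a real plan would predefine this
  if p_value < ALPHA:
      print(f"Drift alert: KS={statistic:.3f}, p={p_value:.2e}; "
            "trigger reassessment per the postmarket monitoring plan.")
  else:
      print(f"No drift detected: KS={statistic:.3f}, p={p_value:.2e}")

In practice, a single statistical test like this would be only one signal among the predefined metrics, thresholds, and human review processes that FDA’s questions contemplate.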

The request for comment emphasizes FDA’s view that ongoing, systematic performance monitoring is necessary to maintain safe and effective AI use. As a result, FDA seeks information on methods that are:

  1. currently deployed at scale in real-world clinical environments,
  2. supported by real-world evidence, and
  3. applied in clinical (patient- or health care worker-facing) settings.

FDA has posed six sets of questions addressing how to perform ongoing, systematic performance monitoring of AI behavior in clinical settings, asking about:

  1. performance metrics,
  2. real-world evaluation methods,
  3. postmarket data,
  4. triggers for additional assessments,
  5. interactions between humans and AI, and
  6. other “best practices.”

These questions invite interested stakeholders to provide feedback on how sponsors apply best practices across the product lifecycle, including how sponsors detect and respond to performance degradation—such as data drift—and how human factors influence device reliability and safety. These areas reflect FDA’s evolving expectations for lifecycle oversight, transparency, and adaptive quality systems, especially in the context of generative AI technologies that may behave unpredictably in real-world use. Sponsors are encouraged to share concrete examples of monitoring strategies, thresholds for action, and mechanisms for integrating user feedback into postmarket surveillance. Specifically, FDA has invited feedback on how sponsors address the following issues, among others:

  • performance metrics used to define and track the safety, effectiveness, and reliability of AI-enabled medical devices in real-world clinical use,
  • tools, methodologies, or processes to monitor real-world deployment, and how automated monitoring is balanced against human review,
  • sources of data for assessing postmarket performance, and processes for addressing data quality, completeness, and interoperability,
  • triggers for reassessment and processes for responding to any identified performance degradation,
  • human-AI interactions, the impact of usage patterns on device performance, and maintenance of performance through user training, and
  • barriers to implementing best practices.

Further, while FDA’s device center has historically been cautious about accepting real-world evidence for regulatory decisions, the current request for comment reflects FDA’s interest in understanding how sponsors monitor AI-enabled devices in actual clinical environments. Specifically, FDA seeks insights into practical approaches for ongoing performance evaluation—such as integrating device data streams with electronic health records, clinical registries, or other sources—to inform best practices and future expectations. The agency is focused on learning from real-world implementation and monitoring strategies, particularly those supported by evidence, rather than formally accepting real-world evidence for regulatory submissions at this time. As sponsors develop and deploy these monitoring systems, challenges related to data privacy, interoperability, and data quality remain central considerations.

Practical guidance for sponsors in light of ongoing regulatory developments for AI-enabled devices

This request for comment recognizes the rapidly evolving area of AI tools in the health care and medical industries, and the agency’s need to incorporate knowledge gained from industry. FDA continues to update its guidance on software, cybersecurity, and AI to reflect the fundamental importance of these technologies, including continuing to emphasize good machine learning practices and implementation of a postmarket monitoring plan, as reflected in the latest version of FDA’s draft guidance on AI-enabled medical devices, which is summarized online here. This request for comment also follows last year’s first Digital Health Advisory Committee meeting, where FDA sought input on risk management and related postmarket performance monitoring of generative AI-enabled devices, potentially building on frameworks such as Predetermined Change Control Plans (PCCPs) to manage the unique risks of devices that can adapt and generate new outputs.

FDA’s initiative aligns with broader regulatory trends both domestically and abroad. The White House Office of Science and Technology Policy (OSTP) on September 26 published a request for public feedback on regulations that could be hindering the development and deployment of AI products, providing an October 27 comment deadline. OSTP Director Michael Kratsios wrote on X that the “RFI invites America’s innovators to identify where legal or operational requirements have fallen behind advances in the industry, what regulatory clarity would help them do their incredible work, and how outdated regulations stifle AI development, deployment, and adoption.”

Sponsors should expect FDA to require detailed postmarket monitoring plans for AI-enabled devices, including predefined metrics, thresholds for triggering reassessment, and mechanisms for reporting adverse events or performance anomalies. Early engagement with FDA on these plans—potentially via pre-submission meetings—can help clarify expectations and avoid delays.
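As a purely hypothetical illustration of what such a plan might capture in machine-readable form, the following Python sketch encodes predefined metrics, reassessment thresholds, and reporting channels as a simple structure. None of the field names, metrics, or values come from FDA; a real plan would be shaped by the device’s risk profile and any applicable PCCP.

  # Hypothetical postmarket monitoring plan structure (Python). Field names,
  # metrics, and thresholds are illustrative assumptions, not FDA terminology.
  from dataclasses import dataclass, field

  @dataclass
  class MetricSpec:
      name: str               # predefined performance metric
      alert_threshold: float  # value at which reassessment is triggered
      window_days: int        # evaluation window for real-world data

  @dataclass
  class MonitoringPlan:
      device: str
      metrics: list[MetricSpec] = field(default_factory=list)
      reassessment_actions: list[str] = field(default_factory=list)
      reporting_channels: list[str] = field(default_factory=list)

  plan = MonitoringPlan(
      device="ExampleCAD-1 (hypothetical)",
      metrics=[
          MetricSpec("sensitivity", alert_threshold=0.90, window_days=30),
          MetricSpec("input_drift_ks_pvalue", alert_threshold=0.01, window_days=7),
      ],
      reassessment_actions=["freeze model version", "notify quality team",
                            "run revalidation protocol"],
      reporting_channels=["internal CAPA system", "FDA adverse event reporting"],
  )

  for m in plan.metrics:
      print(f"{plan.device}: monitor {m.name} over {m.window_days} days, "
            f"alert at {m.alert_threshold}")

Encoding the plan this way makes triggers auditable and testable, which dovetails with FDA’s interest in predefined thresholds and transparent reporting.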

FDA has invited comments on its GenAI docket through December 1, 2025. When submitting comments, sponsors should provide concrete examples of monitoring strategies, describe any challenges encountered in real-world deployment, and suggest regulatory flexibilities that could facilitate innovation while maintaining safety. FDA has also announced that on November 6, it will hold its second Digital Health Advisory Committee meeting, this time focusing on generative AI in mental health. The Committee is being convened to discuss the benefits, risks to health, and risk mitigations that might be considered for digital mental health devices, including premarket evidence and postmarket monitoring considerations. FDA has invited comments on this meeting, with a deadline of December 8; however, only comments received before October 17 will be provided to the Committee for consideration during the November 6 meeting.

If you are interested in submitting feedback or commenting on FDA’s Digital Health Advisory Committee meeting, or if you have any questions about AI-enabled medical devices, please feel free to reach out to the authors of this alert or the Hogan Lovells attorney with whom you regularly work.

Authored by Jodi Scott, Lina Kontos, and Kelliann Payne

This article is the fourth in our series, “DigiCure: Legal insights at the intersection of Technology and Life Sciences & Health Care,” which aims to help you stay informed about the broad array of legal and regulatory issues affecting companies operating at the intersection of the technology and life sciences & health care sectors. From using AI in clinical studies, to evolving patient data concerns, to the entire digital health product lifecycle, our team will discuss novel issues arising in all parts of the world, including unique deal-making, litigation, and compliance concerns. Ensure you are subscribed to Our Thinking to receive these new insights!
