On November 6, 2025, the U.S. Food and Drug Administration's (FDA) Digital Health Advisory Committee (DHAC) convened for the second time since its inception to explore the regulatory pathways, opportunities, and concerns related to generative AI (GenAI) in digital mental health medical devices. We analyzed the DHAC's first meeting here. While these technologies, ranging from GenAI-based therapy chatbots to diagnostic tools, are emerging as potential solutions to the mental health access crisis, they also raise complex questions about safety, effectiveness, and ethics.
Individuals who are new to the FDA-regulated space are often surprised to learn that FDA has yet to authorize a GenAI-based device, for any clinical purpose. In today's fast-paced world, where stress is ever-present and 61.5 million U.S. adults and one in five children ages 3-17 have been diagnosed with a mental health condition, demand for mental health care keeps growing, yet nearly 50% of people who need such support do not receive it. Against this backdrop, innovators are striving to address the need using GenAI, and many individuals report turning to large language models (LLMs) for mental health support, yet questions still abound as to whether, and how, FDA should regulate such products.
The agency emphasized that GenAI could help bridge gaps in care, offering earlier and broader access to therapy as well as tools such as voice or facial analytics, particularly in rural or underserved communities. These tools may also more effectively reach patients who resist traditional therapy. At the same time, FDA cautioned—consistent with its overarching approach to GenAI—that innovation must be balanced with robust risk management throughout the product lifecycle. The agency stressed that the benefits offered by GenAI in mental health come with significant risks, many of which are unique to the mental health space: missed crisis cues, inadequate response (e.g., failure to detect suicidal or homicidal ideation), misidentification of medical symptoms (e.g., where underlying physical conditions mimic psychiatric symptoms), inadequate or incorrect therapeutic content (e.g., due to AI hallucinations or confabulations), and the potential for patients to form unhealthy parasocial relationships with AI systems.
As an example, the DHAC discussed a hypothetical prescription GenAI-based therapy for major depressive disorder (MDD) that simulates therapist-like conversation. Potential benefits could include always-on support, triage and monitoring in between visits, and tailored interventions. Risks may include failure to recognize medical conditions that mimic psychiatric symptoms, over-reliance on machine guidance, and deterioration of symptoms without appropriate escalation.
FDA emphasized that for GenAI-enabled devices in particular, the level of applicable regulatory oversight will be determined based on intended use, autonomy level, and risk profile. FDA and stakeholders highlighted that benefits—and risks—of GenAI-enabled mental health devices depend on factors such as whether the software is used adjunctively or as a standalone therapy, and whether it operates with or without clinician oversight. FDA noted that most currently authorized mental health products are cleared as adjuncts to clinician-supervised care, and stressed that a device's availability by prescription versus over-the-counter (OTC) will influence the risk/benefit profile and, thus, affect the regulatory requirements. Ultimately, low-risk GenAI-based tools intended for coaching or general wellness purposes may fall under enforcement discretion (although specific claims and statements make a difference in how FDA and the public view a product), while fully autonomous therapy or diagnostic systems fall squarely within FDA's regulatory scope.
To address the risks associated with GenAI, FDA emphasized a number of safety-by-design principles, including risk management strategies aligned with ISO 14971, and recommended patient-centered design, informed consent, and transparent labeling that clearly states intended use, limitations, model role, data practices, and update policies (which could take the form of an "AI facts" label).
FDA also highlighted the need for predetermined change control plans (PCCPs) for adaptive GenAI features. In line with its August 2025 guidance document, Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions (the "PCCP Guidance"), the agency reiterated the importance of specifically defining the ranges within which model parameters may shift without filing a new premarket submission, providing appropriate user notifications, and conducting postmarket monitoring of performance stability across institutions and populations, including drift/bias detection and rollback criteria.
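To make the PCCP concept concrete, the following is a minimal, purely illustrative sketch of how a developer might encode a pre-specified "change envelope" and check a model update against it before deployment. The metric names and thresholds are hypothetical assumptions for illustration, not values drawn from the PCCP Guidance or any authorization.

```python
from dataclasses import dataclass

# Hypothetical PCCP "change envelope": pre-specified acceptance criteria an
# updated model must meet to be deployed without a new premarket submission.
@dataclass
class ChangeEnvelope:
    min_sensitivity: float = 0.85          # illustrative floor, not an FDA-set value
    min_specificity: float = 0.80
    max_false_escalation_rate: float = 0.10

def within_envelope(metrics: dict, envelope: ChangeEnvelope) -> bool:
    """Return True only if every pre-specified criterion is satisfied."""
    return (
        metrics["sensitivity"] >= envelope.min_sensitivity
        and metrics["specificity"] >= envelope.min_specificity
        and metrics["false_escalation_rate"] <= envelope.max_false_escalation_rate
    )

# An update whose validation metrics fall outside the envelope would trigger
# rollback to the last authorized model version and user notification per the plan.
update_metrics = {"sensitivity": 0.82, "specificity": 0.91, "false_escalation_rate": 0.06}
if not within_envelope(update_metrics, ChangeEnvelope()):
    print("Update outside PCCP envelope: roll back and notify users")
```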
The DHAC noted that while postmarket performance monitoring is expected of all medical device manufacturers, the high sensitivity of AI-enabled devices to changes in input data creates a unique vulnerability that can lead to degraded performance and patient risk over time. To address this, the agency encouraged developers to include postmarket performance monitoring plans in their marketing submissions. Of note, regulators want assurance that performance—not just during premarket validation but also in the real world—does not amplify disparities.
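Again purely for illustration, a postmarket monitoring routine along the lines discussed might compare real-world performance by site or subgroup against the premarket baseline and flag both drift and disparities. The metric, thresholds, and subgroup labels below are assumptions made for this sketch only.

```python
# Hypothetical postmarket monitoring check: compare real-world performance by
# site or subgroup against the premarket baseline and flag drift or disparities.
PREMARKET_BASELINE = 0.86      # e.g., response-appropriateness rate from validation
DRIFT_TOLERANCE = 0.05         # allowed absolute drop before investigation
DISPARITY_TOLERANCE = 0.08     # allowed gap between best- and worst-performing group

def flag_issues(subgroup_rates: dict) -> list:
    issues = []
    for group, rate in subgroup_rates.items():
        if PREMARKET_BASELINE - rate > DRIFT_TOLERANCE:
            issues.append(f"performance drift in {group}: {rate:.2f}")
    if max(subgroup_rates.values()) - min(subgroup_rates.values()) > DISPARITY_TOLERANCE:
        issues.append("inter-group disparity exceeds tolerance")
    return issues

# Example run with made-up monitoring data
print(flag_issues({"site_A": 0.85, "site_B": 0.78, "rural_patients": 0.76}))
```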
Finally, FDA noted its recent effort to solicit public comment on real-world performance evaluation for AI-enabled devices (Docket FDA-2025-N-4203), covering metrics, monitoring triggers, and human-AI interaction best practices. That docket closed on December 1, 2025, and submitted comments are now being reviewed by the agency to inform policy development or future proposed regulation in this area.
FDA's Office of Health Technology 5 (OHT5) emphasized that, given the limited number of authorized digital mental health diagnostic devices, new GenAI-based diagnostic devices will likely represent new intended uses and technological characteristics that pose new or different risks. Therefore, such novel devices may require De Novo classification and special controls to provide reasonable assurance of safety and effectiveness. These controls are likely to largely align with those already established under existing regulations for digital mental health devices and to include device-specific software validation, clinical performance testing prior to authorization, and postmarket performance monitoring plans, as well as risk management plans to ensure safety and effectiveness are maintained in real-world use of the application. In addition, FDA will expect product labeling to clearly communicate the benefits, risks, and appropriate use (including limitations, clinician involvement, and patient context).
OHT5 also described the validation considerations the agency is likely to apply, which must be tailored to the particular device function under review. And, as FDA emphasized in the PCCP Guidance, transparency is key. In particular, data quality, integrity, and provenance should be documented; and developers should disclose how data are used, how models were trained, and how outputs are intended to inform care. Clinical utility is also an increasing point of focus, with FDA wanting to ensure that the outputs of software-based devices are actually meaningful to users, especially for novel De Novo devices.
The DHAC and FDA discussed how traditional study designs upon which FDA frequently relies to demonstrate safety and effectiveness (e.g., randomized, double-blind, placebo-controlled studies) may need tailoring and additional consideration to accommodate GenAI-enabled products. FDA sought stakeholder input on clinical evidence considerations for GenAI-enabled mental health devices such as chatbots, including the selection of an appropriate control arm, endpoint selection, how to execute blinding when conversational style is itself part of the intervention, and follow-up duration.
Patricia Arean, Ph.D., an invited expert and former Division Director for the National Institute of Mental Health Division of Services and Interventions Research, noted that the appropriate trial design may well depend on whether the device substitutes for therapy or augments care. She further emphasized that clinical study outcomes should address safety (e.g., number needed to harm, conversational error rates), efficacy, fidelity to therapeutic content (with clinician transcript review), applicability across populations, and patient-centeredness, including empathy and engagement.
The November 6, 2025, Advisory Committee meeting signals a recognition by the agency that GenAI—notwithstanding its complexity—must be proactively addressed by FDA and relevant stakeholders to facilitate innovation while maintaining the primacy of patient safety. The agency's messaging at the meeting is in line with other indicators of its stance and concerns, suggesting that developers should expect heightened scrutiny of clinical evidence, risk controls, and postmarket monitoring, alongside a continued focus on, and hopefully more granular guidance regarding, PCCPs and transparency standards, informed by stakeholder engagement.
For sponsors, the practical takeaway is clear: define intended use and autonomy levels early, embed safety-by-design principles, and prepare for lifecycle obligations that extend well beyond market entry. Those who invest now in robust and use-case-appropriate evidence generation, postmarket monitoring, and transparent communication will be best positioned to navigate FDA's evolving framework. Still, with FDA learning about GenAI alongside everyone else, longer review timelines and higher evidence thresholds should be expected—particularly as most such models (at least initially) will require authorization through the De Novo pathway. It also remains important for companies that believe they are justified in marketing without FDA authorization to document their rationale in a robust regulatory analysis ("Memorandum to File"), which can help reduce enforcement severity should FDA ask questions.
With GenAI-based products for mental health in particular, the agency is confronting a paradox: Most currently authorized digital therapeutics are indicated for adjunctive—rather than standalone—use, which may pose lower risks to patients; however, adjunctive applications may have a limited capacity to ameliorate the identified mental health support gap. Conversely, the mental health chatbots that could most directly mitigate the shortage of accessible mental health treatment (i.e., those intended for standalone use) are likely to be considered higher risk from a regulatory and ethical perspective.
Given the uncertainty inherent in AI-enabled outputs, FDA feels most comfortable when the product keeps a “human in the loop” at least to some extent. The agency continues to look for appropriate guardrails to be programmed into AI models, such as “red-teaming” to place proper bounds on what the model can output and escalation pathways for potential safety concerns. It remains to be seen how FDA will regulate such “crisis plan” features, particularly where they might suggest an intended use (e.g., detecting suicidality) for a product that otherwise is not meant to fall under the statutory definition of a medical device (e.g., provides general mental/emotional health coaching without a specific disease focus or claim).
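By way of illustration only, a guardrail with an escalation pathway of the kind discussed might be structured as sketched below. The keyword list and risk_score function are hypothetical placeholders standing in for whatever validated crisis-detection component a developer would actually deploy and defend in a submission.

```python
# Illustrative guardrail with an escalation pathway: a separate safety check
# screens each user message for crisis signals and, if triggered, suppresses
# the generative reply in favor of a handoff to human or crisis resources.
CRISIS_PATTERNS = ("hurt myself", "end my life", "suicide")

def risk_score(user_message: str) -> float:
    """Placeholder for a validated crisis-detection component."""
    return 1.0 if any(p in user_message.lower() for p in CRISIS_PATTERNS) else 0.0

def respond(user_message: str, model_reply: str) -> str:
    if risk_score(user_message) >= 0.5:
        # Escalation pathway: do not show the generated reply; hand off instead.
        return "I'm concerned about your safety. Let me connect you with a crisis counselor."
    return model_reply

print(respond("I want to end my life", "<generated reply>"))
```

As the paragraph above notes, how such a feature is described and marketed matters: a screen that purports to detect suicidality may itself evidence a device intended use even where the surrounding product is positioned as general wellness coaching.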
Hogan Lovells has been assisting clients in navigating the FDA regulatory process for AI-enabled devices throughout the evolution in digital health over the last decade. If you have questions or would like us to help you evaluate an issue related to AI or GenAI in the context of medical device regulation, please contact one of the authors of this alert or the Hogan Lovells attorney with whom you normally work.
Authored by Kelliann Payne, Suzanne Levy Friedman, Gregory Kass, and Evelyn Tsisin
This article is the 13th in our new thought leadership series, “DigiCure: Legal insights at the intersection of Technology and Life Sciences and Health Care,” which aims to help you stay informed about the broad array of legal and regulatory issues affecting companies operating at the intersection of the technology and life sciences & health care sectors. From using AI in clinical studies, to evolving patient data concerns, to the entire digital health product lifecycle, our team will discuss novel issues arising in all parts of the world, including unique deal-making, litigation, and compliance concerns. Ensure you are subscribed to Our Thinking to receive these new insights!