News

AI Summit panelists weigh emerging data & privacy risks for novel technology

Pathology slide review: Biomedical analyst in a detailed image, identifying disease patterns.

Hogan Lovells and the AI Health Care Coalition recently hosted their fourth annual AI Health Law & Policy Summit, where thought leaders and policymakers gathered to discuss a variety of topics including data and privacy concerns unique to AI, reimbursement issues for AI-enabled software, evolving regulatory frameworks, proposed legislation, global developments, and more. In the second panel discussion at the Summit, Melissa Bianchi, partner in the Hogan Lovells health regulatory practice, met with leaders from Novartis, the College of Healthcare Information Management Executives (CHIME), and Value Analytics Labs to discuss the importance of robust data governance frameworks to ensure patient privacy and data security across various health care settings. Their conversation is summarized below.

AI Summit panelists

Artificial intelligence has a wide range of use cases in the health care context, and Melissa Bianchi, partner in the Hogan Lovells health regulatory practice, kicked off a panel conversation on AI-specific data and privacy concerns by outlining how the emerging technology can be used to accelerate research, generate draft regulatory submissions and post-market safety reports, and support logistics and manufacturing, among other settings. Each environment brings with it unique concerns related to confidentiality, security, data integrity, regulatory compliance, bias, and consent, she explained.

Chelsea Arnone, director of federal affairs for CHIME, discussed her work representing hospital systems and other organizations on AI and cyber-related issues. Clinical Decision Support (CDS) tools have been around for decades, Arnone pointed out, but new use cases are emerging. For example, novel “ambient scribe” technology assists health care professionals by capturing and transcribing patient interactions in real time. However, the training sets built from those AI recordings come with unique data concerns, Arnone warned.

Shifting the conversation to the research setting, Rachael Fleurence, former senior advisor at the National Institutes of Health and now at Value Analytics Labs, outlined applications of generative AI for regulatory and reimbursement submissions, including systematic literature reviews, economic model development and validation, real-world evidence analysis, dossier curation, and beyond. Fleurence emphasized the transformative nature of AI-enabled products in some areas of basic science, citing, for example, how new AI technology can predict protein folding in a matter of minutes.

“Some areas of science might really be transformed in amazing and disruptive ways,” Fleurence said, particularly with respect to writing tasks, at which large language models excel. This technological revolution may have a “democratizing” effect on science and research, Fleurence predicted, because scientists and researchers can now accomplish more than was ever possible without AI-enabled technology. Yet, she warned, accuracy and validity issues remain, and maintaining data confidentiality is a unique challenge for the emerging technology. “We need better evaluation frameworks for outputs using generative AI,” Fleurence urged.

Ashley Bashore, head of Data Privacy, Digital, & AI for the Americas at Novartis, similarly emphasized the importance of risk management in the use of AI, noting that new datasets present privacy risks. AI's ability to synthesize information from publications is helpful, Bashore said, but it creates challenges in managing the large troves of data that result.

Regarding intellectual property and copyright issues, Fleurence highlighted the grey area around data use, while noting that researchers must be careful to avoid uploading sensitive information into proprietary large language models. Fleurence also addressed the complexities of bias mitigation in AI, noting that it remains an area of active research because of the many ways bias can arise, from the data used to train models to how they are applied in real-world settings.

In the transactional setting, Arnone discussed how third-party vendors will often refuse to accept contract risks, potentially resulting in health care providers choosing not to engage with their technology. In addition, Bianchi noted, if data cannot be de-identified, providers may elect not to utilize those databases. The panelists further discussed how state laws and human subject protection rules come into play in this setting. Asked how AI eases identification, Bianchi noted that it is becoming easier to “re-identify” data, and she predicted additional confidentiality restrictions and state-level efforts to promote privacy.

Bashore similarly cited the challenge global companies face in trying to de-identify data so they can unlock additional value from it and use it more freely. Toward the goal of data de-identification, Bashore promoted the principle of data minimization, explaining how algorithms can be trained on dummy data to ascertain whether a use case is viable in the most minimal context. For this reason, Fleurence said she sees promise in the field of synthetic data.

Asked about data concerns relating to the risk of regulatory change, Arnone expressed optimism over the Trump administration’s pro-business outlook, while also pointing out that a complex patchwork of state regulations creates compliance risks regardless of federal action. Fleurence previewed the forthcoming federal AI guidance, describing it as “detailed” and “fairly uncontroversial.” However, she also expressed concern that federal reductions in force may leave the U.S. government without the ability to enforce its principles.

Zooming out to consider the global perspective, Bashore emphasized the importance of AI governance principles that start at a sufficiently high level for agreement to be reached across borders. To promote regulatory uniformity in the U.S., Arnone weighed the potential benefits of a national privacy law, while expressing uncertainty over whether such legislation could be enacted in the near future.
