
Avoid the “false claims” of AI hallucinations when litigating FCA matters

""
""

Artificial Intelligence (AI) is touted as a revolutionary tool that has become increasingly popular in workplaces across a variety of sectors, including the legal field. A March 2025 survey by Law360 found that more than half of attorneys at law firms in the United States use AI for at least some purpose, and an American Bar Association survey found that 30.2 percent of attorneys' offices offered AI tools as of late 2024. Indeed, several law firms have bet on AI by investing significant amounts to develop proprietary AI tools for their legal workforces.

While commentators continue to predict that AI will fundamentally alter the legal profession, and in many cases greatly reduce or eliminate the need for lawyers, real-world examples continue to demonstrate that AI is far from perfect. AI outputs can be prone to “hallucinations,” which are assertions that sound confident and authoritative but are actually false, fictitious, or misleading. In the legal sphere, such hallucinations may manifest as made-up caselaw, fake statutes, or incorrect legal assertions.

Documented instances of AI hallucinations in court submissions are on the rise. As of September 19, 2025, one researcher has identified 244 court opinions noting fabricated or incorrect material submitted due to AI, and one study identified 22 cases between June 30 and August 1, 2025, alone in which fictitious caselaw was identified in court filings.

False Claims Act (FCA) litigation – cases in which either the government or an individual suing in the government's name (a “relator”) seeks civil penalties and forfeitures from defendants who defrauded the government1 – is no exception. In one recent FCA matter, Smith v. Athena Construction Group, Inc., an attorney withdrew from representing a relator after submitting an opposition brief that defendants alleged contained non-existent caselaw, fake quotations, and incorrect legal assertions. The withdrawal motion explained that the attorney had used Grammarly and Lexis+, and that co-counsel, who had filed on the attorney's behalf, had not cite-checked the brief because there was “no reason to know or suspect that the case citations were inaccurate or false.”

Likewise, in another recent FCA case, Khourey v. Intermountain Health Care Inc., defendants accused the relators' expert witness of submitting reports and other documents with fake citations and information. The expert witness admitted during a deposition to using ChatGPT to assist with his research, and the relators ultimately withdrew the expert report and testimony in question.2

These two recent qui tam actions highlight the potential pitfalls of AI usage in FCA litigation – both by attorneys and by experts retained for a matter. These cases also demonstrate that parties litigating FCA matters must keep an eye out for hallucinations in their own filings, as well as in those submitted to the court by other parties.

Attorneys should be sure to check all citations and assertions in any filing before it goes out the door, regardless of whether they have reason to believe a colleague relied on AI in creating any part of the material. A failure to do so could violate Federal Rule of Civil Procedure 11 or the Rules of Professional Conduct, as outlined in Formal Opinion 512 issued by the American Bar Association in June 2024,3 and could result in sanctions from the court or other discipline. Attorneys should also closely examine opposing parties' briefs to determine whether any cited authorities or purported legal conclusions do not exist, and if so, bring the issue to the court's attention.

FCA litigators should additionally check whether the court in which they are litigating has any standing orders or local rules regarding AI usage, as federal judges are increasingly adopting such orders to combat issues with AI hallucinations.

Similarly, to avoid submitting expert reports that may inadvertently rely on false information, FCA litigants should consider including clauses in expert engagement letters that require the expert to disclose any use of AI.

In addition to hallucination risks, experts' use of AI raises credibility and reliability concerns when it is unclear how an AI tool operates or whether it can yield consistent, replicable results. Accordingly, if an expert does use AI, he or she must be able to understand and explain how the tool generated its result, ensure that the results can be recreated, and confirm the findings to verify that no fake or misleading sources of information were relied upon or cited. Transparency and verification are of paramount importance.

AI is not going anywhere anytime soon. It has the potential to be a useful tool in the FCA litigation space, but litigants must be on the lookout for these potential pitfalls to avoid inadvertently making legal “false claims” of their own.


Authored by Allison Caplis, Rachel Stuckey, and Paget Barranco.

References

1 Charles Doyle, Cong. Rsch. Serv., R40785, Qui Tam: The False Claims Act and Related Federal Statutes 1 (2021).

2 See Docket Text Order, Khourey v. Intermountain Health Care Inc., No. 2:20-00372-TC-CMR (D. Utah Aug. 7, 2025), ECF No. 287 (denying Defendants' Motion to Exclude Expert Testimony and Report of Thomas J. Dawson III as moot because Plaintiffs withdrew the expert report and testimony).

3 ABA Comm. on Ethics & Pro. Resp., Formal Op. 512 (2024) (discussing ethical considerations for lawyers when using generative artificial intelligence tools). The ABA explained that generative AI tools may implicate various duties under the ABA Model Rules of Professional Conduct, including competence, confidentiality, communication with clients, maintaining candor toward tribunals, supervisory responsibilities, and charging reasonable fees.
