
Judicial AI guidance updated – Caution still prevails


The Judiciary has updated its guidance on the use of artificial intelligence (AI) by judges and tribunal members, marking another step in the courts' cautious engagement with emerging technology. Issued on 31 October 2025, the revised guidance refines earlier advice on AI use and introduces fresh warnings around confidentiality, misinformation and “white text” manipulation. The accompanying statement from Birss LJ reinforces that any use of AI by the judiciary must remain consistent with the integrity of the justice system and the personal responsibility judicial office holders have for material produced in their name.

The October 2025 update builds on the previous version issued in April 2025 and introduces several changes:

  • “White text” risks: The guidance now includes a warning regarding “white text”, which is formatted to be invisible to human readers but still detectable by computers. This technique can be used to embed hidden prompts or keywords in documents, potentially manipulating search engines or large language models (a simple detection sketch follows this list).
  • Clarification on AI hallucinations: The updated guidance provides further detail on the risk of AI hallucinations, where tools generate plausible but inaccurate information – examples include fabricated case law, incorrect legal citations and misleading factual assertions. This comes hot on the heels of recent judicial criticism of the use of AI (see in particular the judgment of Dame Victoria Sharp in [2025] EWHC 1383 (Admin)).
  • Judicial engagement with evidence: A clear reminder has been added that AI tools cannot replace direct judicial engagement with evidence. Judicial office holders are advised to read underlying documents themselves and not rely solely on AI-generated summaries.
  • Glossary expansion: The refreshed guidance also expands the glossary of common AI-related terms, reflecting the ongoing effort to familiarise judicial users with emerging concepts. 
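
By way of illustration only, the short sketch below shows one way hidden “white text” might be detected programmatically. It uses the open-source python-docx library; the file name is hypothetical, and the check is deliberately simple – it only flags runs whose font colour is explicitly set to white, and will not catch other concealment techniques (tiny fonts, white highlighting, hidden metadata and so on).

    # Minimal sketch: flag text whose font colour is explicitly set to white.
    # Requires the python-docx package; "submission.docx" is a hypothetical file.
    from docx import Document
    from docx.shared import RGBColor

    WHITE = RGBColor(0xFF, 0xFF, 0xFF)

    doc = Document("submission.docx")
    for number, paragraph in enumerate(doc.paragraphs, start=1):
        for run in paragraph.runs:
            # run.font.color.rgb is None where the colour is inherited from a style
            if run.font.color.rgb == WHITE and run.text.strip():
                print(f"Possible hidden text in paragraph {number}: {run.text!r}")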

The guidance also removes all references to Microsoft's Copilot Chat. It is not clear why: it was reported in April 2025 that judicial office holders were being encouraged to make use of the Copilot Chat generative AI (genAI) capability via their in-house eJudiciary platform.

Confidentiality and Public AI Tools

Significantly, the guidance reiterates that any information inputted into a public AI chatbot should be treated as if it were published to the world. This reflects a broader concern about how AI tools are used in professional and judicial settings, particularly when handling confidential material. As AI systems become increasingly embedded in the workplace, the distinction between public tools and tools where confidentiality can be properly protected becomes ever more important.

Public AI tools are typically accessed through consumer-facing websites or apps, such as open chatbots available to anyone. Information entered into these platforms may be stored, used to retrain models, or otherwise exposed beyond the user's control. By contrast, enterprise-grade AI systems – including those deployed under corporate licences or in secure “sandboxed” environments – generally operate within contractual and technical parameters designed to prevent external access or model training on user data. These differences underpin the judiciary's warning: not all AI is created equal, and not all use carries the same confidentiality risk.
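
To make that distinction concrete, the sketch below illustrates the kind of technical control an enterprise deployment might sit behind: an internal wrapper that refuses to send material anywhere other than an approved, contractually governed endpoint. All names and URLs are hypothetical; this is a minimal illustration, not a description of any particular product.

    # Hypothetical sketch of a technical control: only approved, enterprise-grade
    # AI endpoints may receive firm material. Names and URLs are invented.
    APPROVED_ENDPOINTS = {
        "https://ai.internal.example-firm.com/v1/chat",  # sandboxed enterprise instance
    }

    class UnapprovedEndpointError(RuntimeError):
        """Raised when material is about to be sent to a non-approved AI tool."""

    def send_to_model(endpoint: str, prompt: str) -> None:
        if endpoint not in APPROVED_ENDPOINTS:
            raise UnapprovedEndpointError(
                f"{endpoint} is not approved: treat anything entered into a public "
                "tool as if it were published to the world."
            )
        # ...forward the prompt to the approved, contractually governed service...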

Privilege and Legal Confidentiality

The judicial guidance implicitly assumes that non-public, enterprise AI tools – properly configured and governed – can be used safely, while public AI tools pose unacceptable risks. That distinction is sensible from a data security standpoint, but it also touches directly on the question of legal professional privilege, which remains one of the least settled issues in this emerging area.

At its core, privilege protects the confidentiality of lawyer–client communications in the interests of justice, not merely the parties. The fact that an AI system analyses data automatically rather than “reading” it in a human sense does not lessen a lawyer's duty to keep client information confidential.

This means the key question is not whether a system is “public” or “private” in name, but whether the user can reasonably expect confidentiality based on the technical and contractual reality of how the system handles data. If an AI platform operates within clearly defined controls that prevent external access, data mining or retraining on inputs, using that system should not in itself amount to a waiver of privilege, disclosure, or a loss of confidentiality. Conversely, entering sensitive material into an open-access chatbot, where inputs may be retained or shared with developers, carries an obvious risk – though it is perhaps concerning that an emerging view appears to treat any interaction with a public chatbot as automatically fatal to confidentiality.

That position arguably overstates the extent of the risk. Developers of public AI models are permitted to use user inputs for certain limited purposes – typically model improvement, quality assurance or safety review – but those inputs are not intended for wider disclosure to the world at large: their permissible use is governed by the applicable user agreement and does not extend to public dissemination. The assumption that such use necessarily results in a loss of confidentiality for all purposes – and thus a waiver of privilege – may therefore be too absolute. These issues have yet to be tested, but by way of example, if law enforcement were to compel production of AI conversation data, it would seem counterintuitive that they could obtain a lawyer's exchanges with a public chatbot but not those with an enterprise-licensed instance of the same model, assuming no third parties could access either data set. That binary distinction seems difficult to justify in principle.

Seen this way, the judiciary's approach – treating all public AI tools as inherently unsafe – errs on the side of caution but arguably oversimplifies a more nuanced reality. AI systems vary widely in their architecture and governance. Some operate as isolated, non-learning instances; others share data with third parties or continuously update based on user inputs. Privilege protection should not depend on the label attached to the AI tool but on whether confidentiality is in fact retained or lost.

In the meantime, practitioners should exercise caution in this area: take steps to ensure confidentiality is protected when using AI tools, and keep records of the confidentiality safeguards surrounding any AI-assisted legal work – from contractual clauses and access controls to data governance protocols – so that privilege can be defended if it is ever challenged.
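
As a purely illustrative example of such record-keeping (all field names and values below are hypothetical), a firm might log each piece of AI-assisted work together with the safeguards applied to it:

    # Hypothetical sketch of a record evidencing confidentiality safeguards
    # around a piece of AI-assisted work. Field names and values are invented.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIUsageRecord:
        matter_ref: str                 # internal matter reference
        tool: str                       # AI tool and deployment type
        date_used: date
        data_shared: str                # categories of data entered into the tool
        safeguards: list[str] = field(default_factory=list)

    record = AIUsageRecord(
        matter_ref="MAT-001",
        tool="Enterprise LLM instance (sandboxed, no training on inputs)",
        date_used=date.today(),
        data_shared="Anonymised chronology; no client-identifying material",
        safeguards=[
            "No-training clause in the licence agreement",
            "Access restricted to the matter team",
            "Inputs deleted after 30 days under the data governance protocol",
        ],
    )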

Firms and in-house legal teams are encouraged to review their internal policies on AI usage, ensure staff are aware of the risks, and implement appropriate safeguards to protect confidential information. As AI continues to embed itself into daily practice, the challenge will be to balance innovation with established legal protections – ensuring that efficiency gains never come at the expense of privilege or client trust.

 

Authored by Lydia Saville, Reuben Vandercruyssen, Thomas Evans, and Alex Cumming.

