Insights and Analysis

Bar Council’s updated AI guidance – clearer expectations, limited change in practice


The Bar Council's November 2025 note on generative AI is an evolution of its original January 2024 paper rather than a wholesale rewrite.

The revised document, developed by the Bar Council's IT Panel with input from its Ethics and Regulation panels, is not formal BSB “guidance” but is presented as reflecting current professional expectations.

Launching it, the Chair, Barbara Mills KC, warned of “the dangers of the misuse by lawyers of artificial intelligence… and its serious implications for public confidence in the administration of justice”, and reminded the profession that “the public is entitled to expect the highest standards of integrity and competence in the use of new technologies”. The Bar Council accepts that AI's spread is “inevitable and occurring at a fast pace”, but stresses that barristers must remain “vigilant” as the legal and regulatory picture evolves.

The central message is the same: AI can be used to support practice, but only within existing professional and ethical duties, and misuse can have serious consequences.

The updated guidance takes a firmer line on appropriate conduct. It draws on new case law, empirical studies and regulatory action, and makes clearer that its principles reflect current professional expectations. At the same time, much of what it says will already be intuitive to practitioners who routinely verify authorities and treat confidentiality as non-negotiable. The guidance does not radically change what careful barristers should already be doing.

Context and background – what has actually changed?

The 2025 note follows the structure of the 2024 version closely, but several strands have been updated and expanded.

1. Status and framing

The note is still not formal “guidance” under the BSB Handbook, but it now says that its principles reflect current professional expectations, particularly in light of recent High Court judgments, and stresses a “heightened imperative” to use AI responsibly, protect confidentiality and comply with law and ethics. In practice, this document is likely to be relevant when assessing the conduct of barristers in relation to AI.

2. Scope of tools

References now cover both general-purpose systems (ChatGPT, Gemini, Perplexity, Copilot) and legal-specific tools (Lexis+ AI, Clio Duo, Thomson Reuters Co-Counsel). The note recognises that generative AI is embedded in research, drafting and practice-management workflows, and makes clear that the same duties apply whether the tool is a standalone chatbot or part of a familiar legal platform.

3. Model description and hallucinations

The technical explanation is updated for newer, multi-modal models, and there is a clearer statement that LLMs are predictive tools, inherently prone to fabrication. The 2025 note repeats Mata v Avianca and expands the English authorities (including Ayinde v Haringey and MS v SSHD), and cites empirical work (such as a Stanford study) showing hallucination rates in leading legal research tools. Hallucination risk is presented as a live problem rather than a theoretical one.

4. Verification and legal research

A new section on “mandatory verification of outputs and human oversight” underlines that AI may assist efficiency but cannot replace independent research, analysis and judgment. Misleading the court via AI-generated material – even inadvertently – may amount to incompetence and serious professional misconduct, engaging Core Duties 1, 3 and 5. The same standard applies to legal-specific tools and public chatbots: the duty of care does not depend on the branding.

5. Confidentiality, personal data and cyber security

The position on confidentiality and personal data remains strict: strong warnings against inputting privileged or confidential information and a repeated instruction not to enter personal data into generative tools. The 2025 note adds emphasis on understanding how each tool handles inputs (including “protective settings”), reviewing terms and conditions for compatibility with Core Duty 6, rC15.5 and data protection law, and considering cyber risk, including AI-enabled phishing or business email compromise. The description of risk is more detailed, but the overall stance is highly cautious.

6. Regulatory landscape and court procedure

The guidance reflects wider developments, including progress of the EU AI Act, data protection enforcement against public AI providers, and the Civil Justice Council’s working group on AI in civil proceedings. It notes that some overseas courts already require parties to disclose AI use in submissions and suggests that similar requirements may develop in England and Wales. AI-related procedural rules are framed as a matter of “when”, not “if”.

7. Research infrastructure

The note reiterates that Inns of Court libraries and associated services remain the primary sources of authoritative materials and directs barristers to them as part of safe AI use. AI is therefore positioned as a supplementary tool rather than the starting point.

Analysis – clear guidance, but implementation left up to practitioners

Against that background, the guidance’s core principle is straightforward: barristers remain fully responsible for the content of their work, and cannot outsource judgment to AI.

On verification, the note’s language is stronger, but the underlying duty is unchanged. Most practitioners will already regard checking citations and statute wording as basic practice. What the note does not do is explain how that checking should change when AI is involved, or how to guard against automation bias – the tendency to trust confident, well-written AI output, particularly under time pressure.

On confidentiality and personal data, the guidance sets a strict baseline and encourages closer attention to technical settings and contractual terms. It does not, however, explain how different deployment models – from public web tools to enterprise tenants and on-premise systems – may affect privilege and confidentiality in practice.

For many readers, the most acute risk will be the “privilege trap” inherent in using public tools with broad training rights: pasting privileged client material into an AI tool whose terms give the provider broad rights to use and analyse that data could allow an opponent later to argue that legal professional privilege has been waived over that material. The note comes close to that point without spelling it out.

The expanded section on leadership makes plain that heads of chambers and firms are expected to engage actively with AI risk. That sits comfortably within established governance structures. But in the context of the self-employed Bar, it inevitably raises questions about how far chambers can realistically supervise individual technology use where members are independent and prompt histories are private. The guidance identifies the expectation rather than indicating what might work in practice.

Finally, there is a structural issue which the note does not address directly. By cautioning strongly against the naïve use of public tools and by emphasising verification, confidentiality and cyber security, it nudges practitioners towards private enterprise-style environments if they wish to use AI at all. In practice, those deployments may be easier for larger commercial sets and firms to fund and support than for publicly funded practitioners.

Conclusion

The Bar Council’s November 2025 update to its generative AI guidance underlines that AI is now squarely a professional responsibility issue, and it ties that message to real case law, data and regulatory activity. The themes of verification, confidentiality and leadership involvement are treated as non-negotiable.

What the note doesn’t try to be is a how-to guide. It leaves space for different approaches to policy, oversight and technology, while marking out a clear minimum standard. For chambers and firms that are already thinking seriously about AI governance, it is likely to function more as a reference point than as a step-by-step playbook.

Authored by Reuben Vandercruyssen and Lydia Savill.
