
We have moved past the point where AI is the ‘next big thing’ for the insurance industry. The shift from theoretical promise to practical deployment has already taken place; from underwriting and pricing to claims handling and customer service, AI tools are being deployed to streamline operations, enhance decision-making, and unlock new efficiencies. But as insurers embrace these technologies, they face a dual challenge: firstly, managing their own use of AI in an increasingly fragmented regulatory landscape; and secondly, underwriting the risks associated with AI adoption by policyholders (and doing so profitably).
This article sets out the regulatory landscape for insurers as users of AI, examines the risk profile of insurers’ use of AI tools in their businesses, and considers the implications for liability insurers underwriting AI-related risks.
The EU and UK have adopted markedly different approaches to AI regulation, and the implications for insurers operating across jurisdictions must be considered carefully. The divergence presents a compliance challenge for insurers: they must navigate a detailed, codified regime in the EU, while interpreting broader and still evolving principles in the UK.
The EU AI Act is the EU’s flagship legislation for AI governance. It entered into force on 1 August 2024 and will be fully applicable from August 2027. It establishes a horizontal, risk-based framework and classifies AI systems into four categories - unacceptable, high, limited, and minimal risk - with corresponding obligations. The regime is prescriptive, with significant penalties for non-compliance (up to €35 million or 7% of global turnover), and applies to both public and private actors, including those outside the EU whose systems affect EU users.
Unacceptable-risk systems (e.g. cognitive behavioural manipulation, social scoring, certain biometric categorisation) are prohibited outright. ‘High-risk’ systems (a category which includes AI used for risk assessment and pricing in life and health insurance) are permitted but subject to strict obligations.
For some insurers, the implications are significant. AI systems used in pricing, policy drafting and claims handling may fall within the high-risk category, triggering compliance burdens. Moreover, the Act’s extraterritorial reach means UK-based insurers serving EU clients may still fall within the scope of the regime.
By contrast, the UK has so far opted for a sector-led, principles-based model. Rather than enacting overarching legislation, the UK has empowered regulators such as the FCA and PRA to adapt existing rules to AI use cases. While legislation targeting “frontier AI” (broadly, the most powerful AI models) is anticipated, the current framework rests on two pillars: a set of cross-sectoral principles intended to guide regulators in applying existing rules to AI (broadly, safety, fairness, transparency, accountability, and contestability), and a patchwork of IP, data protection, anti-discrimination and consumer protection law.
The UK has not yet introduced binding AI-specific legislation, with the promised ‘AI Bill’ having been pushed back to 2026. However, there is mounting pressure to address concerns around AI risks and to introduce more formal oversight mechanisms. A Private Member’s Bill, the AI (Regulation) Bill, is also going through Parliament and could gain support in the absence of Government-introduced legislation.
It is essential to understand the varying levels of risk associated with different AI use cases. This applies to any entity using AI, but is particularly acute for insurers: the spectrum ranges from low-risk internal tools to high-risk customer-facing applications with significant ethical and regulatory implications.
At the lower end of the risk scale are AI tools used for internal productivity and operational efficiency. Examples include applications for document collation and summarisation, fraud detection algorithms, internal-facing AI assistants, marketing content generation and customer support for general information. These applications typically involve minimal personal data processing and limited direct impact on customers. They are often used to streamline workflows, reduce manual effort, and improve turnaround times. While governance is still necessary, the legal and reputational risks are relatively contained.
However, as AI tools begin to interact directly with customers or influence decision-making, the risk profile increases. Higher-risk examples include claims processing automation, customer support with account access, personalised customer content, and AI-driven counselling or decision support.
To give a sense of the type of risks that insurers must manage, consider how AI affects underwriting fundamentals. Hyper-personalisation of policies, enabled by AI, is on its face an attractive offering: it allows insurers to tailor premiums and coverage with ever more granularity. Taken to the extreme, however, it has the potential to undermine the essential principle of insurance that the “premiums of the many pay the claims of the few.” If AI models come to permit perfect or near-perfect segmentation of risk, high-risk individuals or businesses may be excluded or penalised via prohibitively high premiums. While this may seem attractive from the perspective of an insurer’s bottom line, risk pooling is essential to the value proposition of insurance; if insurers go too far in limiting their own exposure, coverage gaps will follow, and the purpose, value, and attractiveness of insurance may diminish in the eyes of purchasers.
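To make the cross-subsidy point concrete, the following is a minimal illustrative sketch in Python. The expected-loss figures and the expense loading are invented for illustration and do not reflect any real actuarial model; it simply compares a single pooled premium with perfectly segmented premiums:

```python
# Illustrative sketch only: invented expected-loss figures, no real actuarial model.
# Shows how moving from pooled pricing to perfect risk segmentation removes the
# cross-subsidy that makes cover affordable for high-risk policyholders.

expected_losses = {
    "low_risk_a": 100,    # hypothetical annual expected claims cost
    "low_risk_b": 120,
    "medium_risk": 400,
    "high_risk": 3_000,
}

LOADING = 1.25  # hypothetical expense-and-profit loading on expected losses

# Pooled pricing: every member pays the same premium, based on the average loss.
pooled_premium = LOADING * sum(expected_losses.values()) / len(expected_losses)

# Perfect segmentation: each member pays their own expected loss plus loading.
segmented_premiums = {member: LOADING * loss for member, loss in expected_losses.items()}

print(f"Pooled premium for every member: {pooled_premium:,.0f}")
for member, premium in segmented_premiums.items():
    print(f"Segmented premium for {member}: {premium:,.0f}")

# On these invented figures, the high-risk member's premium rises from ~1,131
# (pooled) to 3,750 (segmented): the 'premiums of the many' no longer
# subsidise the 'claims of the few'.
```

On these invented figures, the high-risk policyholder’s premium more than triples once the pool is fully segmented, illustrating how ever more granular pricing can price the riskiest customers out of cover altogether.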
Effective governance is essential to mitigate AI-related risks and ensure regulatory compliance. Insurers must embed AI oversight into their enterprise risk and compliance frameworks.
In short, governance must be proactive, iterative, and embedded. The regulatory landscape may be fragmented, but the direction of travel is clear: insurers must demonstrate that they understand and are managing the risks of AI.
This requires more than compliance with minimum legal standards. It demands a strategic, enterprise-wide approach to AI governance that aligns with evolving regulatory expectations and ethical norms. Insurers must be able to evidence not only that AI systems are technically sound, but that they are deployed responsibly - with appropriate safeguards, oversight mechanisms, and accountability structures in place throughout the AI lifecycle.
Ultimately, AI governance must be considered not as a discrete compliance exercise, but as a core component of operational resilience, consumer protection, and reputational integrity. The stakes are high, and the margin for error is narrowing.
Increased AI use clearly presents both opportunity and disruption. While its potential to enhance efficiency, accuracy, and customer experience is widely acknowledged, the technology’s rapid evolution and high risk profile mean that businesses must tread with care when integrating it into their processes, or risk regulatory scrutiny, legal liability, and reputational harm.
Those risks translate into a significant issue for the liability insurance market, since liability insurers are likely to pick up a large portion of that exposure through traditional liability policies.
More than that, claims arising from the creation, deployment, or use of AI are not just possible, but inevitable.
How insurers address this developing AI risk will be critical: traditional policy frameworks may struggle to respond adequately, and a sophisticated, adaptive approach is required.
Because the technology (and the regulatory environment) is developing so quickly, AI can in many ways be seen as an accelerator or amplifier of existing liability risk. The pressing issues for insurers are therefore the different ways in which AI risk might manifest, the extent to which those risks are covered by existing insurance policies, and which insurance lines are likely to be affected.
One of the most pressing concerns is the emergence of “silent AI” exposure - where liability arising from AI is picked up under existing policies that were not designed to cover it. As discussed in our recent article, ‘Insuring AI risks: is your business (already) covered?’, these exposures arise across a wide range of business lines, including professional indemnity, product liability, D&O, cyber, and employers’ liability. At present, such policies typically neither explicitly exclude AI-related risks nor affirmatively cover them, creating uncertainty about where AI losses will crystallise and how they will be handled.
The current approach to AI liability resembles the early days of cyber risk, where exposures were also initially addressed on a case-by-case basis. Over time, the market responded with exclusions, sub-limits, and eventually affirmative cyber cover. A similar trajectory may unfold for AI: the roadmap is not yet clear, but bespoke AI insurance products are already being developed, and exclusions or stringent sub-limits for AI liability seem likely to emerge in other lines, just as they did for cyber. Whatever form the market response takes, clarity in policy wordings will be key to managing the risk.
AI offers immense promise for the insurance sector, but also raises some significant challenges. Embracing good governance, investment in risk management, and engagement with regulators will all be key factors in ensuring responsible adoption. Liability insurers, in particular, will want to think carefully about the interaction of AI risk with traditional coverages and prepare for a future where AI is central to both risk and reward.
Authored by Lydia Savill and Math Steven.