As AI evolves, agentic AI has emerged as one of 2025's defining tech trends: autonomous AI systems capable of making decisions and executing complex tasks with limited or no human input. Financial institutions are already exploring and adopting agentic AI to boost efficiency, scalability, and innovation. However, with transformative potential come legal and regulatory risks, particularly when third-party AI agents act on behalf of customers. This article explores the legal risks financial institutions face, along with risk mitigation measures, as they look to embrace agentic AI.
Because agentic AI is a comparatively recent development, there is no universally accepted or harmonised definition. The UK government, for instance, defines agentic AI as AI systems composed of agents that can behave and interact autonomously in order to achieve their objectives. In contrast, the EU AI Act makes no reference to agentic AI. Consequently, agentic AI is typically understood by reference to the capabilities that distinguish it from other digital systems:
Agentic AI's autonomy and goal-oriented nature make it conceptually distinct from generative AI applications, such as ChatGPT, which require explicit human instructions or prompts to produce an output. Equally, agentic AI is distinct from robotic process automation (RPA), such as scripted chatbots, which complete repetitive, structured tasks. By comparison, agentic AI leverages machine learning technology to complete complex tasks and adapt to a variety of use cases. Agentic AI can, therefore, be considered an advanced category of AI that combines the versatility and competency of machine learning technology with the utility of RPA.
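To make this distinction concrete, the following is a minimal, illustrative sketch in Python. All function and type names are hypothetical and do not represent any vendor's framework; it simply contrasts a single-shot generative call (one prompt, one output) with an agentic loop that plans, acts on the results, and iterates towards a goal without further human prompts.

```python
# Illustrative sketch only: hypothetical stubs, not a real agent framework.
# Contrasts single-shot generation with an autonomous plan-act-observe loop.

from dataclasses import dataclass, field


@dataclass
class AgentState:
    goal: str
    observations: list = field(default_factory=list)
    done: bool = False


def generative_call(prompt: str) -> str:
    """Generative AI: one explicit prompt in, one output, no follow-up."""
    return f"draft response to: {prompt}"


def plan_next_action(state: AgentState) -> str:
    """Hypothetical planner: picks the next step from the goal and history."""
    if not state.observations:
        return "gather_account_data"
    return "summarise_findings"


def execute_action(action: str) -> str:
    """Hypothetical tool call (e.g., querying an internal system)."""
    return f"result of {action}"


def run_agent(goal: str, max_steps: int = 5) -> AgentState:
    """Agentic loop: the system decides and acts repeatedly on its own."""
    state = AgentState(goal=goal)
    for _ in range(max_steps):  # a hard step limit caps the loop's autonomy
        action = plan_next_action(state)
        state.observations.append(execute_action(action))
        if action == "summarise_findings":
            state.done = True
            break
    return state


if __name__ == "__main__":
    # Generative AI: a single prompt produces a single output.
    print(generative_call("summarise this account's activity"))
    # Agentic AI: given only a goal, the loop chooses and executes its own steps.
    print(run_agent("summarise this account's activity").observations)
```

The hard step limit in run_agent hints at a theme developed below: the more autonomously such a loop is allowed to run, the greater the need for guardrails.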
The attributes outlined above make agentic AI well-suited to a variety of use cases in financial services:
Agentic AI poses many of the same risks for financial institutions as generative AI. However, its degree of autonomy, limited human oversight, increased attack surface and the possibility of AI agents interacting with one another could exacerbate those risks:
The risks that agentic AI poses to financial institutions do not arise only from its internal implementation. A growing number of AI agents on the consumer market mimic human behaviour in their own web browser (for instance, Operator by OpenAI), enabling them to carry out online tasks autonomously, such as internet shopping and travel planning. This technology could lead to an AI agent, having been provided with a customer's online banking credentials, interacting with financial institutions to complete financial tasks on the customer's behalf. Such a scenario would undoubtedly introduce novel legal and commercial risks:
The regulatory environment surrounding agentic AI remains ambiguous, as existing frameworks regulating AI more broadly have yet to explicitly address agentic systems. Agentic AI applications will likely come under the EU AI Act's definition of an "AI system" (although the definition does not expressly refer to autonomous agents). Under the EU AI Act, providers and deployers of an agentic AI system will likely be subject to obligations based on the system's risk level as determined by its intended use. Some use cases of agentic AI in financial services would likely be considered 'high-risk', such as evaluating creditworthiness (notably excluding its use for detecting financial fraud) and conducting risk assessment and pricing for life and health insurance. Additionally, the EU AI Act allows the Commission to add use cases to the high-risk category where they pose risks to health and safety or have an adverse impact on fundamental rights, taking into account "the extent to which the AI system acts autonomously". Therefore, for use cases which do not currently fall within the high-risk categories listed in Annex III, the extent of an agentic AI system's autonomy will be one of the key factors in determining its risk classification and the corresponding regulatory obligations under the EU AI Act. However, what degree of autonomy constitutes 'high-risk', and what 'acting autonomously' means in this context, remain unclear.
Unlike the EU, the UK does not currently have an overarching AI law. Instead, it has to date adopted a sector-specific, principles-based and pro-innovation approach to AI regulation. In addition to applying existing laws such as the UK GDPR, the Equality Act 2010, the PRA Rulebook and the FCA Handbook (including the Consumer Duty) to the development and use of AI systems, the UK government's approach to ensuring responsible AI use is centred around five cross-sectoral principles:
Although there is no regulatory guidance specific to agentic AI in the UK, developers of agentic AI systems should embed these principles by design and ensure ongoing compliance with all relevant existing regulations.
Because regulators have yet to address agentic AI directly, financial institutions seeking to achieve compliance must interpret how existing laws and regulations apply to fast-moving developments and novel use cases.
Many of the risk mitigation measures that financial institutions are already taking with regard to GenAI are equally relevant to agentic AI, such as dedicated AI governance structures; human-in-the-loop oversight mechanisms and monitoring of GenAI's reasoning processes; thorough vendor due diligence and AI impact assessments; and record-keeping and activity logging. However, it is essential that existing mitigation measures are reviewed and updated in light of the compounded risk factors posed by agentic AI outlined previously.
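As a minimal, illustrative sketch of two of these measures in combination, the following Python outlines a human-in-the-loop gate for higher-risk agent actions together with structured activity logging. The threshold, function names and payload fields are hypothetical assumptions, not any institution's actual controls.

```python
# Illustrative sketch only: hypothetical names and thresholds, not a vendor API.
# Shows (1) routing higher-risk agent actions to a human for approval and
# (2) append-only, structured activity logging for record-keeping.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

APPROVAL_THRESHOLD_GBP = 1_000  # hypothetical risk threshold


def record(event: str, **details) -> None:
    """Write a structured, timestamped entry to the activity log."""
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event,
        **details,
    }))


def requires_human_approval(action: dict) -> bool:
    """Route higher-risk actions (here, large payments) to a human reviewer."""
    return (action.get("type") == "payment"
            and action.get("amount_gbp", 0) >= APPROVAL_THRESHOLD_GBP)


def execute_with_oversight(action: dict, human_approves) -> bool:
    """Gate an agent-proposed action behind logging and, where needed, approval."""
    record("action_proposed", action=action)
    if requires_human_approval(action):
        if not human_approves(action):
            record("action_rejected", action=action)
            return False
        record("action_approved", action=action)
    # ... dispatch to the real execution path would go here ...
    record("action_executed", action=action)
    return True


if __name__ == "__main__":
    proposed = {"type": "payment", "amount_gbp": 5_000, "payee": "ACME Ltd"}
    execute_with_oversight(proposed, human_approves=lambda a: True)
```

The design point is that every agent-proposed action leaves an audit trail regardless of outcome, while only actions crossing a defined risk threshold interrupt the agent's autonomy.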
Additionally, given that agentic AI presents unique risks, financial institutions may need to adopt further risk mitigation measures, which could include:
Given the current regulatory ambiguity, horizon scanning will be important as financial institutions look to deploy agentic AI, as regulators are already beginning to take notice. For example, in its recent AI Sprint, the UK Financial Conduct Authority (FCA) explored, among other things, how agentic AI could give rise to better consumer outcomes. Financial institutions should therefore prioritise proactive engagement with regulatory developments so they are well placed to comply with upcoming regulations and navigate the interplay of existing frameworks.
Authored by John Salmon, James Black, Louise Crawford, Daniel Lee, and Felix Scrivens.
If you would like to discuss the issues outlined in this article, please get in touch with a member of the team for more information.
For additional resources related to AI legal risks and regulations, please visit our AI Hub.
This article is for guidance only and is a non-exhaustive summary of certain aspects of the points discussed; it should not be relied on as legal advice in relation to a particular transaction or situation.
Please contact your normal contact at Hogan Lovells if you require assistance or advice in connection with any of the above.