Artificial Intelligence (AI) is evolving at an unprecedented pace, and the emergence of “agentic AI” is at the leading edge. Unlike traditional AI systems, which are designed to perform specific, narrowly defined tasks (e.g., generating text or images, or analyzing inputs) and rely on human input and oversight, agentic AI systems can complete far more complex, multi-step tasks with a high degree of autonomy, making decisions based on context. Beyond acting as tools that users leverage for specific tasks, agentic AI systems equip individuals and businesses with agents that can plan, adapt, and make decisions on their behalf. The promise of agentic AI opens up a world of possibilities for businesses and consumers alike, but it also introduces new risks and challenges for organizations.
What is agentic AI?
Agentic AI systems can operate with a significant degree of autonomy and are capable of making decisions and taking actions to facilitate an outcome or achieve a specific goal without requiring step-by-step instructions from the end-user. While a “traditional” generative AI chatbot may be able to recommend a travel itinerary based on information a user provides about their preferences, an agentic AI system could independently search for hotels, compare prices, book the room, and even arrange transportation, all without the need for human intervention.
The core attributes of agentic AI systems are their ability to:
- Plan and facilitate complex tasks. These systems collect real-time data from a range of diverse sources, ingesting structured, semi-structured, and unstructured data. They can then interpret that context to chart a path toward desired goals, break down multi-step problems, assign tasks to different system components, and coordinate among them to ensure completion.
- Adapt and learn. These systems can adjust their strategies for approaching projects based on their receipt of new information, detection of changed circumstances, or assessment of success of prior tasks, making them highly flexible and agile.
- Operate autonomously. While users provide a desired outcome, the agentic AI system works independently to achieve it, using its own decision-making, memory, and methodology with limited human input or oversight.
What are real-world applications of agentic AI?
Agentic AI is already in use for a range of relatively simple tasks, such as scheduling meetings or facilitating customer support automation. However, as the technology continues to evolve and mature, it will become capable of handling even more complex tasks, from managing supply chains to monitoring and facilitating hospital patient care and coordinating natural disaster response efforts. The emergence of these systems could transform a wide range of industries and business functions, including:
- Consumer-facing systems. Consumers may increasingly rely on agentic AI systems to interact with businesses’ products and services (e.g., an AI shopping assistant that may visit a retailer’s website, compare products, and execute purchases for a consumer).
- Customer support. Agentic AI systems may soon go beyond answering consumer questions and begin to facilitate processes such as handling refunds and scheduling in-person services (e.g., utilities companies may use an agentic AI system to handle consumer reports of power outages or downed phone lines and dispatch technicians).
- Internal operations. Businesses can use agentic AI systems to automate internal workflows like managing calendars, reviewing and drafting documents, and booking meeting spaces. Systems may be used to schedule and adjust workforce staffing based on the flow of merchandise or bolster key IT workflows, including automating IT ticket resolution and detecting anomalies in monitored network traffic.
- Sales and marketing. Agentic AI systems can streamline sales and marketing processes by managing inventory, monitoring and responding to delivery issues, and perhaps even negotiating contracts.
What are the potential risks of agentic AI?
Agentic AI introduces emerging risks that businesses should consider addressing proactively. These include:
- Governance challenges. The autonomous nature of agentic AI may complicate existing accountability and oversight processes, making it harder to align these systems with current AI governance principles, including accuracy and reliability testing protocols. And as interactions increasingly shift from human-to-agent to agent-to-agent, exposing agents to less constrained environments, there is an increased risk that agents will learn to circumvent the safeguards and guardrails with which they were initially designed.
- Responsibility and liability. Agentic AI systems raise questions about who should bear responsibility when an AI agent makes a mistake or causes harm (for example, if an agent enters into a contract on a user’s behalf) and which legal framework will apply; questions of indemnification are likely to arise if the agentic AI tool is offered by a third-party vendor, especially if the tool takes actions that are illegal or cause harm to users or others. And, as with generative or other AI systems, intellectual property questions surrounding AI-generated content are likely to arise.
- Transparency. Agentic AI systems may operate in ways that are opaque and difficult to understand. This opacity risks eroding consumer trust and may draw the attention of regulators, many of whom have already made AI enforcement a priority. Additionally, businesses may struggle to clearly articulate how these tools function given the complex nature of these systems, making it difficult to meet regulatory transparency obligations.
- Bias and discrimination. Like all AI systems, agentic AI is a product of the data it is trained on. Biases in the underlying training data can be perpetuated or amplified by these systems, and the ability of agentic AI systems to act with limited direction or human oversight increases the risk of potentially discriminatory interactions.
- Privacy. As agentic AI systems evolve and their use increases, their access to personal data and the amount of personal data they process will increase exponentially. Traditional guardrails, like processes put in place to achieve data minimization, may be less effective due to the autonomous nature of the technology. And as agent-to-agent interactions increase, it may become harder to abide by certain privacy law requirements, like obligations under California law to offer an opt-out for certain automated decision-making, or to draw clear distinctions between traditional controller and processor relationships.
- Security. As AI systems become increasingly interconnected and embedded within daily life and business, these systems also become increasingly vulnerable to potential fraud and security threats. Bad actors may be able to exploit these systems to engage in harmful conduct previously not possible, like manipulating the decision-making of a shopping assistant to make unauthorized purchases or directing a finance management assistant to withdraw and transfer funds. Novel security threat vectors may also materialize, like malicious instructions injected through email content, memory poisoning, or autonomous privilege escalation.
Next steps: preparing for a future with agentic AI
While businesses are eager to incorporate agentic AI into their operations, products, and services to remain competitive, they may also wish to take proactive steps to assess, document, and mitigate emerging risks.
This may include enhanced governance processes, such as:
- developing clear frameworks for the deployment and oversight of agentic AI systems;
- documenting and re-visiting decision-making processes;
- updating public-facing policies to explain agentic AI systems in terms consumers can understand;
- routinely auditing systems for bias and adherence to guardrails;
- adapting data governance and security practices;
- confirming that contractual provisions extend applicable protections to third-party tools; and
- working with legal counsel to better understand potential liability implications.
Further, this may involve enhancements to tool or agent design, such as:
- implementing least-privilege permissions and sandboxed execution environments to help avoid potential abuse of integrations;
- applying validation frameworks across both user inputs and external content accessed by the agent;
- recording and maintaining logs of agentic AI system behaviors;
- building in mandatory human approval and intervention for higher-risk transactions, as well as routine human review; and
- regularly testing emergency override procedures.
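To make the design safeguards above more concrete, the following is a minimal sketch (not a production implementation) of how a least-privilege allow-list, a human-approval gate for higher-risk actions, and an audit log might fit together in an agent's tool-dispatch layer. All names here (`Tool`, `AgentGateway`, the example tools) are illustrative assumptions, not references to any particular product or framework.

```python
# Illustrative sketch: gating an agent's tool calls with least-privilege
# permissions, human approval for high-risk actions, and audit logging.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    action: Callable[..., str]
    risk: str = "low"  # "low" actions run directly; "high" require approval

@dataclass
class AgentGateway:
    allowed_tools: set[str]            # least-privilege allow-list
    approve: Callable[[str], bool]     # human-approval callback
    audit_log: list[str] = field(default_factory=list)

    def invoke(self, tool: Tool, *args) -> str:
        # 1. Least privilege: reject any tool outside the allow-list.
        if tool.name not in self.allowed_tools:
            self.audit_log.append(f"DENIED {tool.name}: not permitted")
            return "denied: tool not permitted"
        # 2. Human-in-the-loop: high-risk actions need explicit sign-off.
        if tool.risk == "high" and not self.approve(tool.name):
            self.audit_log.append(f"BLOCKED {tool.name}: approval withheld")
            return "blocked: human approval required"
        # 3. Record every executed action for later audit and review.
        result = tool.action(*args)
        self.audit_log.append(f"RAN {tool.name} -> {result}")
        return result

# Hypothetical usage: an agent may search freely, but purchases
# require a human reviewer (who declines in this example).
search = Tool("search", lambda q: f"results for {q}")
purchase = Tool("purchase", lambda item: f"bought {item}", risk="high")

gateway = AgentGateway(allowed_tools={"search", "purchase"},
                       approve=lambda name: False)  # reviewer declines
print(gateway.invoke(search, "hotels"))    # runs normally
print(gateway.invoke(purchase, "flight"))  # blocked pending approval
```

The point of the sketch is structural rather than technical: permissions, approval, and logging sit in one choke point between the agent and its tools, which is also where the routine human review and emergency overrides described above would attach.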
Taking proactive steps like these to understand and address foreseeable risks helps position businesses to innovate and thrive in a world increasingly reliant on agentic AI systems.
Authored by Sophie Baum, Ryan Campbell, and Emma Kotfica.