Artificial intelligence is already embedded in legal workflows — from contract review bots to AI-assisted briefs. But a newer concept is gaining traction: agentic AI. It’s more than clever algorithms; it’s software that acts with purpose, and on its own, to execute those workflows.
Let’s begin with a clean definition. The UK’s Information Commissioner’s Office defines agentic AI as:
“AI systems composed of agents that can behave and interact autonomously in order to achieve their objectives. These agents are small, specialized pieces of software that can make decisions and operate cooperatively or independently to achieve system objectives. Advances in agentic AI are driven by the integration of large language models (LLMs) with agent-based systems. By providing reasoning and discovery abilities, LLMs enhance an agent’s autonomy. This enables the agent to determine the most appropriate course of action to meet system objectives.”
Just a few years ago, we were discussing the power of prompt-driven generative AI. Agentic AI no longer necessarily requires that back-and-forth banter to work through a matter. These new tools can start with a prompt (which may or may not come from a human), devise a plan to address it, guide themselves through a complex workflow, and finish by executing the action they determine to be most appropriate.
In simpler terms: these are AI tools that can design their own workflows, apply tools when they need them, and pursue goals with minimal human guidance. This goes well beyond the prompt-and-response pattern of the “traditional generative AI” we have been using. Agentic AI is AI that acts.
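To make that loop concrete, here is a minimal sketch of the plan-act-observe cycle at the heart of many agentic systems, written in Python. Everything in it (ScriptedPlanner, the toy tools, the run_agent function) is a hypothetical placeholder for illustration, not any vendor’s actual API.

```python
"""Minimal sketch of the plan-act-observe loop behind many agentic
systems. Every name here (ScriptedPlanner, the toy tools, etc.) is a
hypothetical placeholder, not any vendor's real API."""
from dataclasses import dataclass

@dataclass
class Step:
    action: str       # tool the agent chose, e.g. "search_caselaw"
    argument: str     # input the agent supplied to that tool
    observation: str  # what the tool returned

class ScriptedPlanner:
    """Stand-in for the LLM planner; in a real system this is a model call."""
    def __init__(self, script):
        self._script = iter(script)

    def next_action(self, goal, history):
        # A real planner would reason over the goal and history here.
        return next(self._script, ("finish", ""))

def run_agent(goal, planner, tools, max_steps=10):
    """Loop until the planner decides the goal is met (or steps run out)."""
    history = []
    for _ in range(max_steps):
        action, argument = planner.next_action(goal, history)  # 1. plan
        if action == "finish":
            break
        observation = tools[action](argument)                  # 2. act
        history.append(Step(action, argument, observation))    # 3. observe
    return history

if __name__ == "__main__":
    tools = {
        "search_caselaw": lambda q: f"3 cases found for '{q}'",
        "draft_clause": lambda t: f"draft clause covering {t}",
    }
    planner = ScriptedPlanner([
        ("search_caselaw", "limitation of liability"),
        ("draft_clause", "limitation of liability"),
    ])
    for step in run_agent("add a liability clause", planner, tools):
        print(step)
```

The point to notice is structural: the human supplies only the goal, while the system chooses each action and adapts to what it observes.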
Why It Matters to Lawyers
Agentic AI is already being piloted in legal-tech applications:
- Drafting, reviewing and editing contracts
- Conducting legal research across databases and statutes
- Analyzing depositions
This autonomy is powerful, but legally, it raises knotty issues:
- Liability: If the AI omits a crucial clause, who’s responsible – your firm, the software vendor, or the agent itself?
- Authority: Can a system’s output legally bind a client if its actions weren’t directly overseen?
- Accountability: Who monitors its decision-making, especially when its actions unfold without anyone watching?
These questions echo concerns raised in academic work on the “moral crumple zone,” where blame for an automated system’s failure lands on the nearest human actor even though control was distributed across users, developers, and the system itself.
What Practicing Lawyers Can Do Now
You don’t need to know how to build an agentic AI system, but you should know how to talk about one.
First, understand a simple fact: agentic AI isn’t just smart; it’s autonomous. It can plan, act on its own, and adapt.
Second, when advising on deployment, build these checks into your review:
- Governance matters: insist on logging, transparency, and human review points (see the sketch after this list).
- Contractual clarity: clearly define who’s liable if an AI does something unexpected.
- Pilot first: test agentic AI on low-risk workflows before putting full trust in it.
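To ground the governance point above, here is a minimal sketch of two such controls, an audit log and a human review gate, again in Python. The names (AUDIT_LOG, gated_execute, the action list) are illustrative assumptions, not a real product’s API.

```python
"""Sketch of two controls from the list above: an audit log and a human
review point. All names are illustrative, not a specific product's API."""
import datetime
import json

AUDIT_LOG = "agent_audit.jsonl"

# Actions deemed risky enough to require human sign-off before they run.
REQUIRES_APPROVAL = {"send_to_client", "file_document", "sign_contract"}

def log_action(action, argument, approved):
    """Append every proposed action to an audit trail for later review."""
    record = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "argument": argument,
        "approved": approved,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def gated_execute(action, argument, tools):
    """Run a tool only after a human confirms any high-risk action."""
    approved = True
    if action in REQUIRES_APPROVAL:
        answer = input(f"Agent wants to {action}({argument!r}). Allow? [y/N] ")
        approved = answer.strip().lower() == "y"
    log_action(action, argument, approved)
    if not approved:
        return "action blocked pending human review"
    return tools[action](argument)
```

A low-risk pilot, per the last point above, is the natural place to trial these review points before widening the agent’s authority.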
What the OBA Is Doing to Protect the Legal Profession
In addition to our extensive educational resources, including our AI Academy training platform (free to all OBA members), the OBA is actively advocating for the right lines to be drawn by regulators and the government. We are committed to ensuring lawyers have the skills necessary to leverage AI to assist in efficient client service, add balance to lawyers’ lives and broaden access to lawyers’ services. We are equally committed to ensuring that AI remains a tool for lawyers, not one that purports to replace them. Regulation must draw effective lines to ensure that the public continues to be protected by the knowledge, judgement, accountability, empathy and other skills the human lawyer brings to client service, to the advancement of the law and to the protection of broader societal interests like judicial independence and the rule of law.