September always feels like a reset: a new term, fresh notebooks, sharper focus. For Legal, this season brings a subject we can’t avoid. Everyone is talking about it. But what is it really?
Agentic AI.
What We Mean When We Say “Agentic”
So far, legal AI has been a clever assistant. You drop in a contract, it redlines. You ask a question, it answers. You give it a task, it executes. Faster, cheaper, tireless — but always waiting on you. These tools are reactive, passive, and at best glorified copilots.
Agentic AI is not a copilot. It doesn’t wait for you to act. It acts.
The difference is not incremental; it’s existential. A chatbot waits for your prompt. An agent takes a goal — “make sure our vendor contracts comply with the new data policy” — and pursues it on its own. It fetches the contracts from the repository. It applies the playbook. It benchmarks against market data. It escalates the exceptions. It drafts a compliance report for the board. And it does all this without you clicking through each step.
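In software terms, that goal-driven loop can be sketched in a few lines. Everything below is a hypothetical illustration, not any product's actual implementation: the function names, the `Contract` shape, and the single liability-cap rule are all assumptions made for the sake of the sketch.

```python
from dataclasses import dataclass

# Hypothetical policy: liability caps above this ceiling get escalated.
POLICY_CAP = 1_000_000

@dataclass
class Contract:
    vendor: str
    liability_cap: int

def fetch_contracts():
    # Stand-in for pulling vendor contracts from a repository.
    return [
        Contract("Acme", 500_000),
        Contract("Globex", 2_000_000),
    ]

def apply_playbook(contract):
    # Returns None if the contract is compliant, or a finding to escalate.
    if contract.liability_cap > POLICY_CAP:
        return f"{contract.vendor}: cap {contract.liability_cap} exceeds policy"
    return None

def run_agent():
    # The agent pursues the goal end-to-end: fetch, check, escalate, report.
    contracts = fetch_contracts()
    findings = [f for c in contracts if (f := apply_playbook(c)) is not None]
    return {
        "reviewed": len(contracts),
        "escalations": findings,
        "compliant": len(contracts) - len(findings),
    }

print(run_agent())
```

The point of the sketch is the shape of the loop, not the rule itself: no step waits for a human click, and the human contribution is entirely contained in `POLICY_CAP` and `apply_playbook`.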
This is where the provocation lies: reactive AI still made the lawyer the operator. You were the one in the cockpit, hands on the controls. Agentic AI changes the relationship entirely. You are no longer flying the plane — you’re writing the flight plan. And if you don’t, the plane will still take off.
That should unsettle you. Because once contracting processes can run end-to-end without you, Legal no longer governs by default. The seat of control shifts. Either you codify your judgment into rules that agents follow, or those rules will be written by someone else — Sales, Procurement, even the vendor configuring the tool.
In short: reactive AI supports. Agentic AI operates. And when the system operates without you, the question isn’t whether Legal is efficient enough. The question is whether Legal still has authority at all.
The Ripple Effect: BigLaw, CLM, and Regulation
If agentic AI shifts power inside companies, imagine what it does to the industry around them.
For BigLaw, the traditional model already looks fragile. The billable hour depends on labor-intensive review — armies of associates poring over contracts. Agentic systems obliterate that justification. When a client knows an agent can process an MSA portfolio overnight, why pay for 500 hours of junior time? The pyramid model cracks, and with it, the economics of the firm. Partners will insist “our judgment is what matters,” but clients will push back: judgment is valuable only when it’s codified into systems that scale. The firms that survive won’t be the ones clinging to time sheets, but the ones who productize their knowledge into agents that clients can deploy directly.
For CLM vendors, the implications are just as severe. The big pitch of the last decade was visibility and workflow management: store your contracts, track your metadata, manage approvals. Useful, but static. When agents can fetch contracts, run playbooks, generate reports, and close loops autonomously, the role of a traditional CLM shrinks to plumbing. Monolithic platforms that promised “end-to-end” contracting will be unbundled into modular agentic layers that deliver real outcomes. The winners won’t be the ones with the most features; they’ll be the ones whose agents actually act — seamlessly, intelligently, and at speed.
And then there are regulators. Auditability, accountability, explainability: these will stop being abstract concerns and start being legal requirements. If an agent declines to escalate a deviation in a data processing clause, who’s responsible? If it accepts a fallback that exposes the business to litigation, who carries the liability? Regulators will demand not just oversight but proof of governance. Lawyers will have to ensure their rules aren’t just encoded, but defensible — with audit trails that show why an agent acted as it did. That’s not just compliance; that’s survival.
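What a "defensible" audit trail means in practice is that every autonomous decision leaves a structured record: which clause, which rule, what the agent did, and when. The sketch below is a minimal assumption of what such a record might contain; the field names and the rule identifier are illustrative, not a regulatory schema.

```python
import json
from datetime import datetime, timezone

def log_decision(action, clause, rule_id, outcome):
    # Each autonomous decision records which rule fired, on which clause,
    # and with what result, so the agent's reasoning can be reviewed later.
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "clause": clause,
        "rule_id": rule_id,
        "action": action,
        "outcome": outcome,
    }
    return json.dumps(entry)

record = log_decision(
    action="accept_fallback",
    clause="data_processing",
    rule_id="DP-07",          # hypothetical playbook rule identifier
    outcome="no escalation",
)
print(record)
```

A record like this is what turns "the agent decided" into "rule DP-07, as approved by Legal, decided" — which is the difference between oversight and proof of governance.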
So the question is no longer whether agentic AI will change how we work. It’s who will write the rules before it does.
Because agents don’t wait. They act. And when they act, they shift power. If lawyers don’t claim that power by codifying judgment now, someone else will — and the authority Legal has guarded for decades will dissolve overnight.
What This Actually Means in Practice
When I say lawyers risk losing authority if they don’t codify their judgment, I don’t mean in some distant, abstract future. I mean in the workflows already creeping into your business today.
Take contracting. If you haven’t already embedded your risk positions into a playbook, Sales won’t wait. They’ll configure the AI to “get the deal done,” setting thresholds that prioritise speed over protection. Procurement will do the same, tightening SLAs in ways that expose you to liabilities you’d never have signed off on. Even vendors will step in, bundling “default playbooks” into their tools, effectively outsourcing your risk tolerance to someone else’s template.
In that world, Legal stops being the governor of contracting. You’re not asked to weigh in, because the system is already acting on rules you didn’t write.
So what can you do now? You start codifying. You take the instincts you rely on every day — the fallback you always push for on liability caps, the concessions you’ll allow on indemnities, the red lines you’ll never cross on data ownership — and you turn them into explicit, structured rules. You write them down, test them, and embed them into your review systems. You stop letting judgment live in email chains and dusty PDFs, and you build living playbooks that can run on rails.
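What "explicit, structured rules" can look like is simpler than it sounds: each instinct becomes a preferred position, an allowed fallback, and a red line. The schema, clause names, and positions below are illustrative assumptions, not any vendor's actual playbook format.

```python
# A minimal sketch of a codified playbook. Every field here is an
# assumption for illustration: real playbooks would be richer.
PLAYBOOK = [
    {
        "clause": "liability_cap",
        "preferred": "12 months of fees",
        "fallback": "24 months of fees",
        "red_line": "uncapped liability",          # never accept
    },
    {
        "clause": "data_ownership",
        "preferred": "customer owns all data",
        "fallback": None,                          # no concession allowed
        "red_line": "vendor ownership of customer data",
    },
]

def evaluate(clause, proposed_position):
    """Map a counterparty's proposed position to an action."""
    rule = next(r for r in PLAYBOOK if r["clause"] == clause)
    if proposed_position == rule["red_line"]:
        return "reject"
    if proposed_position == rule["preferred"]:
        return "accept"
    if rule["fallback"] and proposed_position == rule["fallback"]:
        return "accept_fallback"
    return "escalate"       # anything novel still reaches a lawyer

print(evaluate("liability_cap", "24 months of fees"))                    # accept_fallback
print(evaluate("data_ownership", "vendor ownership of customer data"))   # reject
```

Notice what survives codification and what doesn't: the standard positions run on rails, while anything outside them falls through to "escalate" — the judgment you didn't write down is exactly the judgment the system routes back to you.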
>> See how you can codify your rules into the Law Insider AI playbook
This isn’t busywork. It’s power work. The authority Legal has always claimed — to define acceptable risk, to shape negotiation posture, to govern the contracting process — will only survive if those rules are encoded into the systems that are about to act on your behalf.
Because here’s the reality: the agents are coming. They will execute whether you’re ready or not. The only question is whose rules they’ll be following.
How Law Insider’s Tools Are Evolving
This shift isn’t theoretical. It’s already underway in the tools you use.
Take the Law Insider Word Add-In. Today, it waits for you to open a contract and applies your playbook in real time. Tomorrow, it will fetch the relevant version from your repository, compare it against negotiation history, apply the rules, and propose next steps — all before you’ve even opened the document.
Or consider contract review tools. At present, they flag issues for you to resolve. Soon they will resolve the standard ones themselves — inserting fallbacks, fixing missing clauses, and returning the draft to Sales. Only truly novel issues will make their way to your desk.
Even AI assistants, which today simply answer your questions, will evolve into agents that chain tasks together: drafting a clause, benchmarking it against precedent, updating the playbook, and pushing the change live across your templates.
>> See our AI Assistant in action
What links all of these is autonomy. Execution stops being something you initiate, and becomes something the system drives — with you as the designer of the guardrails.
The Stakes for Legal
Handled well, this is the elevation Legal has been waiting for. Lawyers finally step out of the bottleneck of manual execution and into the role of system architects — shaping strategy, embedding judgment, and governing risk frameworks that scale.
But handled poorly, the shift is dangerous. Agents will act with or without Legal. If the rules aren’t set, the business will set them. If governance isn’t in place, risk will creep in by default.
This is why agentic AI feels different from every other wave of legal tech. CLM organized your contracts. Automation sped things up. AI review tools made risk spotting easier. But none of them shifted control. Agentic AI does.
Back to School, Forward to the Future
So as we step into autumn, this is the subject on the syllabus that can’t be ignored. Agentic AI is not just another shiny tool. It is a shift in power over the contracting process itself.
The lawyers who thrive won’t be the ones clinging to line-by-line review as their moat. They’ll be the ones who codify their judgment, set the guardrails, and claim ownership of the systems that will soon run autonomously.
Because the question is no longer whether agentic AI will take control of contracting. The question is whether it will do so under your authority — or someone else’s.
Tags: AI, Agentic AI