Design Thinking Isn’t a Luxury. It’s the Missing Method for AI Adoption in Legal
This week, I interviewed Astrid Kohlmeier for the Future Contracts Podcast. If you haven’t had a chance to listen to our episodes, you can find them here.
Astrid, co-author of The Legal Design Book, is a visionary in the space of design thinking and an authority on how the methodology has been, and can be, applied in the realm of law. Our conversation sparked some interesting thoughts about how Legal Design has evolved over the years and how it must continue to do so in a world of AI, particularly in how we apply it to deliver AI technology effectively within organizations.
AI adoption in legal teams is often framed as a procurement decision or a technical challenge. But more often than not, the barrier to adoption isn’t technology – it’s behavior. Why do legal professionals ignore tools they’ve been trained on? Why do they revert to manual workflows even when automation is available? Why does “transformation” stall after the first pilot?
The answer lies in how the tools are introduced, understood, and integrated into the working lives of legal professionals. And this is where legal design thinking becomes indispensable, not as an aesthetic flourish, but as a serious methodology for system-level change.
In this edition of Standards Spotlight, I will explore how design thinking has been (mis)applied in legal innovation, and how it must evolve to meet the moment of AI. Then I will lay out a detailed, step-by-step guide for using design thinking to make AI tools actually work inside legal departments, not just on paper, but in practice.
The goal is to offer a practical framework for deploying technology tools in a way that drives the meaningful change your team needs, not just surface-level adoption, but lasting impact.
What Is Design Thinking and Why Should Legal Teams Care?
Design thinking is a problem-solving method grounded in empathy, iteration, and experimentation. It’s been used for decades in product design, service design, and policy development. At its core, it flips the traditional “top-down” approach: rather than beginning with features or business goals, it starts with the people involved. Their frustrations, workflows, mental models, and context become the primary inputs to solution design.
Put simply, it is a methodology or framework that allows us to first understand the problem, articulate it, take a pass at solving it, and then iterate to deliver a final solution – after which we gather continuous feedback to improve and maintain it. It looks like this:
[Image: the Double Diamond design model. Credit: https://www.ikangai.com/what-is-the-double-diamond-design-model/]
Applied to law, design thinking has historically been used to simplify contract templates, improve access to justice, and create more human-centered legal services. It gave rise to more visual contracts, plain language initiatives, and even redesigned court forms. But most of these efforts have focused on surface-level clarity. In an AI-enabled world, design thinking must evolve beyond the visual to address deeper operational and behavioral patterns.
Legal teams are not being asked to redesign documents – after all, AI tools can do that in seconds now and they will only get better with time. Instead, they are being asked to redesign how they work. AI introduces new variables, new risks, and new dependencies. Without a structured approach to understanding, testing, and refining these changes, adoption becomes guesswork. That’s where design thinking earns its keep.
Step 1: Frame the Right Problem
Too many legal AI projects begin with a solution already in hand. Someone is told they need to use AI. Someone goes out and buys a tool. A training session is scheduled. The team is expected to use it.
Design thinking starts earlier. It begins with reframing: what is the actual problem we’re trying to solve? “We want to adopt AI” is not a problem statement. The better question is: where is the pain? Is it the volume of contracts? The inconsistency of clause language? The reliance on overworked reviewers? Is it slow redlining? Missed fallback positions? Poor visibility into risk? Angry sales teams?
Spend real time here. Interview your team. Shadow a paralegal reviewing a low-risk NDA. Watch a senior counsel push back on a commercial clause. Don’t settle for vague pain points. Get granular. The more specific your diagnosis, the more likely your AI implementation will succeed.
Step 2: Map the Current State Without Judgment
Next, build a visual map of the current workflow, warts and all. How does a contract get from intake to signature? Where are the decision points? Who touches what, and when? Where does context get lost?
This isn’t an org chart or a policy doc. It’s a lived process map, preferably co-created with your team. Capture what tools are used, how handoffs happen, what gets escalated, and where friction arises. You’re not doing this to shame inefficiencies. You’re doing it to understand the ecosystem into which your AI tool will be inserted. Because that’s what implementation is: not introducing a new system, but intervening in an existing one. The map will reveal where AI can help and where it will be ignored unless you change the surrounding behaviors.
Step 3: Identify Opportunities for AI But Stay Anchored in Use Cases
Once you understand the current state, look for leverage points. Where could an AI review assistant meaningfully accelerate a task, simplify a step, or enhance human judgment?
Be precise. Don’t say “contract review.” Say “highlighting and redlining indemnity clauses in NDAs above $100K.” Don’t say “drafting.” Say “generating fallback language for exclusivity provisions in partner agreements.”
This step is where many legal teams go astray. They leap from capability (what the tool can do) to mandate (everyone must use it). Instead, tie AI applications to clear, bounded scenarios. A good pilot is not comprehensive; it’s targeted.
Here’s how legal teams can actually do this:
- Start with a Contract You Already Hate
Don’t start with a blank sheet or a strategy session. Start with a real contract that your team reviews frequently and finds painful, one that generates repetitive edits, requires playbook lookups, or gets bottlenecked in approvals. Let’s say it’s your standard inbound SaaS agreement.
Print out five redlined versions from the past month. Sit down with the lawyers who worked on them and ask: What do you always change? What slows you down? What do you flag to commercial every single time?
This isn’t about mapping out the contract lifecycle. It’s about surfacing micro-decisions and friction points. “I always check whether the limitation of liability carves out indirect loss.” “I search for the governing law clause to make sure it’s Delaware.” “We waste 30 minutes every time the data processing clause is missing.”
You now have candidate use cases. They’re not hypothetical – they are lived.
- Narrow to One Use Case with Real Impact
Pick one of those friction points. Make it small and surgical. For instance: “Identify and redline the indemnity clause in any inbound SaaS contract to match our fallback position.”
Now turn that into a scenario: When the legal team receives an inbound SaaS contract from a vendor over $50,000, the AI assistant should scan for the indemnity clause, compare it to our approved language, and either suggest a fallback or flag issues for legal review.
You’ve just scoped a use case that is:
- Common enough to matter
- Constrained enough to test
- Measurable in terms of speed and consistency
Avoid trying to solve for “contract review” or “red flag spotting.” That’s too abstract. Anchoring the use case in a clause + context + threshold keeps the problem solvable.
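To make that clause + context + threshold framing concrete, here is a minimal sketch of what the scoped use case might look like written down as structured data. Everything in it – the field names, the $50K threshold, the fallback action – is illustrative, not tied to any particular tool.

```python
# A sketch of the "clause + context + threshold" framing as data.
# All names, values, and actions are illustrative, not a vendor API.
from dataclasses import dataclass

@dataclass
class UseCase:
    clause: str          # the clause the assistant targets
    contract_type: str   # the context it applies to
    min_value: float     # the threshold that puts a contract in scope
    action: str          # what the assistant should do when triggered

INDEMNITY_REDLINE = UseCase(
    clause="indemnity",
    contract_type="inbound SaaS agreement",
    min_value=50_000,
    action="compare to approved fallback; suggest edit or flag for review",
)

def in_scope(contract_type: str, value: float, use_case: UseCase) -> bool:
    """Return True if this contract falls inside the piloted use case."""
    return contract_type == use_case.contract_type and value >= use_case.min_value

# A $75K inbound SaaS agreement is in scope; a $10K NDA is not.
print(in_scope("inbound SaaS agreement", 75_000, INDEMNITY_REDLINE))  # True
print(in_scope("NDA", 10_000, INDEMNITY_REDLINE))                     # False
```

The value of writing it down this way is not the code itself, it’s the discipline: if you cannot fill in all four fields, the use case is still too abstract to pilot.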
Step 4: Prototype the Human-AI Interaction in Context
With one or two high-impact use cases defined, you can now prototype. Not in a sandbox but in the real tools your lawyers use: Microsoft Word, Outlook, your contract lifecycle management platform.
This is not about testing accuracy alone. It’s about testing interaction. If the tool is right but the workflow is wrong, adoption will fail.
Involve the users in this process. Invite feedback not just on what the tool outputs, but on how it fits into their flow. Does it feel helpful or intrusive? Does it save time or create new tasks?
Here’s what that looks like in real life:
1. Test the Use Case in the Tool – But Don’t Go Live Yet
Open the AI tool in Word and run this exact use case through it. Start with just two contracts. Does the tool spot the clause? Does it understand the context? Does it apply your fallback correctly? Does it misfire?
Invite the reviewing lawyer to narrate their experience. Don’t ask “do you like it?” Ask “what would you still have to double-check?” or “what did you expect it to do but it didn’t?”
Take detailed notes. Not just about model accuracy, but about the friction in the experience. Did the AI suggestions feel like help or like extra work? Was the fallback logic visible? Did the tool feel like it was speaking their language?
2. Write Up the Micro-Workflow
Once you’ve validated the use case technically and behaviorally, document it like a protocol. Not a white paper but a short, 1-page explainer. Include:
- The trigger (e.g., “inbound SaaS agreement > $50K”)
- The AI actions (“spot indemnity clause, suggest fallback”)
- The user decision points (“accept, modify, or override fallback”)
- The playbook logic used
- What happens next (e.g., AI flags added to summary sheet)
Now you have a discrete AI-powered workflow that can be repeated, evaluated, and improved. It’s not a feature. It’s a behavior.
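If it helps, that same one-pager can live next to your playbook as a small, version-controlled record. Here is one possible sketch; every field name and reference in it is invented for illustration.

```python
# A sketch of the one-page micro-workflow as a version-controllable record.
# Field names, the playbook reference, and all values are illustrative only.
MICRO_WORKFLOW = {
    "name": "indemnity-fallback-v1",
    "trigger": "inbound SaaS agreement > $50K",
    "ai_actions": [
        "locate the indemnity clause",
        "compare it against approved fallback language",
        "suggest the fallback or flag the deviation for legal review",
    ],
    "user_decisions": ["accept", "modify", "override"],
    "playbook_logic": "indemnity fallback position (hypothetical playbook entry)",
    "next_step": "AI flags are added to the contract summary sheet",
}

def render_one_pager(workflow: dict) -> str:
    """Render the record as the short explainer the team actually reads."""
    lines = [f"Workflow: {workflow['name']}", f"Trigger: {workflow['trigger']}"]
    lines += [f"AI action: {action}" for action in workflow["ai_actions"]]
    lines.append("User decides: " + " / ".join(workflow["user_decisions"]))
    lines.append(f"Playbook logic: {workflow['playbook_logic']}")
    lines.append(f"Then: {workflow['next_step']}")
    return "\n".join(lines)

print(render_one_pager(MICRO_WORKFLOW))
```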
3. Pilot It. Publicly. But Only That One Thing
Tell the legal team you’re running a focused pilot on just this use case. Give it a name. Set a time window – two weeks, five contracts, whatever. Capture metrics: how long review takes, how many edits are made manually, how often the fallback is accepted as-is.
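The measurement side does not need to be sophisticated. A rough sketch of a pilot log – with entirely made-up review records – shows how little it takes to answer those three questions:

```python
# A sketch of a pilot log. The five review records below are made up.
from statistics import mean

pilot_reviews = [
    # (contract id, review minutes, manual edits, fallback accepted as-is)
    ("SaaS-001", 35, 2, True),
    ("SaaS-002", 50, 5, False),
    ("SaaS-003", 28, 1, True),
    ("SaaS-004", 42, 3, True),
    ("SaaS-005", 31, 0, True),
]

minutes = [r[1] for r in pilot_reviews]
edits = [r[2] for r in pilot_reviews]
accepted = [r[3] for r in pilot_reviews]

print(f"Average review time: {mean(minutes):.0f} minutes")
print(f"Average manual edits: {mean(edits):.1f}")
print(f"Fallback accepted as-is: {sum(accepted)}/{len(accepted)} contracts")
```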
Ask for feedback constantly. Adjust the prompt structure or fallback logic if needed. Once trust is established in this narrow slice, you can add more use cases: “governing law detection,” “missing DPA clause,” “warranty clause ambiguity.”
Each one is tested and refined like a product feature, not imposed like a new policy.
This is what staying anchored in use cases looks like in real life: Start small, stay precise, involve the users early, and treat implementation as behavioral choreography, not software rollout. It’s slow at first but it builds a foundation that scales – and more importantly, that sticks.
Step 5: Design Playbooks and Guardrails as a Service Layer
AI in legal is only as good as the rules it follows. And those rules – playbooks, fallback positions, clause preferences – must be designed with care.
Here, legal design comes into its own. Translate complex policy into structured, understandable logic. Avoid jargon. Tie rules to real-world examples. Make them discoverable, editable, and version-controlled. And most importantly, embed them into the tool’s interface in a way that feels like guidance, not bureaucracy.
This is where you avoid the “black box” problem. Your lawyers don’t need to see the code. But they do need to understand why the AI is recommending a particular clause edit and when they’re allowed to disagree.
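One way to do that is to store each playbook position together with its rationale, and surface that rationale next to every AI suggestion. A minimal sketch follows, with the positions, dates, and escalation rule all invented for illustration:

```python
# A sketch of a playbook rule as structured, explainable logic.
# The positions, dates, and escalation rule are invented for illustration.
LIABILITY_CAP_RULE = {
    "clause": "limitation of liability",
    "preferred_position": "cap at 12 months of fees, excluding indirect loss",
    "fallback_position": "cap at 24 months of fees if the counterparty rejects 12",
    "escalate_if": "the counterparty demands uncapped liability",
    "rationale": "keeps exposure proportionate to annual contract value",
    "last_reviewed": "2025-Q2",  # version-controlled alongside the playbook itself
}

def explain(rule: dict) -> str:
    """What the lawyer sees next to an AI suggestion: the why, not the code."""
    return (
        f"Suggested because our playbook prefers '{rule['preferred_position']}' "
        f"for the {rule['clause']} clause (rationale: {rule['rationale']}). "
        f"You can override this; escalate if {rule['escalate_if']}."
    )

print(explain(LIABILITY_CAP_RULE))
```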
Step 6: Build in a Feedback Loop from Day One
Implementation does not end at rollout. In fact, that’s when the real design work begins.
Track how the tool is used – or not used. Set up regular user check-ins. Gather feedback in structured ways: surveys, Slack channels, quick debriefs. Use this data not just to improve the model, but to improve the experience. If a feature isn’t used, don’t assume the user is lazy. Assume the design is flawed. Iterate accordingly.
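Even the usage data can stay deliberately simple. The sketch below – with invented event names and an arbitrary 30% uptake threshold – captures the mindset: a suggestion that is repeatedly shown but never acted on is read as a design signal, not a user failure.

```python
# A sketch of a simple usage log that separates "shown" from "acted on".
# Event names and the 30% uptake threshold are illustrative assumptions.
from collections import Counter

events = [
    ("indemnity_fallback", "shown"), ("indemnity_fallback", "accepted"),
    ("indemnity_fallback", "shown"), ("indemnity_fallback", "accepted"),
    ("governing_law_check", "shown"),
    ("governing_law_check", "shown"),
    ("governing_law_check", "shown"),  # shown three times, never acted on
]

shown = Counter(feature for feature, event in events if event == "shown")
acted = Counter(feature for feature, event in events if event == "accepted")

for feature, n_shown in shown.items():
    uptake = acted[feature] / n_shown
    verdict = "review the design" if uptake < 0.3 else "healthy"
    print(f"{feature}: {acted[feature]}/{n_shown} acted on ({uptake:.0%}) -> {verdict}")
```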
Step 7: Expand Use Only After You’ve Established Trust
Legal culture does not scale well with mandates. If you push a half-baked AI tool across a department before it’s trusted by the early adopters, you will create a quiet rebellion.
Instead, scale success. Once a few use cases are working, document the impact. Show how many hours were saved, how much faster redlines moved, how consistency improved. Let lawyers hear it from their peers, not just the project lead.
Then expand gradually – contract type by contract type, team by team. Design thinking is not just about speed. It’s about sustainability.
What Happens When We Do This Well
When legal design thinking is taken seriously, when it goes beyond sticky notes and flowcharts, it becomes a method for organizational learning. It lets legal teams adapt, not just react. It makes space for skepticism, refines messy workflows, and uncovers where judgment and automation can truly coexist.
AI won’t transform legal teams. But legal teams can transform themselves if they treat implementation as a design challenge, not just a technology one.
And that’s where the real value lies: not in the sophistication of the tools, but in the sophistication of the systems we build around them. Systems that center people, protect judgment, and evolve as we learn.
That’s not optional – it’s the work.