Lawyers Are Using AI. Their Firms Have No Idea What to Do With That.
Here's what's happening inside law firms right now.
Associates are using AI tools to draft correspondence, summarize documents, and conduct research.
Partners are using them for client prep, business development, and analysis.
It's happening daily.
And in most firms, there is no policy governing it.
No standards. No training. No accountability. No one has formally decided what's allowed, what isn't, and who's responsible when something goes wrong.
This isn't a technology problem.
It's a leadership failure.
The Data Is Not Subtle
Personal AI usage among legal professionals reached 31% in 2024 — up from 27% the prior year — while firm-wide adoption sat at just 21%.
That gap — lawyers using AI faster than their firms are governing it — is where the risk lives.
A separate survey of 800 U.S. attorneys found an even starker divide: 81% of in-house counsel report using AI for legal work, compared to just 55% of law firm attorneys.
The lawyers are moving.
The firms are not.
"We're Being Careful" Is Not a Strategy
When managing partners are asked about AI governance, the most common response sounds something like this:
"We're watching it carefully." "We're not ready to make a firm-wide decision yet." "We want to get it right before we roll something out."
That sounds measured.
It isn't.
Because while leadership is "being careful," the team is already using AI — just without guardrails, without standards, and without any accountability structure.
Heading into 2026, associates' use of unvetted, consumer-grade AI tools to meet deadlines is becoming a leading source of governance risk. For partners, the liability is no longer just legal error; it's governance failure.
Waiting to act on AI governance doesn't protect the firm.
It just means the exposure is happening in the dark.
This Is a Structural Accountability Problem
The reason most firms haven't built AI governance isn't that they don't care about it.
It's that no one owns it.
There's no person — and no role — responsible for:
defining what AI can and can't be used for
setting standards for how output gets reviewed
training the team on responsible use
monitoring for compliance
updating the policy as the tools evolve
Without ownership, nothing gets built.
This is the same dynamic that produces every other recurring operational gap in law firms.
Undefined ownership means undefined accountability.
And undefined accountability means the problem keeps going unsolved.
The Competitive Pressure Is Already Here
The firms that are building AI governance now aren't doing it out of caution.
They're doing it because it's becoming a differentiator.
In-house legal teams are increasingly likely to assign work to firms that demonstrate AI proficiency — and to pull work in-house from firms that don't.
That's not a future concern.
That's a current one.
Governance must be documented, systematic, and enforceable; informal policies and ad hoc experimentation no longer suffice.
The firms that can demonstrate responsible AI use — to clients, to insurers, and to prospective talent — will have a structural advantage over those still "watching carefully."
What Governance Actually Looks Like
Building AI governance doesn't require a technology overhaul.
It requires operational clarity.
Specifically:
A clear policy that defines what AI can be used for, what it can't, and what human review is required before output leaves the firm.
Assigned ownership — someone whose job it is to maintain, update, and enforce the policy.
Training that is specific and practical. Not a one-hour CLE webinar. Actual guidance on what responsible use looks like inside this firm's workflows.
A feedback loop so that edge cases, errors, and questions surface to the right person rather than getting quietly buried.
None of that is complicated.
All of it requires someone to decide it's a priority and build it.
Controversial Truth
Most law firms don't have an AI problem.
They have a leadership problem.
The technology is already inside the firm. The team is already using it. The exposure already exists.
The only question is whether leadership is going to build a structure around it — or keep waiting for a better time that isn't coming.
Caution without action isn't risk management.
It's just a slower version of the same risk.
If your firm is navigating AI adoption without a clear governance structure, you're not alone — but you are exposed.
I help law firms build the operational frameworks that make AI adoption responsible, consistent, and defensible — so the tools your team is already using become an asset rather than a liability.