AI Risk · Governance

The Hidden Risk in Legal AI Isn't Hallucinations — It's Informal Use

Every AI discussion in legal circles eventually turns to hallucinations. That focus is not wrong. But it is incomplete — and for most firms, the graver risk is already inside the building.

Ask any attorney what concerns them about AI and the answer, reliably, involves fabricated citations. The image is vivid and the stakes are obvious: a filing referencing a case that does not exist, discovered moments before oral argument. It has happened. It has made national news. It has produced sanctions and embarrassment.

But here is what the hallucination conversation obscures: that scenario is detectable. An attorney who reviews their work — who treats AI output as a draft rather than a product — catches it. The risk is real, but it is manageable through ordinary professional diligence.

The risk that is harder to see, harder to manage, and far more common inside law firms right now is something else entirely: attorneys and staff using AI tools without policy, without oversight, and without the firm even knowing it is happening. That is informal use. And it is where most firms are actually exposed.

The Concern Most Firms Have

AI produces confident, plausible, and completely fabricated legal authority. A brief gets filed. A partner gets sanctioned. The firm makes the news.

The Risk Most Firms Actually Face

Staff and attorneys use AI tools at will, with no policy governing what gets submitted, no supervision of what gets used, and no record that it happened.

Where Informal Use Appears

Informal AI use does not look like recklessness. It looks like efficiency. An associate drafts a client email with a consumer AI tool because it is faster. A paralegal runs a document through a summarization tool to prepare for a meeting. A staff member uses a free AI product to draft intake questions. A senior attorney uses an AI research tool before verifying the results — and the verification step gradually gets shorter.

None of this feels like a policy violation. In the absence of a policy, technically, it is not. But each of these scenarios creates exposure the firm has not evaluated and cannot defend against.

  • Drafting client communications: Consumer AI tools typically offer no confidentiality protections. When client names, matter details, or strategic context are submitted to a free tool, that information may be retained and used in ways the firm's engagement agreements do not contemplate and the client has not consented to.
  • Summarizing documents: Document summarization is one of AI's more reliable capabilities — and one of the first places informal use takes hold. Without a defined review standard, summaries come to be relied on as authoritative. The errors are compressions, omissions, and reframings that an unsupervised reader may not catch.
  • Research shortcuts: The problem is not using AI for research. The problem is using it without knowing the tool's limitations, or allowing AI-generated research to substitute for independent verification. When this happens informally, there is no record of what was checked, what was not, and who was responsible.
  • Intake and client-facing processes: AI-assisted intake raises both data and quality questions. What information is being collected, where is it going, and who reviewed whether the questions are appropriate for the jurisdiction and matter type? Informal use means these questions were never asked.
"Informal use does not look like recklessness. It looks like efficiency. That is what makes it difficult to address — and difficult to detect."

What Responsible Firms Do Differently

Firms with well-designed AI governance do not prohibit AI. They do something more disciplined: they decide in advance which tools are approved for which purposes, establish who is responsible for reviewing outputs, and create a record of those decisions that can be produced if challenged.

The distinguishing factor is not caution. It is intentionality. A firm that has thought carefully about AI use — and can articulate its reasoning — is in a fundamentally different position from one that simply has not gotten around to it yet.

01 — Written AI Use Policy

A policy specifying which tools are approved, under what circumstances, and with what review requirements — addressing data handling, supervision, and prohibited uses by role and matter type.

02 — Defined Review Standards

Explicit guidance on what "reviewing AI output" means in practice — what is being checked, by whom, and what documentation is required before the output moves forward.

03 — Role-Specific Training

Training differentiated by function. What an associate needs to know about AI in legal research differs from what a paralegal needs for document review, which differs from what a partner needs to supervise both.

04 — Approved Tool Registry

A maintained list of tools the firm has evaluated and approved, with the scope of that approval clearly defined — serving as both guidance and evidence of due diligence.

The Governance Opportunity

A well-designed AI governance framework does not just reduce risk — it creates a competitive and reputational asset. Firms that can articulate their AI policy to clients, demonstrate it to insurers, and present it to bar associations are ahead of a curve that is only getting steeper.

Several bar associations have already issued guidance indicating that firms should have written AI policies. Malpractice insurers are beginning to ask about them. Sophisticated clients are starting to request disclosure of AI use in their matters. The governance conversation is no longer optional. It is a question of timing.

Firms that move now build the framework on their own terms, with adequate time to do it thoughtfully. Firms that wait will build it reactively, under pressure, after something has already gone wrong. The risk is not the model. The risk is the firm that has not decided what to do with it.

JDAI helps law firms develop AI governance frameworks — from policy drafting and tool evaluation through attorney training and ongoing compliance support.

Schedule a Consultation
