
The Three Questions Every Law Firm Must Ask Before Using Any AI Tool

Before your firm adopts any AI system — for drafting, research, intake, or operations — three threshold questions determine whether you are ready. Most firms skip them. That is where risk begins.

Law firms are under real pressure to adopt AI. Clients expect efficiency. Competitors are moving. Vendors are selling. And yet the pace of adoption has outrun the pace of thinking.

The question most firms ask is: Which AI tool should we use? That is the wrong starting point. Before evaluating any specific product, three foundational questions must be answered — about retention, supervision, and workflow. Firms that skip this step are not just taking on compliance risk. They are building on an unstable foundation that will eventually require expensive correction.

What follows is the evaluation framework we use with every client before a single tool is selected.

Question One

The Retention Question: What does this tool do with your data?

  • Does the tool store inputs — including client names, matter details, and legal strategy?
  • Who can access what is stored — the vendor, their partners, government actors?
  • Is training on your inputs permitted under the default terms of service?

This is the question most firms treat as a checkbox. It is not. The retention question goes to the heart of confidentiality obligations under Rule 1.6. When a firm attorney inputs a client's name, matter context, or strategic concern into an AI system, that information may — under default settings — be stored, reviewed, and used to improve the model.

The vendor's privacy policy governs, not the attorney's intent. And most attorneys have not read it.

The practical inquiry is three-layered. Is data retained at all? (Many enterprise-tier products offer zero-retention options.) Who within the vendor's organization can access inputs? (Access is often broader than firms assume.) Do the default terms of service permit training on submitted content? (Many consumer-facing products do.)

The answer to this question determines not just which tool a firm can use, but how — and with what client information. A tool with no acceptable data handling policy is not an option regardless of its capabilities.

"The vendor's privacy policy governs, not the attorney's intent. And most attorneys have not read it."
Question Two

The Supervision Question: Who reviews what the AI produces?

  • Who is responsible for reviewing AI-generated outputs before they are used or transmitted?
  • Which categories of output require mandatory human validation?
  • In what matters should AI not be used at all, regardless of capability?

The competence obligation under Rule 1.1 has always required attorneys to understand the tools they use. ABA Formal Opinion 512 confirmed that this extends to AI: attorneys must understand a system's limitations, verify its outputs, and not delegate judgment to it.

This question requires the firm to make concrete, in-advance decisions: When AI produces a research summary, who reviews it and how thoroughly? When a draft motion is generated, does a supervising attorney treat it as a starting point or a finished product? Are there matter types where the firm will not use AI in client-facing communications, regardless of what the tool can technically produce?

The supervision question also establishes accountability. If an AI-assisted document contains an error that reaches a client or a court, the question of who reviewed it will matter. Firms without a supervision policy have no answer. That is not a defensible position.

Question Three

The Workflow Question: What does this tool actually change?

  • Is this tool replacing attorney judgment, or supporting it? Can you articulate the difference in practice?
  • Does AI involvement in this workflow change the risk profile for the client — and has the client been informed?
  • Is the firm relying on this tool before its attorneys have developed baseline competence to evaluate its outputs?

This is the question that separates firms that are using AI thoughtfully from those that are simply using it. The workflow question asks whether adoption is improving legal work — or just accelerating it.

The distinction matters. An AI tool that helps an experienced attorney draft faster is additive. An AI tool that allows a less experienced attorney to skip the analytical work they have not yet done is a liability. The tool is the same. The workflow determines the outcome.

The workflow question also surfaces disclosure obligations. Depending on jurisdiction and the nature of AI involvement, clients may have a right to know. Several state bars have issued guidance suggesting that material AI involvement warrants disclosure. Firms should not wait for a mandate when candor with clients already counsels disclosure.

The Framework in Practice

These three questions are not a one-time intake exercise. They should be revisited each time a new tool is introduced, each time a tool updates its terms of service, and each time the firm considers expanding AI use into a new practice area or matter type.

Firms that work through this framework before adoption emerge with something more valuable than a compliant tool selection: they emerge with a governance posture. They know what they are doing, why they are doing it, and who is responsible for ensuring it is done correctly.

That posture is what clients — particularly sophisticated ones — are beginning to ask about. It is also what bar associations, insurers, and regulators will look for when something goes wrong. The time to establish it is before that happens.

JDAI works with law firms to build AI evaluation and governance frameworks — from initial tool assessment through policy development and attorney training.

