AI Did Not Arrive in Your Practice by Permission
Why Governance, Risk, and Judgment Now Live Inside the Solo Lawyer
As lawyers begin methodically incorporating artificial intelligence into daily practice, many correctly understand it as a force multiplier rather than an existential threat. In solo and small-firm environments, however, that framing can obscure a more consequential shift.
AI is not entering legal practice through deliberate adoption decisions, policy rollouts, or formal approval processes. It is entering quietly, incrementally, and without supervision, embedded inside tools lawyers already use. That reality fundamentally changes where professional risk lives and how it manifests.
In Big Law and other large-firm environments, AI governance is typically treated as an institutional problem, addressed through committees, policies, and internal oversight. In small and solo law firms, which make up the vast majority of legal practice, that assumption breaks down immediately.
There is no compliance department to absorb uncertainty, no internal review structure to catch drift, and no secondary decision-maker to share responsibility. The lawyer is the user, the reviewer, the approver, and the person who ultimately bears the consequences. Governance does not disappear. It collapses inward.
AI Entered Practice Before Anyone Approved It
Most lawyers did not make a formal decision to adopt AI as a reasoning aid. It appeared instead inside tools they were already using. Microsoft Office 365 is the most visible example, where Copilot simply arrived through routine updates and became ubiquitous across email, documents, and calendars. The same pattern has followed with document storage platforms like Box, practice management systems such as Clio, and research platforms including Lexis AI. The list continues to expand as vendors layer AI functionality into existing products rather than offering it as a separate choice. By the time a lawyer consciously asks whether they are “using AI,” it is already embedded in nearly every application they touch on a daily or weekly basis.
This matters because most risk frameworks assume a moment of intentional adoption. In practice, AI enters upstream, before conscious evaluation, and begins shaping judgment long before any document is filed or argument is made.
From Resource-Dense Risk to Individual Exposure
In large, resource‑dense firm environments, governance failures are often distributed across multiple roles and layers. In solo and small‑firm practice, risk concentrates. The primary danger is not that an AI tool violates an external rule. It is that the lawyer has begun using AI without a conscious decision or a clear understanding of how that use affects client confidentiality, the lawyer’s nondelegable duty to supervise work product, and a range of related ethical and professional obligations that are triggered by delegation to software.
In small‑firm practice, this unexamined delegation is itself the governance failure, because it operates below the level of conscious judgment and therefore evades the safeguards lawyers ordinarily rely on to manage professional risk.
“Human in the Loop” as Self-Discipline
Much has been written about the importance of keeping a human in the loop, but the phrase is often left underspecified. In practice, the loop concerns two related activities: the adoption of AI into legal workflows and the review of AI-generated output once those tools are in use. In large, resource-dense firms, those functions may be separated across roles, committees, or review layers. In solo and small-firm practice, they collapse into the same person.
In that setting, “human in the loop” does not describe a system. It describes a discipline exercised by the individual lawyer. There are no formal checkpoints, sign-offs, or layered reviews. There is only the lawyer deciding whether to slow down, whether to interrogate an output, and whether to independently reconstruct the reasoning before relying on it.
Human-in-the-loop therefore becomes a matter of cognitive restraint rather than procedural design. The lawyer must decide when they are actively evaluating the technology’s contribution to their reasoning and when they are merely approving work that sounds plausible.
Why Inventory and Disclosure Models Fail
Some approaches to AI risk emphasize inventorying tools or disclosing usage. In small-firm practice, these approaches are largely symbolic. AI is embedded across the software stack, often invisibly. Attempting to catalog every instance misses the more important question.
The relevant inquiry is not where AI exists, but where it meaningfully shapes professional judgment. That influence usually occurs upstream, during issue spotting, narrative framing, and risk assessment. These moments rarely leave a record, yet they shape outcomes far more than any final draft.
Metadata, Hindsight, and the Shape of Liability
As AI use becomes normalized, expectations are already shifting. Courts are beginning to signal that routine AI use by lawyers is neither exceptional nor suspect. In January 2025, for example, the Illinois Supreme Court made clear that the use of AI by attorneys, judges, and court staff may be expected, should not be discouraged, and does not require disclosure in pleadings so long as legal and ethical standards are met.
That normalization, however, sharpens rather than softens professional risk. If AI use is assumed, the question will not be whether a lawyer relied on AI, but whether they did so competently. Liability exposure will increasingly arise from hindsight reconstruction.
The issue will not be that a tool malfunctioned, but that the lawyer failed to notice what a competent professional should have questioned, tested, or verified. The absence of visible error will not be a defense.
Over-Trust and Cognitive Drift
The most significant risk associated with AI is not hallucination. It is over-trust. When AI consistently produces serviceable work, verification habits erode. Independent reconstruction feels redundant. Intuition dulls. Errors, when they occur, are harder to diagnose because nothing appears to have gone wrong. The tool behaved as expected. The lawyer simply stopped asking certain questions.
This is not a character flaw. It is a predictable cognitive response to fluent automation.
Client Consent and the Limits of Disclosure
As AI becomes more visible in legal practice, some lawyers have asked whether client consent to the use of AI meaningfully mitigates risk. The intuition is understandable. If the client agrees to the use of AI tools, the lawyer may feel insulated from later criticism. In practice, that protection is limited.
Client consent may address expectations, but it does not alter the lawyer’s nondelegable duties. A client cannot waive the lawyer’s obligation to protect confidentiality, to supervise work product, or to exercise independent professional judgment. Disclosure may be appropriate in certain contexts, particularly where client data is exposed to third-party systems, but consent does not transform AI-generated work into something less the lawyer’s responsibility.
There is also a practical risk in over-disclosure. Framing AI use as exceptional or experimental may invite scrutiny without providing corresponding protection. Courts and regulators are unlikely to treat client consent as a substitute for competence. The more durable safeguards remain the ones discussed throughout this piece: disciplined review, independent verification, and conscious control over how AI is integrated into legal reasoning.
Governance as Cognitive Hygiene
For solo and small-firm lawyers, governance is best understood as cognitive hygiene. It is not about policies or approvals. It is about deliberately reintroducing friction where tools are designed to remove it. That may mean articulating a theory of the case before consulting AI, documenting why an output was accepted, or independently reconstructing an argument before relying on it.
These practices are not ethical gestures. They are defensive measures against a new category of professional risk.
Conclusion
AI will not replace lawyers, but it is already redefining competence. For solo and small-firm practitioners, the danger is not displacement. It is subtle alteration of judgment without conscious awareness. The central governance question is no longer how to regulate the technology, but how to ensure that the lawyer remains the author of their own reasoning.
About the Author
Patrick T. Barone is a nationally recognized criminal defense attorney and a leading authority on DUI defense. He is the founder of Barone Defense Firm and writes extensively on the responsible, practical use of artificial intelligence in criminal defense, forensic analysis, and litigation strategy. To learn more about how AI-informed tools are being integrated into real-world criminal defense practice, visit Barone Defense Firm.