White Paper · 2026

The Trusted Machine

Midaire Counsel · Michael LeBrun, J.D.

How an artificial intelligence tool, properly deployed, makes legal practice faster, safer, and more defensible — for attorneys, for firms, and for the clients who depend on them.

Author: Michael LeBrun, J.D. — Managing Attorney, Midaire Counsel
Topics: Legal AI Ethics · Privilege · Malpractice Risk · Value Pricing
Published: April 2026 · 7 Sections · 6 Controls Framework
6 Risk Controls · 3 Implementation Tiers · 2 Key Cases Analyzed

Executive Summary

The Legal Profession Is at an Inflection Point

The overwhelming majority of attorneys fall into one of two categories: those who are using AI without adequate safeguards, and those who are refusing to use it at all. Both positions carry significant risk.

Attorneys using consumer AI platforms without controls risk sanctions, bar complaints, and malpractice exposure. The cases are already in the reporters. In 2023, Mata v. Avianca became the profession's foundational cautionary tale. In 2026, United States v. Heppner answered the privilege question attorneys feared most — and the answer was "No." But Heppner is not the whole story.

Attorneys who refuse to engage face a different but equally serious problem: a widening competence gap, rising costs relative to AI-augmented competitors, and a growing body of ethics guidance suggesting that technological competence is no longer optional.

"The attorney who refuses to engage with AI today is not protecting their clients. They are ceding advantage to opposing counsel who may already be using AI to research more thoroughly, prepare more completely, and review more systematically than any unaided human ever could."

This paper constructs a framework — The Trusted Machine — that addresses every identified failure mode through six documented controls. It is not aspirational. It is operational.

Seven Sections

I · The Ground Is Already Moving
The Luddite fallacy in law, the three fears driving avoidance, and why each is answerable.

II · The Legal Landscape Has Already Shifted
Mata, Heppner, and what the privilege analysis actually means for attorney AI use.

III · The Framework That Eliminates the Risk
Six controls that structurally address every known AI ethics failure mode.

IV · What This Means in Practice
A typical matter walkthrough and the connection to value-based pricing.

V · The Implementation Roadmap
Three tiers. Any firm size. Starting with no technology required.

VI & VII · Fear, Benefits, and the Honest Answer
Addressing the existential question directly — and what the profession stands to gain.
The Six Controls

A Framework Built on Documented Practice

Every known AI-related ethics violation in legal practice traces to one of two failure modes. The Trusted Machine structurally eliminates both through six controls that are documented, implemented, and available for review.

01 · Commercial API Only
All AI work conducted through the commercial Anthropic API — not consumer interfaces. Contractual DPA, seven-day auto-deletion, no model training on client data.

02 · Data Masking Protocol
Client-identifying information is stripped before every submission. Names, case numbers, and matter identifiers are replaced with bracketed placeholders. The substitution key is retained in the matter file.

03 · Attorney Review of Every Output
No AI output reaches a client matter without review, verification, and approval by a licensed attorney. Every factual statement verified. Every citation independently confirmed. AI output treated as a first draft from a junior associate — without exception.

04 · Matter-Level Audit Log
Every AI interaction is logged by date, matter reference, tool, task, reviewing attorney, and any material corrections. This log is the evidentiary foundation for defense in any disciplinary or malpractice proceeding.

05 · Written AI Use Policy
Approved tools, prohibited uses, data handling requirements, supervision standards, and client disclosure obligations — documented, reviewed annually, updated as ethics guidance evolves.

06 · Client Disclosure
Every engagement agreement includes AI disclosure language. Express written consent obtained for particularly sensitive matters. Most firms using AI are not telling their clients. This is both an ethical obligation and a competitive differentiator.
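Of the six controls, the masking step lends itself most readily to automation. The Python sketch below illustrates the idea behind Control 02 — replacing known identifiers with bracketed placeholders and retaining the substitution key — under stated assumptions: the function names, placeholder format, and example identifiers are hypothetical, not Midaire Counsel's actual tooling.

```python
import re

def mask_matter_text(text, identifiers):
    """Replace client-identifying strings with bracketed placeholders.

    identifiers: mapping of label -> literal string to strip, e.g.
    {"CLIENT-1": "Jane Doe", "CASE-1": "2:26-cv-00123"}.
    Returns (masked_text, substitution_key); per Control 02, the key
    stays in the matter file and is never sent to the AI service.
    """
    substitution_key = {}
    for label, literal in identifiers.items():
        placeholder = f"[{label}]"
        # re.escape makes case numbers with dots/dashes match literally.
        text = re.sub(re.escape(literal), placeholder, text)
        substitution_key[placeholder] = literal
    return text, substitution_key

def unmask(text, substitution_key):
    """Restore original identifiers from the retained key."""
    for placeholder, literal in substitution_key.items():
        text = text.replace(placeholder, literal)
    return text

masked, key = mask_matter_text(
    "Jane Doe filed case 2:26-cv-00123.",
    {"CLIENT-1": "Jane Doe", "CASE-1": "2:26-cv-00123"},
)
# masked is now "[CLIENT-1] filed case [CASE-1]."
```

Only the masked text would be submitted to the API; the substitution key remains inside the firm, so any AI output can be un-masked during attorney review.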

The machine can be trusted. The question is whether the attorney operating it has earned that trust. The framework in these pages is how you do that.

— The Trusted Machine™, Conclusion · Midaire Counsel 2026

Get the Full Paper

Read The Trusted Machine™

The complete white paper covers all seven sections, both key cases analyzed in full, the three-tier implementation roadmap, and the value pricing connection. Free download — name and email only.