ABA Formal Opinion 512 lands at a moment when law firms are excited about generative AI’s speed—but nervous about its risks.
If you read lawyer discussions on Reddit (especially r/LawFirm and legaltech threads) and bar-adjacent forums, you’ll see the same themes repeating: “Can I paste a client email into ChatGPT?”, “Do I need to tell clients I used AI?”, “Who double-checks AI citations?”, and “What counts as ‘reasonable efforts’ to protect confidentiality?” Opinion 512 doesn’t ban generative AI; it clarifies how existing ethics rules apply—especially competence, confidentiality, supervision, communication, and fees.
Below is a practical breakdown of what the Opinion means in day-to-day operations, and a policy template you can adapt for your firm.
What ABA Formal Opinion 512 Requires in Practice
Generative AI use triggers competence and supervision duties, not just for the lawyer typing prompts but for the firm as a whole. In practical terms, ABA Formal Opinion 512 expects attorneys to understand (at a functional level) how the tools work and where they fail: hallucinated citations, fabricated quotes, incomplete research, hidden bias, and “confidently wrong” summaries. In forum discussions, lawyers often describe AI as “a first draft machine” or “a glorified autocomplete,” which is close to the right posture: you can use it to accelerate ideation and drafting, but you must verify every factual statement, citation, and legal proposition as if a new junior associate wrote it without sources.
Confidentiality is the most commonly debated point online, and the Opinion’s practical implication is straightforward: you can’t treat public, consumer-grade AI like a secure internal workbench. Firms need “reasonable efforts” to prevent inadvertent disclosure, which generally means assessing the vendor’s data retention, training usage, access controls, and security posture—and then limiting what information goes into the system.
The real-world concern you see in threads is that lawyers paste entire client fact patterns into a chat window during a rush. Opinion 512 pushes you to slow that reflex down: minimize inputs, anonymize when possible, and prefer enterprise tools (or on-premises or private deployments) with contractual protections.
Opinion 512 also sharpens the duties around client communication and fees. Lawyers on forums frequently ask, “Do I have to disclose AI use?” The Opinion doesn’t mandate universal disclosure in every matter, but it does emphasize that you must keep clients reasonably informed and avoid misleading them—especially where AI use is material to the representation, affects confidentiality risk, or changes the scope of work. On fees, the practical takeaway is blunt: you can’t bill a client for time you didn’t actually spend just because AI made you faster.
If your work shifts from drafting time to review/verification time, your billing narrative should reflect that reality, and your engagement terms should clarify what the client is paying for (outputs and professional judgment, not “hours of typing”).
Policy Template: AI Use, Review, and Client Notice
Start your firm policy by clearly defining approved use cases and prohibited inputs. A workable template (the kind people keep asking for in practice) separates low-risk tasks from high-risk ones.
Low-risk: generating internal checklists, drafting non-client-facing outlines, brainstorming deposition questions, improving plain-language readability, or converting attorney notes into a first draft—without including client-identifying information.
High-risk: anything involving confidential facts, privileged communications, or regulated data (health information, financial account numbers, information about minors). The policy should state: (1) which tools are approved, (2) who can approve new tools, and (3) how lawyers must anonymize or minimize inputs when AI is used.
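For firms with any in-house technical support, the “anonymize or minimize inputs” step can even be partially automated. The sketch below is a hypothetical illustration, not a real product or a substitute for lawyer review: it scrubs a few obvious identifier patterns (emails, SSNs, phone numbers) and known client names from text before that text is ever pasted into an external tool. Real confidentiality screening needs far more than these example regexes.

```python
import re

# Hypothetical illustration only: a minimal pre-submission scrubber.
# These patterns catch a few obvious identifiers; real PII detection
# requires broader coverage and human review before anything is sent out.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str, client_names: list[str]) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    for name in client_names:
        # Case-insensitive removal of known client names.
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    return text

prompt = "Summarize: Jane Roe (jane.roe@example.com, SSN 123-45-6789) disputes the lease."
print(scrub(prompt, ["Jane Roe"]))
```

Even a crude filter like this reinforces the policy habit: identifiers get stripped, and a lawyer still reviews what remains before it leaves the firm.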
Next, build a mandatory human review and verification workflow that is explicit enough to follow under deadline pressure. Reddit threads about “hallucinated cases” show the same failure mode: someone trusted AI to produce citations, and no one checked Westlaw or Lexis. Your policy should require: (a) independent citation verification in authoritative databases, (b) factual cross-checking against the record, (c) redlining by a supervising attorney for junior lawyers and staff, and (d) a log or notation in the file of what tool was used and what was verified. Also include a rule that AI outputs cannot be filed, served, or sent to a client unless a lawyer has reviewed them for accuracy, tone, and compliance with court rules and protective orders.
Finally, add a client notice and consent section that matches how clients actually react in the real world (some are enthusiastic; some are wary). A practical policy doesn’t over-disclose for trivial uses, but it does require disclosure when AI use is material—such as using AI to draft substantive advice, to analyze sensitive documents, or when a vendor’s terms create non-trivial confidentiality risk. Include sample engagement-letter language your firm can toggle on: one paragraph explaining that the firm may use approved AI tools to improve efficiency; that lawyers remain responsible for the work; that confidential information will not be shared with unapproved systems; and that the firm will obtain consent before using AI in ways that materially increase risk or change the cost structure.
This is the middle ground many practitioners on forums recommend: transparent enough to build trust, but not so alarmist that it suggests the lawyer is outsourcing judgment to a machine.
ABA Formal Opinion 512 is less a “new AI rule” than a reminder that ethics duties don’t disappear when the workflow changes. Law firms that translate the Opinion into a practical policy—approved tools, input restrictions, verification steps, supervision, billing integrity, and a sensible client notice framework—are the ones most likely to avoid the horror stories circulating in online discussions.
If you treat generative AI as an assistant that accelerates drafts while your lawyers remain fully accountable for confidentiality, accuracy, and client communication, you can gain the efficiency benefits without gambling your license on a black box.