Lawyers in NYC are increasingly tempted to hit “record,” let an AI tool transcribe the call, and move on with a cleaner file and better follow-up emails. NYC Bar Opinion 2025-6 doesn’t ban that workflow—but it does make clear that “set it and forget it” is not the standard. The opinion’s practical message is that if AI is touching client communications, you must take deliberate steps to protect confidentiality, maintain competence, and supervise the technology like any other nonlawyer assistance.
What Opinion 2025-6 Says You Must Do First
NYC Bar Opinion 2025-6, as summarized by practitioners and in continuing legal education materials, is best understood as a checklist of "before you use it" obligations rather than a set of magic words you say after the fact. The first requirement is competence: you need to understand, at a functional level, what the transcription/recording system is doing—where audio goes, how it's processed, what is stored, and what the vendor can do with it. In practice, that means reading the vendor's data handling terms, understanding retention settings, and knowing whether your audio is used to train models by default (and if so, how to opt out).
Second, the opinion’s thrust aligns with core confidentiality rules: you must take reasonable steps to prevent unauthorized disclosure of client information. With AI transcription, that tends to translate into picking tools that offer enterprise-grade controls (encryption in transit and at rest, admin access controls, SSO, audit logs, and clear retention/deletion policies). It also means configuring the tool correctly—because even a “secure” platform can become insecure if a lawyer leaves sharing links open to anyone with the URL, allows automatic forwarding to personal email, or permits team-wide access without need-to-know boundaries.
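The configuration checks above can be turned into a repeatable audit rather than a one-time glance at a settings page. The sketch below assumes a hypothetical vendor settings export in JSON form; the key names (`link_sharing`, `auto_forwarding`, `train_on_customer_data`, `retention_days`) are illustrative stand-ins, since every vendor's admin export differs:

```python
# Hypothetical settings audit -- key names are illustrative, not any
# specific vendor's actual export format. Adapt to your tool's admin export.
RISKY_DEFAULTS = {
    "link_sharing": "anyone_with_link",   # open share links
    "auto_forwarding": True,              # forwarding to external email
    "train_on_customer_data": True,       # audio used for model training
}

def audit_settings(settings: dict) -> list[str]:
    """Return findings for settings that conflict with a
    confidentiality-first configuration."""
    findings = []
    for key, risky_value in RISKY_DEFAULTS.items():
        if settings.get(key) == risky_value:
            findings.append(f"{key} is set to {risky_value!r} -- review before use")
    # Flag indefinite or very long retention windows (90 days is an
    # assumed policy threshold, not anything the opinion mandates).
    retention = settings.get("retention_days")
    if retention is None or retention > 90:
        findings.append("retention_days is unset or exceeds 90 days")
    return findings
```

Running `audit_settings` against a fresh export before the first client call, and again after any vendor update, is one concrete way to document the "reasonable steps" the confidentiality rules contemplate.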
Third, Opinion 2025-6 emphasizes supervision and communication with the client, including informed consent when needed. You’re responsible for the output and for the process, even if the vendor is doing the transcription. That typically means (a) supervising how the tool is used in your practice, (b) verifying accuracy before relying on a transcript for advice, filings, or summaries, and (c) considering whether the client should be told and asked for consent—especially where sensitive matters are involved, where the tool’s terms introduce meaningful confidentiality risk, or where local recording/consent laws could affect the call itself. Importantly, ethics rules and recording statutes are separate: even if your AI vendor is “ethical,” you still must comply with legal requirements on recording and notice.
Real-World AI Call Transcription: Reddit Pitfalls
If you look at real user threads on Reddit (particularly in communities focused on legal tech, privacy, and productivity tools), the most common pitfall is assuming “transcription” is just “typing,” when in reality it is often cloud processing plus indefinite storage. Users frequently discover—sometimes after the fact—that their audio files are retained for long periods, that transcripts remain searchable in a web dashboard, or that “improving the service” language effectively allows broad internal access. For a law office, those aren’t just annoying surprises; they can become confidentiality and supervision problems unless you’ve vetted the settings, locked down sharing, and negotiated or selected terms that match professional obligations.
Another recurring Reddit theme is accuracy and “hallucinated structure,” particularly when tools auto-generate summaries, issue lists, or action items. People report that speaker attribution is wrong, names are mangled, or the tool confidently inserts details that were never said (especially when audio quality is poor or multiple speakers talk over one another). In a legal context, that can lead to real harm: a misheard deadline, a misstated settlement term, or a “summary” that subtly changes the client’s account. Opinion 2025-6’s competence/supervision principles point to a practical rule: treat AI transcripts and summaries as drafts—use them to speed up review, not to replace it.
Finally, forum users often share “workflow hacks” that are red flags for a law practice—like forwarding transcripts to personal Gmail, storing call recordings in consumer cloud drives, using browser extensions that scrape meeting audio, or pasting sensitive excerpts into general-purpose chatbots to “clean up” the notes. Those habits are popular because they’re convenient, but they create uncontrolled copies and unclear access pathways. A conservative, Opinion-2025-6-friendly approach is to keep the entire pipeline inside managed systems: a vetted transcription provider (or on-prem option), firm-controlled storage, restricted access, and documented deletion/retention practices—plus a policy that prohibits staff from re-uploading client communications into unapproved AI tools.
NYC Bar Opinion 2025-6 doesn’t demand that you abandon AI call recording or transcription—it demands that you handle it like a lawyer, not like a casual power user. Do the upfront diligence, configure the tool to minimize exposure, supervise the outputs, and communicate with clients when the risk profile calls for it. If you build a controlled, well-documented workflow, AI transcription can be a legitimate productivity tool rather than an ethics headache waiting to happen.