It’s 7 PM. The M&A deal is supposed to close tomorrow morning. Then the buyer’s lawyer drops an email: “We want to renegotiate the escrow terms, indemnification carve-outs, and closing deliverables. Take it or leave it.”

At a normal firm, you’d call three associates back to the office for an all-nighter.

But Zack Shapiro’s firm has only two people. He opened Claude and dumped in the purchase agreement, the disclosure schedules, and that email.

Minutes later, Claude mapped every proposed change against the existing terms — and found something the buyer’s lawyers missed: two of their proposed carve-outs directly contradicted what they had already confirmed in the disclosure schedules. The third modification was even worse — it would create internal conflicts in the fundamental reps section, actively weakening the buyer’s own post-closing protections.

Their aggressive last-minute play had holes in it ╰(°▽°)⁠╯

Over the next few hours of email volleys, Zack fed every new message into Claude. It tracked how each concession would ripple through the rest of the agreement, flagged where giving ground would create risk elsewhere, and helped him build a response strategy — knowing where to hold and where to fold. By 11 PM, a complete set of counter-positions was ready, each one quoting the buyer’s own language back at them.

Next morning, deal closed. Client happy.

A three-associate team at a mid-sized firm would have pulled an all-nighter for this. He finished the core analysis in two hours.

So — great story. But how did he actually do it? How does a two-person firm go toe-to-toe with hundred-lawyer operations?

The answer: he encoded ten years of legal experience into an AI.

Clawd Clawd’s inner monologue:

Let me help you feel how terrifying “buyer’s lawyer drops a bomb at 7 PM” actually is. Imagine getting paged at midnight, and when you check — it’s not one bug. It’s “the entire production database schema needs to change, and the CEO’s demo is tomorrow morning.” Except in the lawyer version, if you mess up, your client might lose millions of dollars (╯°□°)⁠╯


Templates Are the Menu, Judgment Is the Chef

The market is full of legal-specific AI products: Harvey, Spellbook, CoCounsel, Luminance. They all share the same pitch — lawyers need AI built specifically for law.

Zack tried almost all of them. His verdict? For a small firm, a properly configured general-purpose AI is stronger. And it’s not close.

Why? These specialized products are wrappers — they run the same foundation models underneath. Their marketing sounds great: “We’ll customize the AI to your firm’s playbook, train it on your templates, build workflows around your clause library.”

But this pitch gets something fundamentally wrong: a template library is not a competitive advantage.

Think about it — within the same practice area, every competent firm has roughly the same templates. NDAs, stock purchase agreements, employment offer letters. These are like convenience store rice balls — the recipe at Family Mart and 7-Eleven is basically the same. What separates a great lawyer from an average one was never the template. It’s how the lawyer uses the template: spotting the trap buried in Section 14(c), judging which indemnification battles are worth fighting, drafting an advice letter so the client actually understands the risk without panicking.

That’s judgment. And judgment doesn’t live at the firm level. It lives at the individual level.

Clawd Clawd can’t help but say:

In engineering terms: template = boilerplate code, judgment = architectural decisions. You can find a thousand REST API templates on GitHub that all look the same, but the gap between a senior and junior engineer’s system shows up in architecture choices, error handling strategy, and those “I’ve been burned before so I know to watch out here” instincts. Legal AI wrappers are just like those “AI-generated CRUD app” tools — same underlying capability, but forcing a UI layer on top that limits what you can actually do ┐( ̄ヘ ̄)┌

So the real leverage isn’t which template the AI starts from. It’s the instructions that tell it how to think: what to look for, what to flag, how to weigh trade-offs, what format to output, what tone to use with the client. Those instructions encode an individual lawyer’s judgment — not the firm’s template library.

And that’s exactly what the Claude Skills system does.

But wait — there’s an even more fundamental gap: Claude can write code.

This sounds totally irrelevant to law, until you think about what it means — Claude can directly manipulate the software lawyers already use. Every lawyer has wasted countless hours in Word formatting hell. Paragraph numbers breaking when you paste, styles refusing to cooperate, tracked changes corrupting across versions, cross-references going stale, manually proofreading every period and comma for Bluebook citation formats.

These aren’t legal problems. They’re software problems. And Claude solves software problems by — writing software.

When Zack told Claude to apply tracked changes, Claude didn’t use a plugin or macro. It opened the .docx file directly at the XML level, wrote the exact markup Microsoft Word expects, signed his name to it, and preserved every formatting detail. The result was indistinguishable from expert manual work, but took a fraction of the time.
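For the engineers in the room, here is roughly what that XML-level move means. A .docx file is a ZIP archive whose text lives in word/document.xml, and Word renders a `<w:ins>` element as a tracked insertion attributed to its `w:author`. The sketch below is a minimal illustration of the mechanism, not Zack's actual workflow; the author name, the marker-based splice strategy, and the helper names are all assumptions.

```python
# Minimal sketch: splice a tracked insertion into a .docx at the XML level.
# A .docx is a ZIP archive; the text lives in word/document.xml, and Word
# shows a <w:ins> element as a tracked change. Illustrative only.
import zipfile

def tracked_insertion(author: str, date: str, text: str, rev_id: int = 1) -> str:
    """Build the WordprocessingML markup Word reads as a tracked insertion."""
    return (
        f'<w:ins w:id="{rev_id}" w:author="{author}" w:date="{date}">'
        f'<w:r><w:t xml:space="preserve">{text}</w:t></w:r>'
        f"</w:ins>"
    )

def add_tracked_change(src: str, dst: str, after_text: str,
                       new_text: str, author: str) -> None:
    """Copy src -> dst, inserting a tracked run after the run ending in after_text."""
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename == "word/document.xml":
                xml = data.decode("utf-8")
                ins = tracked_insertion(author, "2026-01-01T00:00:00Z", new_text)
                # Splice the tracked run right after the matching text run.
                xml = xml.replace(f"{after_text}</w:t></w:r>",
                                  f"{after_text}</w:t></w:r>{ins}", 1)
                data = xml.encode("utf-8")
            zout.writestr(item, data)
```

Opening the result in Word should show the inserted text as a normal reviewable tracked change under the given author name, which is why opposing counsel can't tell it apart from manual work.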

Dedicated legal AI products give you a chatbot that “talks about” the document. Claude is a system that reaches “into” the document and changes it. One can only tell you what’s wrong with the contract. The other tells you what’s wrong, fixes it, runs a redline, and drafts the cover email — all without you opening a single app.

Clawd Clawd murmurs:

If “opened the .docx file directly at the XML level” doesn’t hit you, let me translate: it’s like calling a plumber to fix a faucet and they crack open the wall to re-route the pipes. Most people think AI + documents means “change a few words for me.” Claude is operating on the underlying data structure. The nerd level of this move is so high that even I’m a little jealous (๑•̀ㅂ•́)و✧


Three Different Knives for Three Different Jobs

Claude Desktop has three modes. Knowing which knife to pick is the key to making the whole workflow sing.

Chat is the most intuitive — like talking face-to-face with a blazingly fast, deeply knowledgeable colleague. Analyze legal issues, brainstorm negotiation strategies, get first-pass thoughts on contract terms. Most lawyers who’ve tried ChatGPT only experienced this level. But it’s just the tip of the iceberg.

Cowork is the autonomous mode — and the real game-changer. You point Claude at a folder on your computer, say “redline this 40-page contract end to end” — and it just goes. Reads files, creates new ones, edits existing documents, figures out on its own how to get from A to B. It’s like handing a smart junior associate a task, going off to do other work, and coming back to find the finished product on your desk.

Code is the developer mode, with full terminal access. Most lawyers won’t need this day-to-day. But here’s a fantastic story — Zack has a condition that makes reading long documents difficult, so he used Code mode to have Claude build a CLI tool that converts legal documents to audio. The full pipeline: parsing Word docs and PDFs, turning “Section 4.2(b)(iii)” into natural speech, expanding abbreviations, chunking text, piping to an AI voice API, assembling the final audio file. He now “listens” to contracts during his commute.
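One concrete piece of that pipeline, sketched in Python: the step that rewrites citation shorthand like “Section 4.2(b)(iii)” into something a text-to-speech voice can say naturally. The rewrite rules and wording below are my own guesses at the idea, not Zack's actual tool.

```python
# Sketch of the "make it speakable" pass in a contracts-to-audio pipeline.
# Rules are illustrative guesses, not the real tool's behavior.
import re

# Spoken forms for the roman-numeral subsections common in contracts.
ROMAN = {"i": "one", "ii": "two", "iii": "three", "iv": "four", "v": "five"}

def speakable(text: str) -> str:
    """Expand 'Section 4.2(b)(iii)'-style references into natural speech."""
    def expand(m: re.Match) -> str:
        # "4.2" -> "4 point 2"
        parts = [f"Section {m.group(1).replace('.', ' point ')}"]
        # "(b)(iii)" -> "b", "three"
        for sub in re.findall(r"\(([a-z]+|\d+)\)", m.group(0)):
            parts.append(ROMAN.get(sub, sub))
        return ", ".join(parts)

    text = re.sub(r"Section (\d+(?:\.\d+)*)(\([a-z0-9]+\))*", expand, text)
    # Common abbreviations that TTS engines read badly.
    return text.replace("e.g.", "for example").replace("i.e.", "that is")
```

The real pipeline would chunk the cleaned text and pipe it to a voice API; this is just the part that turns legalese notation into listenable English.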

Clawd Clawd’s honest take:

Cowork mode is basically Claude Code’s philosophy ported to the Desktop app — you give a direction, the agent explores, decides, and executes on its own. Same pattern ShroomDog uses with coding agents to translate gu-log articles (yes, the article you’re reading right now was made this way (¬‿¬)).

And that “listen to contracts” tool? This is exactly why general-purpose AI crushes vertical products. A vertical PM would never greenlight “turn contracts into podcasts” — it’s not on any “legal AI” roadmap. But for this specific user, it’s a game changer. Can you imagine a legal SaaS company in a product meeting going “let’s add a contract-to-podcast feature”? Never gonna happen. But Claude just… did it.


Encoding a Decade of Expertise into Skills

This is where the leverage truly explodes.

Anthropic published a guide on building custom skills for Claude — structured instruction files that teach Claude how to behave in specific contexts. Not prompts you type every time; persistent command sets that trigger automatically at the right moment.

But Zack didn’t read the guide cover to cover. He uploaded it to Claude and asked a smarter question: Based on our hundreds of conversations — covering contract drafting, client emails, document editing, legal research, and policy writing — which skills would have the biggest impact on my practice?

Beautiful move. Claude analyzed months of his work history and found the patterns: which tasks he repeated most, where friction was highest, where structured automation would save the most time. The skills it recommended weren’t generic “review contracts faster” fluff — they were things like “a contract review skill with four context-dependent modes, severity ratings, a missing-terms checklist, market-term benchmarking, and seamless handoff to a tracked-changes editing skill.”

After a few hours of refining details, he had six production-ready skills bundled into a plugin: contract review, tracked-changes editing, contract drafting, client communication, legal research, and policy drafting. Each one encodes years of accumulated professional judgment.
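For the curious, a skill is just a folder containing a SKILL.md file: YAML frontmatter with a name and a description of when to trigger, followed by plain-language instructions. The sketch below is a hypothetical reconstruction of what the contract review skill might look like, pieced together from the description above; none of it is Zack's actual file.

```markdown
---
name: contract-review
description: Use when the user uploads an agreement or redline for analysis.
  Review from the client's perspective with severity ratings and counter-language.
---

# Contract Review

## Pick a mode from context
First draft (our paper), first review (their paper), redline response,
or final pre-signing check.

## For every issue found
1. Assign severity: critical / high / medium / low.
2. Quote the exact clause language.
3. Propose specific counter-language.

## Missing-terms checklist
Limitation of liability, indemnification, IP ownership, data processing,
termination rights. Benchmark each against market terms for the deal type.

## Handoff
Once the user approves changes, hand off to the tracked-changes editing skill.
```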

The implication for firm management: this plugin is transferable. If he had 50 associates, he could install it on every machine. Every associate would instantly review contracts using his analytical framework, write communications in his tone, apply tracked changes in his preferred format. Knowledge that used to take years of mentorship to pass down is now an instruction file that applies from the very first draft.

Clawd Clawd’s friendly reminder:

“Encoding ten years of experience into an instruction file” — sounds like science fiction, but think about it: senior engineers write coding style guides, code review checklists, and architecture decision records that do essentially the same thing. The difference is those documents used to be for humans to read. Now they’re for AI to read — and the AI actually follows them every single time (◕‿◕) Unlike certain junior devs who never remember what code review said last time… (I’m not naming names.)


Three Tales from the Trenches

1. Tracked Changes Without Opening Word

Opposing counsel sends back a 40-page redlined contract. Zack uploads it: “Evaluate their changes from my client’s perspective.”

The contract review skill kicks in. Claude organizes every edit by severity, flags where risk is being shifted, finds tension between modified clauses, checks for standard provisions that should be there but aren’t, and produces a summary with counter-language for every issue.

Then Zack layers in his own judgment — Claude flagged a markup pattern, and he knows from experience what that usually signals. Claude generated three alternatives for a contentious clause, and he picked the one that accounts for the relationship dynamics and deal context. Things the AI can’t see, but a human can.

Once the decisions are made, Claude opens the Word file at the XML level, applies tracked changes under his name, and preserves every formatting detail. Opposing counsel opens it in Word and reviews normally. The client communication skill drafts the cover email.

From receiving the redline to having the response package ready: under an hour. Thirty minutes of that was his own thinking time.

2. Legal Research That Verifies Its Own Citations

A client needs to understand the regulatory landscape for a new product. The issue spans multiple agencies and overlapping frameworks.

The research skill directs Claude to investigate from every relevant angle simultaneously — securities analysis, state licensing requirements, banking regulations, consumer protection — rather than running them one by one. Multiple queries per sub-topic, cross-referencing sources, prioritizing primary authority (statutes, rules, agency guidance, case law) over secondary commentary.

Then comes the critical step: before delivery, the skill forces Claude to run a self-review. Every cited authority must be verified to actually say what the memo claims. Low-confidence items must be flagged. Cross-section contradictions must be caught. And hallucinated citations — the exact problem that got several lawyers sanctioned and on national news — must be specifically guarded against.
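If it helps to see the pattern, here is that self-review gate expressed as plain code. In the real skill this is a prose instruction Claude follows, not a program; the dataclass fields and rules below are illustrative only.

```python
# Illustrative sketch of a "nothing ships until verified" review gate.
from dataclasses import dataclass

@dataclass
class Citation:
    authority: str            # e.g. a statute, rule, or case name
    claim: str                # what the memo says this authority establishes
    verified: bool = False    # confirmed against the actual source text?
    confidence: str = "high"  # "high" | "medium" | "low"

def review_gate(citations: list[Citation]) -> list[str]:
    """Return the problems that must be resolved before the memo ships."""
    problems = []
    for c in citations:
        if not c.verified:
            problems.append(f"UNVERIFIED: {c.authority}: check the source text")
        if c.confidence == "low":
            problems.append(f"LOW CONFIDENCE: {c.authority}: flag for the reader")
    return problems
```

An empty list means the memo clears the gate; anything else goes back for another pass, which is exactly the step the sanctioned-lawyer stories were missing.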

Clawd Clawd’s inner monologue:

Those lawyers who filed fake AI citations were using tools without this validation layer. The problem was never the AI itself — it was AI without quality control. Same as code: you wouldn’t deploy AI-generated code without running tests, right? Legal documents follow the same logic. The self-review step is the legal version of CI/CD (⌐■_■)

Real talk though — when “lawyer sanctioned for AI hallucinations” hit the news, even I felt secondhand embarrassment as an AI. That’s not an AI problem. That’s a human shipping to production without code review.

3. Real-Time Contract Defense

Client calls in the morning: breach notice from a counterparty claiming violations of a master services agreement, threatening termination. 48-hour response window.

Zack uploads the contract, the notice letter, and three months of recent correspondence. Claude maps every factual allegation against the cited clauses and discovers: two of the four alleged breaches cite obligations that had been explicitly modified by a side letter — a side letter drafted by the counterparty’s own counsel. They clearly didn’t check their own amendments before firing off the notice.

While prepping the response, Zack stress-tests every draft paragraph through Claude — checking if any argument creates unintended consequences elsewhere in the agreement. Claude catches one: an argument he planned to use against the service-level metrics claim could be read as conceding ground on a payment dispute in Section 7. He rewrote that part.

This kind of real-time, clause-by-clause stress testing used to require another lawyer sitting next to you reviewing your work. Now it happens in the same conversation.


Privilege and Confidentiality: Not as Scary as You Think

Every lawyer’s first instinct when they hear “AI” is: “But what about attorney-client privilege?”

Fair concern. But let me reframe this — where do your client files live right now? Dropbox. Google Drive. Clio. Each one is a third-party cloud service. How did you evaluate those tools? You checked the vendor’s data handling, confirmed encryption, signed a DPA, and started using them.

AI tools fall under the exact same legal framework. ABA guidelines and state bar ethics opinions have already weighed in — AI tools are third-party technology vendors, and the agent/instrumentality exception applies. Your obligation is reasonable efforts to protect client data: disable model training, understand how data is processed, document your reasoning. Anthropic offers a zero-data-retention API and a commercial DPA. Same due diligence playbook as Dropbox.

Clawd Clawd’s honest take:

Real talk: if you already have your client’s confidential contracts sitting comfortably in Google Drive, and then you turn around and say “AI is a confidentiality risk” — the thing you should be worried about might not be the AI (¬‿¬)

Zack went even further — he had Claude help draft an AI usage clause for his engagement letters. Clients sign without blinking. Most already assume he’s using AI. They’re right.

And here’s the deeper twist: ethics rules in most jurisdictions now require technical competence. We’re approaching an interesting crossover point — not using AI is becoming the harder position to defend before the bar. When your peers are using tools to compress a 40-page risk analysis from six hours to one, insisting on pure manual work isn’t caution. It’s your client getting five fewer hours of value.


Impact on Staffing, Billing, and Judgment

A two-person firm handling an outsized workload — directly attributable to AI. Work that used to require hiring associates — first-pass document review, research memos, initial drafts, redline summaries, routine correspondence — is now done by Claude under his supervision.

Associates aren’t obsolete. But the economic threshold for hiring has moved. What you need them for has changed: judgment, client relationships, AI output supervision — not 2,000 hours of document production.

Billing models are shifting too. Some tasks show obvious time savings that get passed to the client. Others take the same time but yield dramatically deeper analysis. The point isn’t that every task takes less time. It’s that every hour of lawyer time produces more value.

His firm offers subscriptions alongside traditional hourly billing — flat monthly fee for ongoing counsel, contract review, compliance monitoring, routine governance. No meter running. Clients aren’t afraid to call. Revenue is predictable.

Clawd Clawd goes off on a tangent:

Subscription + AI is happening across every industry. When marginal cost approaches zero but output quality stays the same (or improves), flat monthly pricing is win-win. Same logic as SaaS moving from perpetual licenses to subscriptions — except now a solo firm can play the game. Connects right back to CP-85’s Steve Yegge piece and the $/hr formula: AI doesn’t make you cheaper, it makes your hourly output explode ( ̄▽ ̄)⁠/

But everything above creates a temptation: letting AI do too much. Stopping the checks.

Research consistently shows that people who push AI beyond its capabilities, or trust its output without questioning it, perform worse than those who don’t use AI at all.

Lawyers who win in this new world understand something fundamental: AI is not practicing law. You are practicing law. AI makes you faster, more thorough, more consistent. But the judgment — deciding what’s worth fighting for, reading between the lines, making a tough call that could go either way and putting your name on it — that’s yours.


Your Prompt Is Your Closing Argument

Most lawyers try AI by typing “review this contract,” getting mediocre output, and concluding that AI is useless for legal work.

Zack has heard this too many times. He says the problem was never the AI — most people just don’t know how to give instructions. You wouldn’t blame the court reporting system for a bad closing argument.

Compare:

Rookie version: “review this contract”

Zack’s version: “Review this master services agreement from the vendor’s perspective. Flag anywhere the customer is shifting risk beyond market standard for this type of transaction. Check for missing provisions including limitation of liability, IP ownership, data processing, and termination for convenience. Generate a severity-ranked summary with specific counter-language for every high-severity issue. Note that the vendor has limited leverage and wants to close, so focus on what’s worth fighting for vs. what we can gracefully concede.”

The rookie version gets something that needs heavy rewriting. Zack’s version produces usable work product on the first try.

But the gap isn’t just about word count. Zack’s prompt is packed with judgment — which risks are worth chasing, which clauses are decorative, what tone fits this client. Ten years of practice experience didn’t disappear. It moved from “doing it slowly yourself” to “teaching the AI how to do it.” And the point of skills is — you teach once, it triggers automatically every time.

Clawd Clawd’s inner voice:

This should hit home for engineers — swap “review this contract” with typing “fix bug” in your terminal, and swap Zack’s prompt with a properly written issue ticket that includes context, repro steps, and expected behavior. Same AI, same model, wildly different output quality. Garbage in, garbage out is a universal truth that transcends all industries (ง •̀_•́)ง


Back to That 7 PM Phone Call

The buyer’s lawyer dropped that aggressive email at 7 PM, thinking the pressure play would work.

Two hours later, Zack fired back — using the buyer’s own language. Not just responding to every demand, but surfacing contradictions their own team hadn’t noticed. Next morning, deal closed. Client happy. Buyer’s lawyer… probably less happy.

The story doesn’t end with “AI replaced a lawyer.” It ends with a lawyer who has ten years of skill finishing by 9 PM what others would have pulled an all-nighter for. And then he went home.

Ten years of judgment didn’t get replaced. It got strapped to an engine (๑•̀ㅂ•́)و✧