Jerome

Is Your Product Even Legal? What a Car Accident Taught Me About Product Design

7 min read · product-design · legal-compliance · UPL · AI-products · regulated-industries · first-principles · gstack

How It Started

I was in a car accident in California. The other driver was fully at fault, but I had no idea how the claims process worked. I ended up settling for far less than I should have gotten. I shared the experience on social media afterward, and two years later, people are still reaching out asking how to handle it.

That two-year-long demand signal felt worth building on. My core thesis: in simple car accident cases where fault is clear and the other party is liable, most of the work doesn't require a lawyer. What people need is process knowledge. Systematize that knowledge, use AI to guide users through the full workflow, and you have a "TurboTax for car accident claims."

Running a Product Review with gstack

Around this time, YC CEO Garry Tan open-sourced gstack, a toolkit for Claude Code with over twenty skills covering everything from product design to QA to launch. I decided to run the product through its /office-hours skill.

/office-hours simulates the YC office hours format: six forcing questions that push you to answer things like "who's using this," "how do they solve it today," and "what's your narrowest entry point." It won't nod along and say "great idea." It challenges your assumptions and surfaces the blind spots you didn't know you had.

The session produced two layers of insight.

The first was about product design itself. /office-hours pointed out that the AI chatbot format was wrong for the target user: car accident victims aren't tech-savvy people, and if they need to "figure out what to ask" before they can use the product, the product has already failed. The recommendation was to switch to a guided workflow, with AI embedded at each stage as contextual help rather than a general-purpose chat box. That insight directly changed the product's shape.
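A minimal sketch of what that shift looks like in structure. All stage names, copy, and helpers here are hypothetical, not the actual product: the point is that the claim process becomes an ordered list of stages, and the AI helper only answers within the current stage's scope instead of fielding open-ended chat.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One step in a guided claims workflow (illustrative only)."""
    name: str
    instructions: str  # static educational content shown to the user
    ai_context: str    # scoped prompt for the embedded helper, not open chat

# Hypothetical workflow: the user is walked through stages in order.
WORKFLOW = [
    Stage("gather_evidence",
          "Collect the police report, photos, and the other driver's insurance info.",
          "Answer questions about what evidence is typically needed after a collision."),
    Stage("open_claim",
          "File a third-party claim with the at-fault driver's insurer.",
          "Explain how third-party claims generally work. Do not estimate amounts."),
    Stage("review_offer",
          "Compare the insurer's offer against your documented costs.",
          "Explain settlement terminology. Do not advise accepting or rejecting."),
]

def contextual_prompt(stage: Stage, question: str) -> str:
    """Build a narrowly scoped prompt for the current stage only."""
    return f"{stage.ai_context}\nUser question: {question}"

print(contextual_prompt(WORKFLOW[0], "What should I photograph?"))
```

The design choice this encodes: the user never has to "figure out what to ask," because each stage tells them what to do and the AI is constrained to explaining that step.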

The second layer was more critical: it surfaced a legal risk I had never seriously thought about.

The conclusion: this product needs an attorney's sign-off before it can launch.

The Trap

The legal risk has a name: UPL, Unauthorized Practice of Law.

In the United States, a person or organization without a law license cannot provide legal advice about a specific person's specific situation. The line looks fuzzy, but the boundary is actually quite clear:

  • Teaching a user "what is California's comparative fault rule": legal. That's legal education.
  • Telling a user "based on your situation, you should claim $50,000": illegal. That's legal advice.
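That education-vs-advice boundary is concrete enough to gesture at in code. The sketch below is an illustrative heuristic only, with an invented pattern list; a real product would need attorney-designed rules, not a toy regex filter. But it shows the shape of a guardrail that flags draft AI responses which read like tailored conclusions rather than general information.

```python
import re

# Illustrative heuristic only: flags draft responses that look like
# tailored legal advice rather than general legal education.
ADVICE_PATTERNS = [
    r"\byou should (claim|demand|accept|reject|sue)\b",
    r"\bin your (case|situation)\b.*\$\d",
    r"\byour claim is worth\b",
]

def looks_like_advice(response: str) -> bool:
    """True if the response appears to cross from education into advice."""
    text = response.lower()
    return any(re.search(p, text) for p in ADVICE_PATTERNS)

# General rule explanation: education, passes.
assert not looks_like_advice(
    "California follows a pure comparative fault rule: damages are "
    "reduced by your percentage of fault.")
# Tailored dollar recommendation: advice, flagged.
assert looks_like_advice("Based on your situation, you should claim $50,000.")
```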

Stanford CodeX's analysis of a recent case introduced a clear framework called the Uncrossable Threshold: your product is either designed to provide general information, or it's designed to deliver tailored conclusions about a specific user's specific situation. Once you cross that line by design, a disclaimer won't save you.

My product, at the AI assistant layer, was landing on that boundary at nearly every touchpoint.

How common is this trap? Look at who's already fallen in.

DoNotPay, which called itself "the world's first robot lawyer," faced a California class action in 2024 and a $193,000 FTC fine in 2025, with a permanent ban on "robot lawyer" marketing. The root problem wasn't even the product's functionality. It was the marketing: "the world's first robot lawyer." That name turned every feature into evidence of UPL.

In March 2026, Nippon Life Insurance Co. sued OpenAI. A litigant had used ChatGPT to analyze her attorney's letters; ChatGPT told her she was "being gaslighted." She fired her attorney, treated ChatGPT as "co-counsel," and filed 21 motions in an already-closed case. The suit seeks $10M in punitive damages.

It's Not Just Law

After talking this through with friends, I realized the pattern exists across every licensed profession:

  • Law: you can't replace an attorney's advice, but you can do education.
  • Accounting: building an accounting tool for small businesses is fine; providing accounting services is not.
  • Healthcare: a health information app is legal; giving diagnostic recommendations based on symptoms crosses into practicing medicine without a license.

The pattern is consistent: the license requirements of regulated professions are the natural barrier for AI products. AI is fully capable of doing this work at the knowledge level, but a license is a statutory monopoly, and no amount of model capability changes that.

So how did TurboTax pull it off? Because Congress specifically created an unregulated lane for tax preparation that doesn't require a CPA license. The IRS allows anyone to act as a tax preparer and help people prepare their returns. Software generates the forms, users submit them. In the legal domain, no equivalent statutory exception exists.

Navigating Risk from First Principles

This adds a constraint at the very top of the product design stack.

If you're a founder or independent developer, before touching any product that involves a regulated profession, ask yourself one question:

Is your product replacing the work of someone who needs a license?

If the answer is yes, you have three paths.

Path one: shift from advice to tool or education. You can't tell users "here's what you should do," but you can teach them "here's what the law says" and give them tools to make their own judgment. LegalZoom's entire business model is built on this positioning: standardized legal document templates, guided questionnaires, and "We are not a law firm" on every page. That line isn't a footnote; it's the structural support for the product's legality. In 2015, LegalZoom won a UPL challenge from the North Carolina Bar, establishing a legal precedent for self-help legal tools. Our product took this path. The core principle: educate, don't adjudicate; guide, don't substitute.

Path two: partner with a license holder. The product itself doesn't make professional judgments, but a licensed professional backs the content. Our approach is an attorney engagement pipeline: attorneys review all content and sample AI responses before launch, sign off, and conduct monthly compliance audits after launch.
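The monthly-audit half of that pipeline is simple enough to sketch. Everything below is a hypothetical illustration, not the actual implementation: log every AI response, then draw a reproducible random sample for the reviewing attorney each month.

```python
import random

def sample_for_review(response_log: list[str], rate: float,
                      seed: int = 0) -> list[str]:
    """Draw a reproducible random sample (~rate fraction, at least one)
    of logged AI responses for attorney review."""
    rng = random.Random(seed)  # fixed seed so the audit set is reproducible
    k = max(1, round(len(response_log) * rate))
    return rng.sample(response_log, k)

# Hypothetical month of logged responses, audited at a 5% rate.
log = [f"response-{i}" for i in range(200)]
queue = sample_for_review(log, rate=0.05)
print(len(queue))  # 10 items in the attorney's review queue
```

The fixed seed is a deliberate choice: the sample an attorney signed off on can be regenerated later if the audit itself is ever questioned.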

Path three: get the license. If you genuinely want to offer legal advice as a service, build an entity that includes attorneys. The hardest path, but the most complete one.

The core takeaway: compliance isn't about adding a disclaimer before you launch. It's a Day 0 product design decision. A disclaimer doesn't change what your product fundamentally is. Your product is what it's designed to do.

A Final Thought

Many of the traps in product design are technical: wrong architecture, performance that doesn't hold up, UX that doesn't work. Those traps can be fixed.

Regulatory traps are different. They invalidate the product's right to exist. If you don't think this through at the design stage and only discover the core feature has a legal problem after the product is built, the question isn't "how do we fix it." It's "do we keep going at all."

My lesson: if I hadn't run a systematic review with gstack, I probably would have launched with all of these legal risks baked in. The value of a tool like this isn't just helping you write code. It's finding the things you don't know you don't know before you go all-in.

Compliance boundaries aren't obstacles to innovation. They're design constraints. Like the laws of physics for architecture, the license requirements of regulated professions are your "physics." There's still enormous space to build valuable products within those constraints. The key is knowing where the line is before you draw your first sketch.