Imagine this: late on a quiet Tuesday night, a person going through a difficult divorce sits at the kitchen table and types their circumstances into ChatGPT. They want to understand their options, anticipate their spouse's arguments, and perhaps glimpse how a judge might see things. It feels like research. It feels private. In a hazy way, it feels like self-defense. According to a February 2026 federal court decision, it may actually be a gift to the opposing side's lawyer.
The case that changed everything is United States v. Heppner, decided by the U.S. District Court for the Southern District of New York. The defendant, Bradley Heppner, had hired a real attorney with a bar card, but he had also been using Claude, an AI platform, to analyze the facts of his case and produce strategy memos. Some of what he fed Claude, he discussed with his lawyer. Later, he argued that those materials should be protected. The court said no, clearly and unambiguously. The logic was simple: AI platforms are third parties, disclosing information to them breaks the confidentiality that privilege requires, and no amount of later attorney involvement can restore it.
| Category | Details |
|---|---|
| Landmark Case | United States v. Heppner, No. 1:25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026) |
| Court | U.S. District Court for the Southern District of New York |
| What the Court Ruled | Conversations with public AI platforms (ChatGPT, Claude, Gemini, etc.) are NOT protected by attorney-client privilege or work product doctrine |
| Key Defendant | Bradley Heppner — used Claude AI to analyze his case and create strategy memos without his attorney’s knowledge |
| Why Privilege Was Lost | AI platforms are considered third parties; sharing information with them constitutes a waiver of confidentiality under established legal principles |
| Does Paying for AI Help? | No — even paid subscriptions (e.g., $25/month Claude or ChatGPT Plus) do not restore privilege if the platform’s terms allow data collection or sharing |
| Divorce-Specific Risk | AI chats about finances, hidden assets, custody strategy, or attorney advice can become discoverable evidence in divorce proceedings |
| Advisory Source | Ward and Smith, P.A. — April 1, 2026 client advisory urging all family law clients to stop using public AI tools for case-related matters |
| Safe Alternative | Attorney-directed use of closed, internal AI platforms — where counsel controls the workflow and confidentiality is maintained |
| Simple Rule | “If you would not want it read in court, do not enter it into an AI tool.” |
Family law attorneys have been watching this with a mixture of quiet alarm and grim satisfaction. Satisfaction, because it validates what many of them have been casually telling clients for months. Alarm, because the number of people using AI to handle their divorces, without any legal advice at all, has grown faster than most professionals in the field anticipated. Divorce is a raw subject. At three in the morning, people reach for whatever feels accessible, and when everything else feels chaotic, a chatbot that replies in calm, well-organized paragraphs is genuinely appealing. The problem is that, in legal terms, the chatbot is a stranger too.
The Heppner ruling matters especially because it extends beyond criminal cases. Its reasoning applies just as naturally to civil litigation, which is what fills family courts nationwide every day: equitable distribution hearings, custody disputes, alimony proceedings. Discovery in those cases is already sweeping. Courts are used to combing through bank statements, texts, and emails, and AI chat logs are simply the next category. Opposing counsel has begun asking for them, and AI-related questions now appear in depositions. A request for "all communications with AI-based tools, including prompts, inputs, and outputs" is no longer hypothetical. It is happening.

The way this is unfolding is particularly ironic. People turn to AI partly because legal counsel is expensive and partly because of the sense that you must understand the divorce process before you can trust anyone with it. That instinct isn't entirely wrong. But the court made clear that disclosing confidential information to a public platform, whether your financial records, your legal strategy, or your attorney's advice, does not merely put that information at risk. It actively destroys whatever protection the information already had. Privilege does not follow information wherever it travels. Confidentiality is lost the instant the information enters a system whose terms of service permit data collection or disclosure to outside parties, and sharing it with your attorney afterward does not bring the protection back.
It is hard to ignore that this decision arrived with so little public notice, given how many ordinary people it directly affects. Most divorcing individuals are not thinking about the work product doctrine or the Southern District of New York. They are thinking about their next move, their mortgage, and their children. The legal community is still working out how broadly the decision will be applied and whether its boundaries will sharpen in later cases. But the practical rule that family law firms have begun posting on their websites, and it is worth reading slowly, is already clear enough: if you wouldn't want it read aloud in court, don't type it into a chatbot. Full stop.
