United States v. Heppner - Harvard Law Review


Fitting new technology to old doctrine is a perennial challenge for courts. Today, that technology is generative artificial intelligence (AI), and those doctrines now include attorney-client privilege and the work product doctrine. Recently, in United States v. Heppner, Judge Rakoff of the Southern District of New York — addressing “a question of first impression nationwide” — ruled that written exchanges between a criminal defendant and generative AI platform Claude were not protected by attorney-client privilege or the work product doctrine. As written, the court’s opinion veers toward categorically excluding a client’s use of generative AI from attorney-client privilege. A more fact-dependent analysis — and careful consideration of the role of AI within the attorney-client relationship — would suggest that such use should at least sometimes qualify for privilege.1

On October 28, 2025, a grand jury returned an indictment charging Bradley Heppner with fraud and false-statement offenses. In early November, Heppner was arrested. FBI agents executed a search warrant at his home and seized electronic devices containing approximately thirty-one documents (“AI documents”) of exchanges between Heppner and Claude, a generative AI assistant developed by Anthropic.

According to Heppner’s counsel, after Heppner “received a grand jury subpoena” and “it was clear . . . that [he] was the target of this investigation,” he used Claude “in anticipation of a potential indictment.” Without direction from counsel, Heppner input information he had learned from counsel into Claude and “prepared reports that outlined defense strategy [and] what he might argue with respect to the facts and the law.” He shared Claude’s outputs with counsel. Those outputs, in turn, influenced counsel’s strategy “going forward.” As Heppner’s counsel stated, Heppner had used Claude for the “express purpose of talking to counsel.”

Heppner asserted privilege over the AI documents. On February 6, 2026, the Government moved for a ruling that the AI documents were not protected by attorney-client privilege or the work product doctrine. After oral argument on February 10, the court granted the Government’s motion. On February 17, Judge Rakoff issued his opinion.

Judge Rakoff decided that the AI documents were unprotected by attorney-client privilege. He found that the AI documents lacked the first two, “if not all three,” of the privilege’s required elements. First, the communications were not “between a client and his or her attorney.” “Because Claude is not an attorney,” Judge Rakoff wrote, “that alone disposes of Heppner’s claim of privilege.” 

Second, the AI documents were neither “intended to be,” nor were they “in fact,” “kept confidential.” Heppner had “communicated with a third-party AI platform.” Also, according to the court, Claude users consent to Anthropic’s privacy policy, which “provides that Anthropic collects data on both users’ ‘inputs’ and Claude’s ‘outputs,’ that it uses such data to ‘train’ Claude, and that Anthropic reserves the right to disclose such data to a host of ‘third parties,’ including ‘governmental regulatory authorities’ . . . [or] in connection with claims, disputes[,] or litigation.” Therefore, “Heppner could have had no ‘reasonable expectation of confidentiality in his communications’ with Claude.”

Third, Heppner did not prepare the AI documents “for the purpose of obtaining legal advice.” Even though Heppner’s counsel asserted that Heppner had used Claude for the “express purpose of talking to counsel,” Heppner did not use Claude at the direction of counsel. According to the court, the question was thus “whether Heppner intended to obtain legal advice from Claude.” And Heppner could not have so intended. The court reported that when the Government asked Claude to provide legal advice, Claude responded that it could not. According to the court, that Heppner intended to share — and in fact did share — his Claude exchanges with counsel did not matter. It could have been a different story if counsel had directed Heppner to use Claude. Then, Claude might have functioned as a lawyer’s highly trained agent, covered by attorney-client privilege under the Kovel doctrine.

Judge Rakoff also decided that the AI documents were unprotected by the work product doctrine: Even if the AI documents were prepared “in anticipation of litigation,” they were neither “prepared by or at the behest of counsel” nor reflective of counsel’s strategy. 

The conclusion that the AI documents fell outside attorney-client privilege was not as inevitable as Judge Rakoff’s opinion might suggest. The court’s reasoning effectively forecloses the possibility that a client’s self-directed use of AI could satisfy the privilege test. But a more fact-intensive analysis — coupled with an intentional account of AI’s role in the attorney-client relationship — would indicate that such use should be privileged in at least some circumstances. After all, the existence of attorney-client privilege should be “determined on a case-by-case basis.”

1. Communications between a client and his attorney.

“Because Claude is not an attorney,” Judge Rakoff wrote, “that alone disposes of Heppner’s claim of privilege.” 

That sentence, however, oversimplifies how courts in practice treat client communications with non-attorney third parties in the context of attorney-client relationships. If the non-attorney in question is a human, then it is generally true that such communication is not privileged.2 If Heppner had confided in a friend about his case, that conversation would not be privileged. Even here, though, there are exceptions to the general rule. Communications among multiple clients who share a “common interest” may be privileged. Judge Rakoff himself mentioned another exception later in the opinion: Communications between a client and an attorney’s non-attorney agent, such as an interpreter or accountant, may also be privileged under the Kovel doctrine. 

But if the non-attorney in question is a tool that a client uses as part of her workflow with counsel, courts seem to treat that tool’s non-attorney status as immaterial to the privilege analysis. When clients use Google Docs to create documents,3 Google Slides to create presentations,4 Gmail to send emails,5 and iCloud to store materials6 to communicate with counsel, courts seemingly do not ask whether the use of those intermediary tools defeats claims of privilege.7 Why, then, did Heppner’s use of Claude to generate material later shared with his attorney automatically defeat his claim of privilege?

The Heppner court assumed sub silentio that Claude was more like a non-attorney human than a tool. One might reasonably question that assumption. On the very same day as Judge Rakoff’s oral decision, the district court for the Eastern District of Michigan (in a civil case concerning work-product protection for a pro se litigant’s ChatGPT-generated materials) emphasized that “ChatGPT (and other generative AI programs) are tools, not persons” and represent “a litigant’s internal mental impressions reformatted through software.”

Admittedly, treating Claude like a tool akin to Google Docs risks understating the radically transformative power of AI. But ultimately, treating generative AI as a tool would produce decisions more consistent with how courts have treated clients’ use of other now-ubiquitous technologies like Gmail or Google Docs. And it would clarify that a meaningful distinction remains between human and machine. 

More fundamentally, the question of whether Claude is an “attorney” is an unhelpful one. In Felix Cohen’s terms, it awards the designation of “attorney” a “supernatural” status, inviting “mechanical jurisprudence” “divorce[d] . . . from questions of social fact and ethical value.” 

At this moment in the story of AI, the better question may be: What kinds of future relationships among clients, attorneys, and AI should we seek to cultivate? Attorney-client privilege, “the oldest of the privileges for confidential communications known to the common law,” exists “to encourage full and frank communication between attorneys and their clients.” It promotes collaboration between attorney and client in pursuit of the most “effective representation” possible, which “depends upon the lawyer’s being fully informed by the client.” We might understand this privilege as promoting the fair and efficient use of resources — both human and technological — within the attorney-client relationship.

Judge Rakoff’s opinion suggested that these AI documents could have been privileged only if counsel had directed Heppner to use Claude. That view would restrict attorney-client privilege to exchanges with large language models (LLMs) that are directly traceable to an attorney. And that rule would cultivate an unattractive relationship among clients, attorneys, and AI, for at least two reasons. First, it would asymmetrically disempower clients and reduce their ability to collaborate as equals with their attorneys in shaping representation.8 Second, it would invite performative adherence to formalities. Attorneys could simply have clients sign boilerplate at the outset declaring that any future use of AI is undertaken at counsel’s direction — an obviously fictional exercise.

2. Confidentiality.

The confidentiality analysis should have taken the form of a fact-dependent inquiry centered on a client’s “reasonable expectation of confidentiality.” The court’s three reasons for finding a lack of confidentiality are unpersuasive.

First, Judge Rakoff wrote that the AI documents were not confidential because Heppner “communicated with a third-party AI platform.” If that is shorthand for the proposition that the AI documents were not confidential because Heppner shared them with — and thereby granted access to — Anthropic, the explanation is unsatisfying. Consider other common software-as-a-service (SaaS) tools. When a client writes emails to her attorney using Gmail, she communicates through a third party: Google. Google generally can, as a technical matter, access those emails9 — and has in fact produced users’ emails in response to search warrants.10 When a client writes messages to her attorney using Slack, Slack likewise generally can access those messages and has also produced them in response to search warrants.11 The same can be true when a client stores materials in Apple’s iCloud.12 Yet courts do not treat communication with these third-party platforms as vitiating a client’s reasonable expectation of confidentiality.13 It would be anomalous to treat generative AI systems differently.14

Second, Judge Rakoff emphasized that Claude users consent to Anthropic’s privacy policy, which “provides that Anthropic collects data on both users’ ‘inputs’ and Claude’s ‘outputs’ . . . to ‘train’ Claude.” However, that explanation does not account for the fact that even free, consumer-tier Claude users can opt out, in a variety of ways, of having their data collected for training.15 If a particular user does opt out, would that not eliminate this particular concern? Here, the pleadings do not make clear which privacy settings Heppner had selected for himself. Nor did the court’s opinion elaborate. Instead, the court seemed to assume that Heppner had in fact opted into training-related data collection. 

Even if Heppner had opted in, it is still not obvious that doing so should have defeated his reasonable expectation of confidentiality. Anthropic, at least, “automatically de-link[s]” data from a user’s ID before using it. And generally, “[m]odels do not store text like a database.” “Instead, as a model learns, the values of its parameters are adjusted slightly to reflect patterns it has identified.” Once trained, models “do not have access to or pull from the original training data.” Therefore, the fact that an AI platform collected a user’s data for training does not necessarily mean that the finished model will preserve the user’s conversations as human-readable text.16

Third, Judge Rakoff noted that “Anthropic reserves the right to disclose [user] data to a host of ‘third parties,’” which “puts Claude’s users on notice that Anthropic, even in the absence of a subpoena compelling it to do so, may ‘disclose personal data to third parties in connection with claims, disputes[,] or litigation.’” But that aspect of Anthropic’s privacy policy is also an unconvincing basis for finding a lack of confidentiality, for several reasons. For one, that sort of policy is not unusual among other SaaS providers. Google’s and Slack’s are similar.17

Also, it may seem odd that the existence of attorney-client privilege — especially in a setting as high-stakes as criminal defense — would turn on something as variable and elusive as a private AI company’s privacy policy. Over the last several years, both Anthropic and OpenAI have repeatedly updated their privacy policies — in Anthropic’s case, often every few months. And “[s]ocial science research reveals that consumers do not read or understand privacy policies,” anyway. The court’s assertion that Anthropic’s “policy clearly puts Claude’s users on notice” therefore may be overstated. 

The touchstone here should remain the user’s reasonable expectation of confidentiality — not an AI company’s technical or contractual ability to access user data. 

How courts have approached “reasonable expectations of privacy” under the Fourth Amendment may provide helpful guidance. In Katz v. United States,18 Justice Harlan in concurrence articulated a “twofold requirement” for Fourth Amendment protection: “first that a person have an actual (subjective) expectation of privacy and, second, that the expectation be one that society is prepared to recognize as ‘reasonable.’” In the context of AI and attorney-client privilege, if a free, consumer-tier Claude user takes all possible precautions by opting out of data collection and retention at every turn, an expectation of confidentiality may well be reasonable. 

Furthermore, the Court’s holding in Kyllo v. United States19 supports the idea that even if some uncommon device could access otherwise “unknowable” information, that should not diminish reasonable expectations of privacy. In the context of AI and attorney-client privilege, courts should untether reasonable expectations from what is technically possible in the world. That is, a user’s reasonable expectation of confidentiality should not be defeated solely because an AI company could, in theory, access her data. 

3. Purpose.

Judge Rakoff framed the third and final element of privilege as “whether Heppner intended to obtain legal advice from Claude.” But what the court should have asked instead was whether Heppner intended to use Claude to facilitate obtaining legal advice from his attorney.

The Heppner court focused on the fact that “Heppner did not [use Claude] at the suggestion or direction of counsel.” However, courts have long recognized that a client’s self-directed notetaking may still be privileged when undertaken to facilitate legal advice from counsel.20 In such cases, courts consider facts such as whether the notes “list any questions requiring the legal advice of counsel”;21 form “an agenda or set of reminders” to discuss with counsel;22 recount “events and conditions” the client thinks her counsel “need[s] to know”;23 and were in fact communicated to counsel.24

This line of notetaking cases makes clear that advice from counsel is the end-goal — not advice from the intermediary used to communicate with counsel (i.e., a client’s own notes). And in a case like Heppner, asking whether a client used Claude to facilitate obtaining legal advice from counsel would properly situate AI as a tool within the attorney-client workflow. If privilege doctrine is to reflect the kinds of relationships among clients, attorneys, and AI that we aim to cultivate moving forward, this framing would avoid relying on a muddled conception of Claude as an anthropomorphized third party — and would keep the relationship between attorney and client in focus.

The court’s articulation of the purpose test, by contrast, would seem to suggest that the closer a client’s use of AI comes to substituting fully for an attorney’s judgment, the more likely it is to be protected by privilege. Yet the court’s application of its own test would seem to categorically preclude protection for clients’ use of today’s frontier AI models. To conclude that Heppner did not intend to obtain legal advice from Claude, Judge Rakoff pointed to the fact that “when the Government asked Claude whether it could give legal advice, it responded that ‘I’m not a lawyer and can’t provide formal legal advice or recommendations.’” But that response proves little about any given user’s intent. LLMs are designed to disclaim providing legal advice, likely to minimize the risk of liability for unauthorized practice of law (UPL).

If the court had asked instead whether Heppner used Claude to facilitate obtaining legal advice from counsel, the answer might have been yes: His “express purpose” was “talking to counsel.” More broadly, though, should a client’s self-directed use of AI be privileged in the same way as a client’s self-directed notetaking? 

Today, AI is different in kind, and not just degree, from traditional tools for client notetaking and message transmission. Of course, a client could use AI simply to “reformat[]” questions for her attorney into bullet points, much like taking notes in Google Docs. But AI is capable of — and is being used for — far more. It can generate substantive reactions and predictions in a manner resembling professional judgment. A client can ask AI to give feedback on questions to ask her attorney, or (as Heppner appeared to have done with Claude) brainstorm legal strategies to bring to her attorney. That kind of dynamism and interactivity in a technology is thus far unprecedented.25

What will it mean, then, to use AI for the purpose of obtaining legal advice from counsel? When technology outpaces analogy, courts should pause to remember why attorney-client privilege exists at all: to create a zone of “full and frank communication” between attorney and client, and to support confidential pressure-testing that enables the most “effective representation” possible. We now have a new technology that could radically improve a client’s ability to communicate with her attorney and participate in the attorney-client relationship as an informed and prepared collaborator. It would be contrary to the privilege’s function to foreclose clients from using it.

Conclusion

There is a serious interest on the other side of the equation: the principle that “all relevant proof is essential” for “the fair administration of justice.” One might question whether applying privilege to clients’ use of AI would dramatically expand the zone of protected communications, to the detriment of the administration of justice. Functional, fact-dependent inquiry, however, should bound the size of that zone, just as it has for technological tools that have come before.26 The alternative — categorically excluding clients’ use of AI from privilege — would be unsustainable. AI “has become part of everyday life for many Americans,” not only through standalone tools such as Claude and ChatGPT but also through its creeping integration into familiar tools such as Gmail and Microsoft Word.

District courts across the country are now charting new territory at the intersections of AI and copyright, First Amendment, and privacy law. Heppner offers the first word on whether a client’s use of generative AI within an attorney-client relationship can qualify for privilege. Future courts confronting similar questions should resist the opinion’s categorical tilt, attend carefully to the facts of each case, and consider how privilege can operate to promote effective collaboration between clients and attorneys in this age of AI.