Legal AI slop is becoming a real problem
ctinsider.com:
> Lawyers for the Brooklyn, N.Y.-based landlord submitted a brief to a lower court containing "hallucinatory" citations created by generative AI
Oof. I really can't believe this. Well, I can, but I don't want to.
Well Dusk Dozer, I got some bad news for you: https://www.damiencharlotin.com/hallucinations/
Great link, thanks.
> While seeking to be exhaustive (982 cases identified so far),
Not sure what I was expecting, but that number is too high.
I had a legal appointment recently to update my will, medical directive, etc. The lawyer had Gemini open on the left half of her screen and the legal docs on the right, which did not instill confidence. Every time she seemed unsure of an answer to one of my questions, I got extra paranoid, though I tried not to discount her just for having the LLM open; to my recollection, she didn't use it during our appointment. Nevertheless, while I usually check and update these forms every five years, I plan on doing the next round much sooner, because that appointment did not give me the assurance I wanted.
I'm glad it wasn't for anything pressing or in support of a lawsuit.
I recently filed a lawsuit in federal court, but because of the nature of the suit (an adversary proceeding in a bankruptcy case, and I wanted to cut my losses knowing collection is going to be the problem), I decided to do it pro se.
I've used a lot of AI for this, alongside a lot of research of my own: reading documents from similar cases, verifying citations, etc. So far things are going well; I've won on every motion. But I'm applying critical thinking and carefully reviewing everything.
The real failure with slop filings is procedural, not technological. A competent attorney should never submit a brief built on case law they haven't verified. Legal practice has always relied on reading the sources, confirming relevance, and taking responsibility for the interpretation.
> "Unfortunately," they wrote, "Counsel did not notice that AI had intuitively made changes to the brief prior to filing."
What does "intuitively made changes" mean here?
It means "it's not our fault, it's the AI's, and if the judge is not too tech-savvy, he'll probably buy that".
First, the lawyers will discover that AI is inherently unreliable.
And then they will monetize educating the rest of the world to this fact.
In other words, applying current AI to anything "important" is a liability issue waiting to happen.
s/Legal // ftfy