AI experts expect that the legal profession, like many others, will be revolutionized and become far more efficient. Here’s how.
With the release of GPT-4 in March, artificial intelligence has vaulted into the public eye. As the technology rapidly advances, how will it streamline – and potentially threaten – legal jobs? AI experts and journalists explained how AI can help lawyers work more efficiently, along with what they should watch out for. [Transcript | Video]
5 takeaways:
➀ AI won’t replace your job – at least not right away.
“I don’t think it’s going to happen for a while, and I think it’s going to happen quite suddenly when it does arrive,” said Calum Chace, author of “Surviving AI.” “And until then, there’s going to be lots of jobs for people. As long as there are things that humans can do that machines can’t do, there’ll be lots of jobs.” Individual jobs may be automated, which will create wealth that can in turn create new jobs, Chace posits.
In the meantime, AI can make legal jobs easier. Last month, LexisNexis announced Lexis+ AI, a program that will allow users to ask legal questions, grounded in the database’s verifiable content. AI can help lawyers save time on legal research, document drafting and summarizations, said Jamie Buckley, chief product officer for LexisNexis Legal & Professional. In its demo, Lexis+ AI is asked to write a cease and desist letter. It produces one and is then asked to revise it to be more aggressive. It does so and highlights the places where it changed the language, such as saying “we hereby demand” and threatening “swift and aggressive legal action.”
Some lawyers have expressed concern that AI will reduce their number of billable hours because they will spend less time on administrative work, but “people will be able to do more interesting billable work because the drudge work will be done by machines,” Chace said.
AI has “an innumerable number of benefits,” said John D. Villasenor, a professor of law, electrical engineering, and public policy at UCLA. Like the internet, it comes with its risks, but “the benefits outweigh the problems.” For instance, AI has the potential to “revolutionize drug development” and improve access to education, he said.
➁ Why does it seem generative AI developed so suddenly?
We’re seeing “an explosion of GPTs,” Chace said. The most recent, GPT-4, marked “a huge leap” from previous iterations. Software and hardware development, once distinct, are now blurring together, said Marie C. Baca, tech journalist and adviser to Coastside News Group and the New Mexico Local News Fund. “They have to be developed in concert,” she said. More powerful hardware, such as AI-specific chips and processors, combined with improved algorithms, neural networks and language models, has allowed for increasingly sophisticated AI.
➂ Journalists should use AI, but be careful.
Journalists often work under tight deadlines with limited resources. AI can alleviate some of that pressure, but journalists should be mindful of whether the information they feed an algorithm will stay private and whether the information they receive is accurate.
The best way to understand the potential and limits of the tools is to play with them yourself. “They’re pretty easy to use, and you can pretty quickly see some of the pitfalls with these things,” Baca said. One all-too-common pitfall is “hallucinations.”
➃ AI hallucinations can be mitigated, but it takes investment.
Hallucinations occur when an AI model invents inaccurate material. Left unchecked, hallucinations can severely damage a case, harm a lawyer’s or a journalist’s reputation and erode public trust. In New York, two lawyers may face punishment for including fabricated AI-generated legal precedents in a court filing.
However, AI developers can create protocols to reduce hallucinations. Lexis+ AI integrates its private large language model with prompt engineering and search integration for accurate results.
“It also ensures that up-to-date content is being considered,” Buckley said, noting that in pre-trained models there’s a cutoff date to what they’re trained on. “ChatGPT right out of the box is pretty incredible … but it’s just not accurate enough for real-life legal use cases.”
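The “search integration” Buckley describes is broadly similar to what AI developers call retrieval-augmented generation: instead of asking a model to answer from memory, the system first retrieves documents from a trusted, up-to-date database and instructs the model to answer only from them. Here is a minimal, illustrative sketch of that idea in Python – the document store, cases and functions below are hypothetical stand-ins, not Lexis+ AI’s actual implementation:

```python
# Illustrative sketch of retrieval-grounded prompting (hypothetical example,
# NOT LexisNexis's actual system). The idea: retrieve verified source
# documents first, then confine the model's answer to those sources.

# Stand-in for a verified legal database (the case names are invented).
TRUSTED_DOCS = {
    "smith-v-jones": "Smith v. Jones (2019): a cease-and-desist letter must identify the claimed right.",
    "doe-v-roe": "Doe v. Roe (2021): statutory damages require proof of willful infringement.",
}

def retrieve(question: str, docs: dict, k: int = 2) -> list:
    """Toy keyword retrieval: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str, docs: dict) -> str:
    """Assemble a prompt that restricts the model to the retrieved sources."""
    sources = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return (
        "Answer using ONLY the sources below. If the sources do not "
        "contain the answer, say so instead of guessing.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt("What did Smith v. Jones hold?", TRUSTED_DOCS)
print(prompt)
```

Because the retrieved sources are injected at query time, the model is not limited by its training cutoff date – which is the point Buckley makes about keeping content up to date.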
➄ It will be difficult to fact-check AI.
Some groups have pushed to have AI-generated images include watermarks. Microsoft agreed to augment the metadata of images to indicate whether they were AI-generated. While some images will have indicators that they are AI-generated, and lawyers and journalists can use reverse image search to verify images, there currently is no “foolproof method” to tell whether an image is AI-generated, said Jennifer Conrad, a reporter for Inc. “There really isn’t a way to verify these things at this point, and that is a real concern that I think everyone should be looking at.”
However, existing consumer protections and copyright law still apply. The EU has implemented a notification requirement for companies that use AI. If the U.S. adopts similar policies, it will likely produce a state and local “patchwork,” much like the country’s existing data privacy laws.
“One of our most sacred duties is to hold the powerful accountable. We know that artificial intelligence is an incredibly powerful tool … The extent to which the community of journalists in this country and other countries are going to be able to hold it accountable is another question,” said Baca. “For all of the journalists, reporters out there, it’s really important to have at least a basic understanding what it is that’s going on so that everyone, not just the national and international reporters, but the local newsrooms can make sure that they’re keeping an eye on these developments.”
This event is sponsored by RELX, a global provider of analytics tools, including LexisNexis. The National Press Foundation is solely responsible for the content.