Why AI has hit peak hype for cautious lawyers

George Lawton, June 25, 2024
AI for legal software was all the rage at the LegalTechTalk Conference. Meanwhile, lawyers are cautiously kicking the tires.


At the recent LegalTechTalk conference in London, every vendor was touting new AI capabilities. These included generative AI tools that could assist in finding relevant cases, summarizing complex documents, understanding intricate contracts, and enhancing eDiscovery processes.

However, the lawyers present seemed to be taking a more measured approach owing to concerns about hallucinations, accuracy, governance, and trust. For example, none of the lawyers on a panel about legal innovation mentioned AI once. A bit more was said at a panel on AI for lawyers, but in an exploratory, cautious, wish-list kind of way.

For example, Christopher Tart-Roberts, Chief Knowledge & Innovation Officer at law firm Macfarlanes, talked about using AI as a second pair of eyes to give him feedback on his own writing. Alistair Wye, Lead Innovation Technology Solutions Attorney at Latham & Watkins, imagined how academic tools for organizing relevant citations into a grid might be adapted to legal use cases for reviewing, synthesizing, analyzing, and comparing sources.

Minesh Tana, AI Lead at Simmons and Simmons, believes that some of the new legal gen AI tools are helpful for making sense of complex legal domains adjacent to his current expertise. They are also good at distilling longer submissions into shorter ones, which are then easy for an expert to double-check. But he notes that accuracy and hallucinations are still big issues that need to be considered, stating:

We still have issues around hallucinations and accuracy, but it can give you a very helpful and expedited starting point.

Conversing about risks

Tart-Roberts says his firm is starting to have more conversations with clients about what the use of generative AI tools means for productivity and business risk. Some clients are concerned owing to alarmist articles out there. He notes:

I think it's also about having good conversations with your clients, so they understand what's happening to their data.

There was a range of opinions about the level of depth and detail to go into with clients. At one end, Wye suggested that taking the time to really understand and communicate the underlying technology, and how various approaches shape risks, can help improve collaboration and adoption in the long run.

At the other end of the spectrum, Tart-Roberts suggests that spending too much time teaching lawyers the nuances of information security and contractual implications can come at the expense of exploring new opportunities, arguing:

Sometimes those conversations can hold back the conversations that you actually wanted to be having - the exploratory conversations for actually getting people to use the technology and figuring out where it can deliver value in the context of their day-to-day work.

Transformation requires better integration

One gripe about current AI tools and capabilities for lawyers has been limited integration with each other and across data sources. This matters because lawyers often have to distill and synthesize information from many sources when making an argument or explaining a complex concept to a client.

Tart-Roberts explains that better integration will be required to drive digital transformation in the legal industry:

Right now, I think the risk that we are running in the way that AI is being incorporated into different products is that we're having silos of AI that are tapping into single data sources. So, if I'm the lawyer, I'm helped because I go to each of those data sources individually, and I get a more sophisticated output than I might have gotten before… But I'll need to get one from each of those places because, at the moment, they aren't talking to each other.

This means that at the end of it, I've still got five or eight inputs to distill and synthesize myself, or to take advantage of another AI to bring more together. And that's not pushing us very far. We are not going to get the real transformation that I think the technology could facilitate unless we figure out a way of AI aggregating all of those data sources and bringing them all together, so that our lawyers can do their jobs in a way that is not artificial, which actually fits with the way that we work.

Governance essential

Legal technology and AI vendors also face considerable hurdles in building trust with lawyers and regulators, who tend to be a skeptical lot. The room erupted into laughter when Tana noted that, as a grumpy litigator from Yorkshire, he is a natural pessimist. But on a more serious note, he continues:

I do think we're missing governance as a huge area. We can debate the powers of AI. We are dealing with a form of technology that can cause an existential threat to society. We're dealing with an industry where truth and accuracy are paramount. If you get it wrong in the criminal justice system, for example, someone goes to prison when they shouldn't go to prison. So, when you put those two things together, it's absolutely vital that we're deploying this technology safely and responsibly.

Tana points to the example of the airline industry, whose solid regulations and long safety track record engender trust in flying, while AI has neither. He cautions:

With AI, we've got this thing that's got the capacity to cause harm. We have no track record of safety, really. And we have no regulation, and that is sort of still happening now.

My take

It seems the first loud cautionary tale of AI came when a US lawyer was fined and had his case tossed after submitting a ChatGPT-written brief with hallucinated case citations. But that's just the tip of the iceberg in terms of how AI could undermine justice when poorly implemented. One big challenge is that the problems may not show up the day an AI tool rolls off the assembly line, but only after a slow process of erosion.

Tana's perspective on airline safety is cautionary in more ways than one. It is true that the airline industry has one of the best safety records of any industry. It's also true that this record has been called into question by a series of safety failures that have come to a head over the last couple of years and continue to be investigated.

Getting AI safety right in the legal industry may require an ongoing, pragmatic process of learning and inquiry to prevent a similar erosion of safety.