Luminance CEO – using AI to automate the law one task at a time

Chris Middleton, April 24, 2024
Summary:
A UK AI provider is focusing on automating parts of the legal space. But is it a smart contract or a Faustian one? We hear from the CEO.

(Image by mohamed Hassan from Pixabay)

“If you can describe what you do for a living, then your job can - and will - be automated.” So said computational neuroscience PhD Anders Sandberg back in 2016, when he was Senior Research Fellow at Oxford University’s Future of Humanity Institute.

Speaking at a robotics and AI seminar at the Japanese embassy in London that year, Sandberg argued that as more and more jobs become rules-based, or can be broken down into sets of repeatable tasks, they become more and more susceptible to machine automation.

Did futurist Sandberg predict that his Institute would close last week (on 16 April)? I don’t know. But here in the AI Spring of 2024, there is every sign that he was right about automation, as rising numbers of tasks are handed to machine ‘intelligences’ - or, at least, to models trained on scraped human knowledge.

And some areas are more suited to this than others. Take legal services.

Despite its nuances and local differences, the law is a set of rules - albeit one that evolves via precedent and landmark judgments. So, it is hardly surprising that legal services - along with other rules-based professional markets, such as accounting, auditing, banking, and finance - are prime candidates for AI.

While we are not quite at the stage of robo-judges and juries, some aspects of law work have already been handed to AI. It has not always gone well - witness the hapless Manhattan lawyer whose use of ChatGPT last June saw him unwittingly present fake case law in court. But other examples have been more positive. 

For example, AI has been used in a murder trial at the Old Bailey criminal court in London. And in November last year, an AI negotiated a legal contract with another AI, with no human in the loop - at least, until the contract was signed. 

The AI in the latter case was a proprietary co-pilot and cloud-based Large Language Model developed by UK start-up Luminance. The company says its “legal-grade LLM” can both read and form a conceptual understanding of documents, labelling key information and highlighting areas of risk or opportunity for its human masters.

In this way, Luminance aims to help busy law firms and legal departments understand important documents quickly - it illuminates them, hence the name.
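
Luminance has not published the internals of its system, and nothing in what follows is its actual API. But as a purely illustrative sketch, a clause-level review pipeline of the kind described above might look something like this in Python, with llm_complete standing in for whatever language model a builder chooses:

```python
# Illustrative only: a minimal clause-flagging loop, not Luminance's method.
import json

def llm_complete(prompt: str) -> str:
    """Placeholder for any chat-completion client (swap in a real one)."""
    raise NotImplementedError("plug in a real LLM client here")

def split_into_clauses(contract_text: str) -> list[str]:
    # Naive split on blank lines; production systems parse layout properly.
    return [c.strip() for c in contract_text.split("\n\n") if c.strip()]

def review_contract(contract_text: str) -> list[dict]:
    """Ask the model to grade each clause, and collect anything non-trivial."""
    flagged = []
    for clause in split_into_clauses(contract_text):
        prompt = (
            'Grade this contract clause. Reply with JSON only: '
            '{"risk": "low" | "medium" | "high", "reason": "<one line>"}\n\n'
            + clause
        )
        verdict = json.loads(llm_complete(prompt))
        if verdict["risk"] != "low":
            flagged.append({"clause": clause, **verdict})
    return flagged
```

The interesting engineering, of course, is everything this sketch waves away: reliable clause segmentation, grounding the model’s ‘reason’ in the actual text, and deciding what ‘risk’ means in a given jurisdiction.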

Despite being founded as recently as 2015 (by Cambridge machine learning experts), Luminance is already trusted by over 600 organizations worldwide and has offices in Cambridge, London, New York, and Singapore. Earlier this month, it raised $40 million in Series B funding. All impressive achievements.

Eleanor Lightbody (nominative determinism strikes again?) is Luminance’s CEO. She joined three and a half years ago, having previously been Director of AI at cybersecurity company Darktrace. She tells me that her belief in adopting AI in legal services has a personal root:

I grew up in a household full of lawyers. I would try and get into my Dad's office and, I kid you not, there were boxes and boxes of stacked documents. You’d open the door, and you almost couldn't get in because there was just so much paperwork. 

Then, obviously, everything moved online, but he would still face the same problems. He would spend hours and hours with his team going through a contract just trying to find the key information. At those times, we would hardly see him!

So, that is a problem that AI can definitely address: being able to read and understand contracts at a much faster rate, and pick out subtleties and other things of interest.

But on the other hand, I built my career in sales, and so, obviously, interacted with the legal team there. I would call them up and say, ‘Where's this NDA?’ Or ‘Where's this contract I want to get signed today?’ But there are only so many hours in a day, which is the other problem to address. 

The fact is legal teams are resource limited. So, how can AI help automate and augment a lot of that work that they're doing for different departments on a daily basis? Contracts are very manual, very time consuming, very expensive, and can introduce a lot of risk. So, we were the first company to begin to automate that process from end to end.

Putting her sales hat back on for a moment, she adds:

Luminance is AI for wherever businesses interact with their contracts. It's AI to help them review and negotiate contracts faster, automating that process. It will tell you the nuances of all of your contracts. It will tell you which parts of the contract you can agree to, and the ones you can’t. Plus, for anyone who wants to create a contract, it will allow them to do that in seconds.

OK. So, if Luminance has developed a proprietary LLM, trained on different types of contracts and the rules that surround them, does Lightbody foresee a time when her company might automate other areas of the law? After all, much of the legal realm and its procedures is founded on precedent, judgments, and case law: all potential grist for the AI mill. It’s those automatable rules once again.

She says:

Our customers tell us on a daily basis that they're using this beyond just legal contracts, they're using it for letters of intent, scopes of work, SLAs... So, we already know that use cases have broadened out from contracts. 

So, for me the question is, if our AI can conceptually understand text, then what are the other use cases that would benefit problems that businesses face today? 

Some of those might be looking at a compliance module. If AI can help surface key information instantaneously, then what if we take that one step further? What if there's a new regulation and the contract can alert you if it needs updating? I'm really interested in this idea.

Likewise, let's think about how we can start automating the invoicing process. So yeah, because we are an AI company, we can expand into lots of different use cases. 

But at the same time, we want to remain focused. We want to provide a return on investment to companies - in any industry and any country [and in any language, she adds]. And really, to organizations of any size who are looking to get through their legal contracts faster and identify hidden opportunities or risks.
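
Her compliance idea is worth pausing on, because it maps onto a well-known pattern: represent both the new regulation and each contract numerically, then surface the contracts that sit closest in meaning. The sketch below is hypothetical - the embed function is a stand-in for any text-embedding model, not a Luminance feature:

```python
# Hypothetical sketch: flag contracts that a new regulation may affect.
import math

def embed(text: str) -> list[float]:
    """Placeholder for any text-embedding model (swap in a real one)."""
    raise NotImplementedError("plug in a real embedding client here")

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: 1.0 means semantically very close.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def contracts_to_review(new_regulation: str, contracts: dict[str, str],
                        threshold: float = 0.75) -> list[str]:
    """Return the names of contracts semantically closest to the changed rule."""
    reg_vec = embed(new_regulation)
    return [name for name, text in contracts.items()
            if cosine(reg_vec, embed(text)) >= threshold]
```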

Devil not in the details shock

It used to be said that the devil is in the details. But when it comes to AI, the details seem to be where the opportunity lies; the devil is really in the bigger picture, especially in legal services. That’s because reading and interpreting complex documents - identifying the opportunities, risks, and nuances within them - is what expert lawyers used to do.

Arguably, automating that process risks time-poor professionals becoming disconnected from critical detail at first hand, and coming to rely on AIs to explain things and do the hard work for them. 

Remember how quickly that Manhattan lawyer trusted ChatGPT to present him with relevant case law, abdicating professional responsibility barely six months into the AI Spring? It took a judge to spot the AI’s hallucination - in court. And consider all those students and school pupils who are already relying on LLMs for their basic coursework. 

Common sense suggests that these ‘disconnection’ problems - automated laziness, perhaps - will become more and more widespread. Until, that is, AIs become so well informed that lawyers themselves become irrelevant, or merely the PR arms of machine processes. Just load your case into the judge-o-tron and hit ‘generate’! That would be great for clients’ legal costs and for lowering the barriers to accessing legal services, of course - a win in many ways.

But as has been discussed in diginomica this year, the concomitant risk is that senior professionals pull up the ladder to success behind them, as the gulf between a newly qualified junior and an expert informed by a lifetime of first-hand experience gets wider. Does Lightbody agree?

She says:

I think - slightly - that it allows juniors to learn faster. And it enables them to pull on information they might not have had, historically. I think it can accelerate their skill sets and expertise. 

So, some might feel that the ladder has been pulled up. But especially with senior lawyers or those who have been practising for a very long time, they're normally so busy that it's very hard for them to impart the knowledge they've gained over 10 or 20 years. 

So, if you could use technology to ‘institutionalize’ that knowledge, it would allow someone who is relatively new to their career to call on that. They would get more information, and so get to grips with things sooner, and be trained faster and more efficiently. 

I think machines will definitely take work off humans’ plates, but a lot of it will allow for better learning and a better experience.

Good answer. But out there in the cutthroat market, these vertical and horizontal opportunities will not be invisible to those US Big Techs who are either developing or backing LLMs and cloud-based generative AIs. 

So, how important is the niche, professional focus of a company like Luminance, with its trusted data sets? And how big a role might they have in educating their professional clients not to paste privileged data into generalist public-cloud tools? She says:

For us, it’s really important that, when you're working in the legal sphere, you have built very specialized domain expertise, a very deep vertical language model.

Cloud-based generative AI is great, but it’s like going to a dinner party where you sit next to someone who is good company, and you have an amazing conversation. But when you get home, you realise that a lot of the information they gave you just isn’t right!

But in the legal world, you need an AI that either tells you that something is legally and factually correct, or that it doesn’t know the answer.
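
In engineering terms, what Lightbody is describing is an abstention policy: the system answers only when its confidence clears a bar, and otherwise declines. A toy version of that rule - with every name here hypothetical, not a description of Luminance's product - might look like this:

```python
# Hypothetical sketch of an answer-or-abstain rule for a legal assistant.
from dataclasses import dataclass

@dataclass
class Verdict:
    answer: str
    confidence: float  # e.g. from self-consistency voting or a verifier model

def ask_legal_model(question: str) -> Verdict:
    """Placeholder for a model call that also yields a confidence score."""
    raise NotImplementedError("plug in a real model and scoring step here")

def answer_or_abstain(question: str, threshold: float = 0.9) -> str:
    verdict = ask_legal_model(question)
    if verdict.confidence >= threshold:
        return verdict.answer
    return "I don't know - this needs human review."
```

The hard part, naturally, is producing a confidence number worth trusting: a miscalibrated threshold gives you either a useless assistant or a confidently wrong one.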

Even so, it can’t be long before Big Techs that have the buying power of a top-10 economy buy up all of the world’s specialist domains in AI terms, and we become more and more reliant on a handful of providers for all our critical information - creating an economic black hole, a massive centre of gravity. Does Lightbody agree? 

I think that's an unbelievably good question. I think, mostly, that generalist models will become commoditized. And that's the race at the moment, isn’t it? Between Microsoft, Google, and so on. 

Then, personally, I believe that what we'll see is, as they become commoditized, more and more specialist companies will come out and build on top of them with their own domain expertise. And it's that domain expertise that will set smaller companies apart.

My take

A UK success story with a canny leader who has deep experience in the field: good news, in general terms. But the wider issues emerging in these spaces need more careful consideration: not in a company, perhaps, but in society as a whole.
