Wakeling was particularly impressed by Harvey’s skill in translation. The tool is strong in core areas of law but struggles in certain niches, where it is more prone to hallucinations. “We know the limits and people are very well informed about the risk of hallucinations,” he says. “Inside the firm, we’ve put a lot of effort into developing a great training program.”
Other lawyers who spoke to WIRED were cautiously optimistic about the use of artificial intelligence in their practice.
“It’s definitely very exciting and certainly indicative of some fantastic innovation happening in the legal industry,” says Sian Ashton, client transformation partner at law firm TLT. “However, this is definitely a tool in its infancy, and I wonder if it really does much more than provide precedential documents that are already available in business or through subscription services.”
Artificial intelligence will likely remain used for entry-level work, says Daniel Sereduik, a data protection lawyer based in Paris, France. “Drafting legal documents can be a very time-consuming task that AI seems to be able to understand quite well. Contracts, policies and other legal documents tend to be prescriptive, so AI’s ability to gather and synthesize information can do a lot of the heavy lifting.”
But as Allen & Overy found, the output from the AI platform requires careful analysis, he says. “Part of practicing law is understanding your client’s specific circumstances, so the outcome is rarely optimal.”
Sereduik says that while the outputs of legal AI require careful scrutiny, the inputs can be equally difficult to manage. “Data fed to AI can become part of model data and/or training data, and this is likely to breach customer and individual data protection obligations and privacy rights,” he says.
This is a particularly pressing issue in Europe, where the use of this kind of artificial intelligence could violate the principles of the European Union’s General Data Protection Regulation (GDPR), which governs how much personal data companies can collect and process.
“Can you legally use software built on this foundation [of mass data scraping]? In my opinion, this is an open question,” says data protection expert Robert Bateman.
Law firms will likely need a solid legal basis under GDPR before handing over any personal client data they control to a generative AI tool like Harvey, as well as contracts covering the processing of that data by the third parties operating the AI tools, says Bateman.
Wakeling says that Allen & Overy does not use personal data to deploy Harvey, and will not do so unless it is satisfied that any data will be fenced off and protected from any other use. It will be up to the company’s information security department to decide when this requirement will be met. “We’re very careful about customer data,” Wakeling says. “At this point, we’re using it as a non-personal data, non-customer data system to save time on research or drafting, or making an outline for slides, things like that.”
Regulation is already getting tougher when it comes to feeding generative AI tools with personal data. In Europe, the EU’s AI Act seeks to regulate the use of AI more strictly. In early February, Italy’s data protection agency intervened to stop the generative AI chatbot Replika from using the personal data of its users.
But Wakeling believes that Allen & Overy can use artificial intelligence to keep customer data safe and secure while improving the way the company operates. “This will have a significant impact on productivity and efficiency,” he says. Small tasks that would otherwise take up precious minutes of a lawyer’s day can now be outsourced to AI. “When you combine that with more than 3,500 lawyers who have access to it now, that’s a lot,” he says. “Even if it’s not a complete break, it’s impressive.”