
From AI and Law
AI tools such as these have the potential to assist lawyers in quickly analysing vast amounts of legal information to identify relevant case law, statutes and regulations. This could help lawyers make better-informed decisions and improve the quality of their work. Such assistance may prove particularly useful for smaller legal teams and solo practitioners, who have previously been at a disadvantage in legal research due to limited access to extensive resources and large teams of lawyers.
Despite the many benefits of such AI tools, there are also potential drawbacks to consider. One commonly voiced concern is that AI tools could replace human lawyers, leading to job losses in the legal industry. ChatGPT itself recently passed law exams in multiple courses at universities in the US (a laudable achievement, albeit only at the level of a C+ student).
However, while AI tools have the potential to fundamentally shift how lawyers spend their time, improve efficiency and reduce the time spent on certain tasks, they cannot (yet?) replace the human judgment and legal expertise that lawyers bring to their work. As A&O has confirmed, Harvey is to them an innovative way of working rather than a “cost-cutting exercise”. As such, A&O states that Harvey will not replace any part of its workforce, reduce billable hours or save money for the firm or its clients. PwC has echoed these sentiments.
Other concerns levelled against the use of generative AI in the legal profession include its potential to “hallucinate” (convincingly present something entirely made up as fact) and the question of whether such platforms will be given access to sensitive client data.
To address concerns about errors, A&O says that safeguards have been put in place at the model level for Harvey, and that all outputs will still be reviewed by qualified persons. On sharing client data, the firm states that client confidentiality remains a key priority and that Harvey, as a tool designed for the legal industry, has multiple ways to ensure client confidentiality. Harvey will, however, not interact with client data until A&O deems it safe to do so.
It should also be kept in mind that AI tools could perpetuate bias and discrimination. They are, after all, only as good as the data they are trained on, and if that data is biased or discriminatory, the AI tool may well perpetuate that bias. This is of particular concern in the legal industry, and training lawyers to be aware of and reflective on these issues may prove necessary.
Despite these concerns, the use of AI tools in the legal profession is growing rapidly and will likely continue to do so, with more firms and in-house legal teams expected to adopt tools like Harvey in the future. This is especially true in Asia, where the legal industry is rapidly expanding and the demand for legal services is ever-increasing.
There is no doubt that generative AI has enormous potential to revolutionise the legal industry by streamlining processes, increasing efficiency and reducing costs. However, as with any new technology, there are also potential risks and drawbacks. It will be crucial for law firms to weigh the benefits and challenges of generative AI and to implement appropriate safeguards to mitigate the risks.