
Essay Competition Winner

Legal Decisions Should Be Automated Using Algorithms. Discuss.


This essay was a winning entry to the 500 Word Essay Competition run by Immerse Education, an organisation that aims to stretch the most able pupils by creating challenges and opportunities. Congratulations, Sapphyre!

Automation is the concept of “finding a substitute for nerve and brain” and increases productivity whilst reducing human involvement. Since its birth at Ford, automation has been limited to mundane activities. However, artificial intelligence (AI) provides opportunities for it to be applied to circumstances involving more complex thought, such as making a legal decision. After all, it is “difficult to ignore the fact that code can be used to produce regulatory effects similar to law”.

Comparing decision theory in law and in science is the most basic form of how law and technology could interact. Legal decision-making relies on truth and justice, whilst that of technology is confined to fact: “equality before the law is not the same as uniformity for the scientist”. This might suggest that automating legal decisions would eliminate a lawyer’s unconscious bias, but that is not entirely the case. In fact, biases seem to be equally prevalent in automated systems because algorithms are restricted to previously entered data, which can “allow [them] to inherit the prejudices of prior decision makers”. The 2020 case in which GCSE and A-Level grades were assigned algorithmically shows that “historical data cannot reliably predict answers in new situations” and that in automated decision-making it is the third party (the students) who suffers from algorithmic misjudgements.

Similarly, “errors committed by automated law systems are centralized”, so negative externalities, where neither the system nor the user but third parties suffer, would be common. This is hugely counter-productive, particularly in the automation of taxpaying, where “the group that bears the burden of that error is the general taxpaying public”. Therefore, for legal automation to develop successfully, liability must be adapted so that errors fall on the system itself; in this way the “incentive to redesign behaviour for legal advantage” is mitigated and the authenticity of computerised legal decision-making is not compromised.

The application of role-reversibility enables “democratic legitimacy” even without the interference of AI. This indicates that “the legal system will be ready for robo-jurors and robo-judges when it incorporates robo-defenders”, so for now we must “insist on human judgement as an essential aspect of certain kinds of decision making”. Even with accurate coding, “we should expect errors in translation”, as programmers tend to have limited legal knowledge. Nevertheless, misjudgements are also widespread within non-automated legal decision-making, so how do automated inaccuracies differ? Perhaps more effective would be a blended approach, in which expert lawyers and programmers formulate the algorithms together rather than managing their respective concerns individually.

Ultimately, automating legal decisions offers an invaluable prospect for incorporating technology into law. Although caution should be taken, the only way it can prosper is through testing, and so it should be given a chance. However, human involvement should not be diminished. An automated legal system may be able to acquire intelligence, but human intuition and opinion are far more complicated to replicate, and it is for this reason that I believe legal decisions should not be fully automated using algorithms.
