
Technology | Artificial Intelligence

Another approach consists of eliciting the required knowledge from a human expert. This method obviously reaches its limits in overly complex applications, where huge numbers of rules would be required to adequately represent the expert's knowledge (which turned out to be the main reason for the failure of expert systems in the 1980s). For moderately complex application scenarios, however, doing so can be extremely valuable. Let's get back to the example of information extraction from contracts in the English language.

Such a contract typically has a strictly defined structure. It can be decomposed into a number of clauses, such as the termination clause, the indemnification clause, etc. Each clause comprises a relatively small set of information items that could be of interest for contract analytics. With this in mind, it is relatively easy for a legal expert to come up with a complete list of contract clauses and to enumerate the information snippets they contain. How does this help in our attempt to train an information extraction system for contracts using a non-deep learning approach?

First of all, the system has to be told where to find the various clauses in a set of sample contracts. This can easily be done by marking the respective portions of text and labelling them with the names of the clauses they contain. On this basis, we can train a classifier model that – when reading through a previously unseen contract – recognises what type of contract clause can be found in a certain text section. With a 'conventional' (i.e. not DL-based) algorithm, a small number of examples should be sufficient to generate an accurate classification model that is able to partition the complete contract text into the various clauses it contains.

[Image: ANALYSING DATA – Humans play a crucial role in making sense of large data collections]

Once a clause is identified within a certain contract of the training data, a human can identify and label the interesting information items contained within it. Since the text portion of a single clause is relatively small, only a few examples are required to come up with an extraction model for the items in one particular type of clause. Depending on the linguistic complexity and variability of the formulations used, this model can be generated using ML, by writing extraction rules that make use of keywords, or – in exceptionally complicated situations – by applying natural language processing algorithms that dig deep into the syntactic structure of each sentence. In any case, the resulting model can be expected to be fairly precise and robust to variations in the respective wording. This is mainly because the search space consists only of the text of one clause, not a few hundred pages of a complex contract. And it is easy to see that identifying one particular date indication (e.g. the one characterising the starting date of the contract) in a short text is much easier than sorting out which of the dozens of dates listed in the complete text is actually the one we are looking for.

The useful expert
To summarise, expert knowledge about the clause structure of contracts can be used to accurately identify the position of a certain clause and then extract the relevant information items within it. The expert knowledge helps to significantly reduce the amount of training data – and thus the human effort required to label it – and facilitates the actual information extraction by restricting the search space to the relatively short text of the current clause.

What does this tell us about the use of artificial intelligence in general and machine learning in particular? First of all, there exists no standard algorithm, or even paradigm, that serves all purposes equally well. In application scenarios where huge amounts of data are easily available and can be labelled without significant effort from human experts, a purely data-driven approach might make perfect sense. Examples include image collections, such as
YFCC100M, a collection of almost 100 million images.7 The majority of these images have been labelled by their respective photographers, which means the effort of providing additional information about the training data could be distributed among a similarly huge number of human experts. In the Go example mentioned above, the system trained itself by simply playing games against itself; labelling could then be fully automated, as the system simply marked the data representing each match as won or lost.

When no crowd-sourcing or automatic labelling is feasible, making use of expert knowledge as described above can be a good alternative. Doing so not only significantly reduces the amount of training data required and the effort of labelling it; it typically also helps create much more transparent – and thus comprehensible – models that can be easily maintained and repaired in cases of insufficient performance, for instance due to changes in the domain. Deep learning models, on the other hand, must currently be considered black boxes whose internal behaviour cannot be easily explained – making them explainable is the goal of ongoing research.

The ultimate lesson from these considerations is the reassuring observation that humans will continue to play a crucial role for the foreseeable future. AI systems still need expert knowledge to make sense of data collections. Thus, combining the respective strengths of both humans and AI systems will form the basis for many successful applications that currently exceed human capabilities alone.
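The contract-extraction workflow described above – first classify each text section by clause type with a conventional learning algorithm, then run a simple extraction rule over the short text of the identified clause – can be sketched in a few lines of Python. Everything here is illustrative: the mini training set, the clause names, and the choice of a from-scratch Naive Bayes classifier with a regular expression as the extraction rule are assumptions for the sketch, not the method of any particular product.

```python
import math
import re
from collections import Counter, defaultdict

# Invented mini training set: clause texts labelled with their clause type.
TRAIN = [
    ("Either party may terminate this agreement with thirty days notice.", "termination"),
    ("This agreement terminates automatically upon breach of its terms.", "termination"),
    ("The agreement may be terminated by written notice to the other party.", "termination"),
    ("The supplier shall indemnify the customer against all claims.", "indemnification"),
    ("Each party agrees to indemnify and hold harmless the other party.", "indemnification"),
]

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

# Train a tiny multinomial Naive Bayes model (add-one smoothing) --
# a stand-in for any 'conventional', non-deep-learning classifier.
word_counts = defaultdict(Counter)
label_counts = Counter()
for text, label in TRAIN:
    label_counts[label] += 1
    word_counts[label].update(tokens(text))
vocab_size = len({w for counts in word_counts.values() for w in counts})

def classify(text):
    """Return the most probable clause type for a text section."""
    def log_prob(label):
        total = sum(word_counts[label].values())
        lp = math.log(label_counts[label] / sum(label_counts.values()))
        for w in tokens(text):
            lp += math.log((word_counts[label][w] + 1) / (total + vocab_size))
        return lp
    return max(label_counts, key=log_prob)

# A section of a previously unseen contract is assigned its clause type...
clause = ("The customer may terminate the contract upon ninety days "
          "written notice, effective 1 March 2020.")
print(classify(clause))                       # termination

# ...and the extraction rule then searches only this short clause --
# not the complete contract -- for the relevant date.
match = re.search(r"\d{1,2}\s+[A-Z][a-z]+\s+\d{4}", clause)
print(match.group())                          # 1 March 2020
```

The same pattern scales to real contracts by swapping in a stronger classifier (for instance one trained on TF-IDF features) and richer per-clause extraction rules.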
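The fully automated labelling described for the self-play setting can be sketched as well. The game below is a deliberately trivial stand-in (each player draws a random number, and the higher draw wins); the point is only that every generated record labels itself from the match outcome, with no human effort involved.

```python
import random

def self_play_game(rng):
    """Play one trivial 'game' and return an automatically labelled example."""
    a, b = rng.random(), rng.random()
    state = (round(a, 3), round(b, 3))   # the data describing the match
    label = "won" if a > b else "lost"   # the outcome supplies the label
    return state, label

rng = random.Random(42)
dataset = [self_play_game(rng) for _ in range(1000)]
print(len(dataset))   # 1000 labelled examples, zero labelling effort
```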


Ethical Boardroom Spring 2019