ARTIFICIAL INTELLIGENCE: AN ETHICAL FRAMEWORK
CREATING RESPONSIBLE AI OPERATIONS
As the use of AI becomes ever more pervasive, data scientists and organisations ‘just doing their best’ to make sure they behave ethically and responsibly will find that’s no longer good enough, says Scott Zoldi, Chief Analytics Officer at data analytics company FICO

Artificial intelligence (AI) has become widely used to inform and shape strategies and services across a multitude of industries, from healthcare to retail, and has even played a role in the battle against coronavirus. But mass adoption, and the increasing volume of digitally-generated data, are creating new challenges for businesses and governments. There is, for example, a growing focus on the reasoning behind AI decision-making algorithms and on creating a responsible framework around them.

Decisions made by AI algorithms can appear callous, and sometimes even careless, as the use of AI pushes the decision-making process further away from those the decisions affect. It is not uncommon for organisations to cite data and algorithms as the justification for unpopular decisions, and that concern only deepens when even respected AI leaders make mistakes. Examples can be seen across industries: in 2016, Microsoft’s racist and offensive online chatbot was blamed on AI; in 2018, Amazon’s AI recruitment system was found to ignore female applicants; and in 2019 a Tesla crashed in Autopilot mode after mistaking a truck for a street sign.
Alongside the potential for incorrect decision-making, there is also the risk of AI bias. To help prevent these issues, new regulations have been created to protect consumer rights and monitor developments in AI.
THE PILLARS OF AI
Organisations across the world must enforce responsible AI standards now. To do so, they need to formally document and enforce their model development and operational standards, setting them in the context of the three pillars of responsible AI: explainability, accountability and ethics.

EXPLAINABILITY: Organisations relying on an AI decision-making system must ensure they have an algorithmic construct that captures and communicates the relationships between the decision variables used to arrive at a final business decision. With this information to hand, businesses can explain a model’s decision – for example, a transaction flagged as high-risk for fraud because of a high volume of transactions involving new accounts in Kazakhstan. Human analysts can then use that explanation to investigate the implications and accuracy of the decision (a sketch of how such ‘reason codes’ can be surfaced follows the next pillar).

ACCOUNTABILITY: AI models must be properly built, with attention paid to the limitations of machine learning and careful thought applied to the algorithms used. It is essential for the technology to be transparent and compliant. Thoughtful model development ensures that decisions make sense – for example, that scores move appropriately as the risk signalled by input features increases. Beyond explainable AI, there is the concept of humble AI: ensuring the model is used only on data examples similar to the data on which it was trained. Where that is not the case, the model may not be trustworthy and one should downgrade to an alternative algorithm.
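To make the two pillars above concrete, here is a minimal Python sketch combining both ideas: simple reason codes extracted from a linear risk score (explainability), and a humble-AI guard that downgrades to a conservative fallback rule when an input looks unlike the training data (accountability). The feature names, weights, statistics and thresholds are all hypothetical, for illustration only; this is not FICO’s implementation.

```python
# Hedged sketch: reason codes from a linear fraud score, plus a humble-AI
# guard that falls back to a simple rule on out-of-distribution inputs.
# All names, weights and thresholds below are illustrative assumptions.
import numpy as np

FEATURES = ["new_account_txn_volume", "cross_border_ratio", "avg_txn_amount"]
WEIGHTS = np.array([2.1, 1.4, 0.6])        # hypothetical learned weights
BIAS = -3.0
TRAIN_MEAN = np.array([0.2, 0.1, 55.0])    # hypothetical training statistics
TRAIN_STD = np.array([0.15, 0.08, 40.0])
OOD_Z_LIMIT = 4.0                          # beyond this, be 'humble'

def score_with_reasons(x):
    """Return (fraud probability, reason codes) for one transaction."""
    z = (x - TRAIN_MEAN) / TRAIN_STD       # standardise the inputs
    contributions = WEIGHTS * z            # per-feature score contribution
    prob = 1.0 / (1.0 + np.exp(-(contributions.sum() + BIAS)))
    # Reason codes: the features pushing the score up the most.
    top = np.argsort(contributions)[::-1][:2]
    reasons = [FEATURES[i] for i in top if contributions[i] > 0]
    return prob, reasons

def humble_score(x):
    """Use the model only on inputs similar to the training data;
    otherwise downgrade to a simple, conservative fallback rule."""
    z = np.abs((x - TRAIN_MEAN) / TRAIN_STD)
    if np.any(z > OOD_Z_LIMIT):            # input unlike the training data
        fallback = 0.9 if x[0] > 0.8 else 0.5
        return fallback, ["out_of_distribution_fallback"]
    return score_with_reasons(x)

prob, reasons = humble_score(np.array([0.5, 0.2, 80.0]))
print(f"fraud probability={prob:.3f}, reasons={reasons}")
```

For the in-distribution example shown, the guard passes the input through to the model, which reports the two features that pushed the score up the most as its reason codes; an input far outside the training statistics would instead trigger the fallback rule.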
ETHICS: Adding to the requirements of explainability and accountability, ethical models must be tested and any discrimination removed. Explainable machine-learning architectures allow extraction of the non-linear relationships that typically hide the inner workings of most machine-learning models. These relationships need to be tested because they are learned from the data on which the model was trained, and that data is all too often implicitly full of societal biases. Bias and discrimination must be tested for, and removed, when an ethical model is built, and continually re-evaluated while the model is in operation.
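As a hedged sketch of what such testing can look like in practice, the snippet below computes the widely used adverse-impact ratio (each group’s favourable-outcome rate relative to a reference group) and checks it against the common four-fifths rule. The decisions, group labels and 0.8 threshold are hypothetical; a production test would track many fairness metrics and re-run continually while the model operates.

```python
# Hedged sketch of one common bias check: the adverse-impact ratio, i.e.
# each group's approval rate relative to a reference group's rate.
# Decisions, group labels and the 0.8 threshold are illustrative only.
import numpy as np

def adverse_impact_ratio(approved, group, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    approved = np.asarray(approved, dtype=bool)
    group = np.asarray(group)
    ref_rate = approved[group == reference_group].mean()
    return {g: approved[group == g].mean() / ref_rate
            for g in np.unique(group)}

# Hypothetical decisions produced by a model under review.
approved = [1, 1, 0, 1, 0, 0,   # group A: 3 of 6 approved
            1, 0, 0, 1, 0, 0]   # group B: 2 of 6 approved
group = ["A"] * 6 + ["B"] * 6

for g, ratio in adverse_impact_ratio(approved, group, "A").items():
    flag = "OK" if ratio >= 0.8 else "INVESTIGATE"   # four-fifths rule
    print(f"group {g}: impact ratio {ratio:.2f} -> {flag}")
```

Here group B’s approval rate is two-thirds of group A’s, falling below the four-fifths threshold, so the check flags the model for investigation rather than declaring it biased outright: the ratio is a screening signal, not a verdict.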
Once these three measures are introduced, organisations can be confident that the decisions they make are sound digital choices, and that all their models follow this framework.

MEASURES TO ENFORCE RESPONSIBLE AI
There is no question that building responsible AI models takes time and painstaking work. But such meticulous scrutiny is a necessary, ongoing process to ensure AI is used responsibly, and it must include regulation, audit and advocacy. Regulations play an important role in