
INTELLIGENT RISK
knowledge for the PRMIA community

June 2018 ©2018 - All Rights Reserved Professional Risk Managers’ International Association


PROFESSIONAL RISK MANAGERS’ INTERNATIONAL ASSOCIATION

CONTENT EDITORS

Steve Lindo
Principal, SRL Advisory Services and Lecturer at Columbia University

Dr. David Veen
Director, School of Business, Hallmark University

Nagaraja Kumar Deevi
Managing Partner | Senior Advisor
DEEVI | Advisory Services | Research Studies
Finance | Risk | Regulations | Analytics

INSIDE THIS ISSUE

003  A letter from PRMIA leadership
005  Editor's introduction
006  Using automated machine learning to minimize model risk - by Seph Mard
010  PRMIA member profile - by Adam Lindquist
012  Data privacy & protection: challenges & opportunities in the era of cyber crime - by Vivek Seth
016  EMEA Risk Leader Summit - by Kathryn Kerle and Cosimo Pacciani
018  Intelligent loan portfolio modeling - by Alex Glushkovsky
022  Using the Johari Window technique for liquidity risk management - by Jason Rodrigues
028  Machine learning and AI in risk management - by Stuart Kozola
034  The fall of Lehman Brothers - by Xiao Wang, Xiaofan Hou, Junchao Liao & Tuershunjiang Ahemaitijiang
040  Portfolio allocation incorporating risk policies via deep learning - by Patrick Toolis
045  Managing business traveler cyber risk - by Moh Cissé
051  Academics probe RegTech ERM solutions - by Peter Hughes
054  Risk platform transformation in the digital era - by Peter Plochan
058  Spreadsheet risk case studies: why should the CRO care? - by Craig Hattabaugh
061  Bringing technology trends into banks - by Alex Marinov
064  Calendar of events

SPECIAL THANKS

Thanks to our sponsors, the exclusive content of Intelligent Risk is freely distributed worldwide. If you would like more information about sponsorship opportunities, contact sponsorship@prmia.org.

FIND US ON

prmia.org/irisk


@prmia



letter from PRMIA leadership

Justin M. McCarthy Chair, PRMIA

Kraig Conrad CEO, PRMIA

Welcome to your Summer read! We hope you have the opportunity to enjoy time off, reflect on your career, and consider fondly the great work we can do together on behalf of our community.

new vice chair

We are honored to have Robert McWilliam, Managing Director, ING Bank NV, move into the Board Vice Chair position. Robert was duly elected by Chapter Regional Directors to serve this important role in the community. We thank Ken Yoo, Chief Risk and Compliance Officer at LeasePlan USA, for his years of service as Vice Chair. He will continue to serve on the Board with a focus on expanding events to increase relevant engagement in service of our mission.

member recognition

We recently celebrated the commitment and service of three long-standing volunteers, Mark Abbott, Dan Rodriguez, and Ken Radigan. Their dedication continues to make PRMIA strong. Mark, Dan, and Ken, thank you for all that you do!



community member survey

You will soon receive a survey to assess the value you receive from the many PRMIA offerings. Feedback is confidential and provides powerful information to the Board, staff, and our wonderful volunteers to build your future PRMIA.

fall events

Your peers welcome you at the many events designed to be engaging and relevant to your career growth and that of your teams. We hope you can join us for these featured events: Banking Disrupted, Canadian Risk Forum, and EMEA Risk Leader Summit. Visit prmia.org/events for the full calendar.

make an impact on our profession

As we do with each edition of Intelligent Risk, we invite you to join us on our journey to serve the global risk profession. Many find their volunteer experience personally and professionally rewarding as their work helps define the future of our profession. Please join your peers in shaping the PRMIA future by visiting prmia.org/volunteer.

Justin McCarthy Chair, PRMIA


Kraig Conrad Chief Executive Officer, PRMIA


editor's introduction

Steve Lindo
Editor, PRMIA

Dr. David Veen
Editor, PRMIA

Nagaraja Kumar Deevi
Editor, PRMIA

welcome to the summer Intelligent Risk issue of 2018

The theme of Risk Technology was chosen for this issue of Intelligent Risk in recognition of the increasing use of advanced technologies in financial services and other industries, which presents challenges to traditional risk management practices as well as opportunities to adopt more technically sophisticated methods. The articles submitted by PRMIA members for this issue cover a wide range of topics, which can be grouped under four main categories: automation of banking operations, risk process transformation, advanced risk/return modeling, and cyber risk/data protection.

Individually, each article presents a unique perspective, for example a new metric for enterprise risk measurement, machine learning for model development and validation, neural networks for portfolio optimization, and cyber protection for mobile communications. Together, they present a set of views which is diverse, but only addresses a sliver of the changes, innovations and challenges which comprise the future of Risk Technology. Consequently, we fully expect to return to this theme in future issues. To this issue's authors, we express our appreciation for their thoughtful contributions.



using automated machine learning to minimize model risk

by Seph Mard

In recent years, the big data revolution has expanded the integration of predictive models into more and more business processes. This provides a great amount of benefit, but it also exposes institutions to greater model risk and consequent exposure to operational losses. When business decisions are made based on bad models, the consequences can be severe. The stakes in managing model risk are at an all-time high, but luckily automated machine learning provides an effective way to reduce model risk.

In 2011, the Federal Reserve Board (FRB) and the Office of the Comptroller of the Currency (OCC) issued a joint regulation specifically targeting Model Risk Management (SR 11-7 and OCC Bulletin 2011-12, respectively). This regulation laid the foundation for assessing model risk for financial institutions around the world, but was initially targeted towards Systemically Important Financial Institutions (SIFIs), which were deemed by the government to be "too big to fail" during the Great Recession. Now, regulation is being targeted towards much smaller banks in the U.S. The Federal Deposit Insurance Corporation (FDIC) recently announced its adoption of the Supervisory Guidance on Model Risk Management previously outlined by the FRB and OCC. The FDIC's action was announced through a Financial Institution Letter, FIL-22-2017. The new regulation greatly reduces the minimum asset threshold for compliance from $50 billion to $1 billion. This will require large capital investments from regional and community banks to ensure alignment to regulatory expectations, something on which the SIFIs have a head start.

At the heart of this regulation is the notion of "model risk." You might be wondering what model risk is, and how it can be mitigated. This is a complicated question, but before we dive into model risk, another, simpler question must be answered first: what is a model? The regulators have provided a universal definition that has been adopted across the financial industry. They define a model to be "a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates."



Therefore, if a process includes inputs, calculations, and outputs, then it falls under the regulatory classification of a model. This is a broad definition, but since the intent was to mitigate model risk, a broad definition of a model was established to maximize the impact of the regulation. If there is any doubt about the classification of a process, regulators wanted to encourage banks to err on the side of "model."

With the definition of a model now in place, the regulation next defined model risk as "the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports." In other words, model risk can lead to tangible losses for the bank and its shareholders. Regardless of where a bank is using a model in its enterprise, model risk primarily occurs for two reasons:

1. A model may have been built as it was intended, but could have fundamental errors and produce inaccurate outputs when compared to its design objective and intended use; and,
2. A model may be used incorrectly or inappropriately, or its limitations or assumptions may not be fully understood.

The need for an effective Model Risk Management (MRM) framework can be demonstrated with countless case studies of recent MRM failures. For example, Long-Term Capital Management was a large hedge fund led by Nobel laureates in economics and world-class traders, but it ultimately failed due to unmitigated model risk. In another recent example, a large global bank's misuse of a model caused billions of dollars in trading losses. The details of these examples are often the topic of business school case studies and debate, but there is no arguing that model risk is very real and must be managed. But how? The FDIC's new regulation can be broken down into three main components used to manage model risk:

• Model Development, Implementation, and Use - The initial responsibility to manage model risk is on those developing, implementing, and using the models. Model development relies heavily on the experience and judgment of developers, and model risk management should include disciplined model development and implementation processes that align with the model governance and control policies.

• Model Validation - Prior to the use of a model (i.e., production deployment), it must be reviewed by an independent group - Model Validation. Model validation is the set of processes and activities intended to independently verify that models are performing as expected, in line with their design objectives and business uses. The model validation process is intended to provide an effective challenge to each model's development, implementation, and use, and is crucial to effectively identify and manage model risk.

• Model Governance, Policies, and Controls - Strong governance provides explicit support and structure to risk management functions through policies defining relevant risk management activities, procedures that implement those policies, allocation of resources, and mechanisms for testing that policies and procedures are being carried out as specified. This includes tracking the status of each model on an inventory across the entire enterprise (a minimal sketch of such an inventory record follows this list).
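To make the model-inventory idea in the last bullet concrete, below is a minimal, hypothetical sketch (not drawn from the guidance itself) of what a single record in an enterprise-wide model inventory might capture; the field names and the one-year revalidation rule are illustrative assumptions only.

```python
# Hypothetical sketch of one entry in an enterprise-wide model inventory.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ModelInventoryRecord:
    model_id: str                 # unique identifier across the enterprise
    name: str                     # business-facing model name
    owner: str                    # accountable developer or business unit
    tier: str                     # materiality tier driving validation depth
    intended_use: str             # documented design objective and use
    validation_status: str        # e.g. "pending", "approved", "approved with findings"
    last_validated: date          # date of the most recent independent validation
    open_findings: List[str] = field(default_factory=list)  # outstanding limitations or issues

    def is_due_for_revalidation(self, as_of: date, cycle_years: int = 1) -> bool:
        """Flag models whose periodic revalidation cycle has lapsed (illustrative rule)."""
        return (as_of - self.last_validated).days > 365 * cycle_years

# Example usage
record = ModelInventoryRecord(
    model_id="CR-001", name="Retail PD scorecard", owner="Credit Risk Analytics",
    tier="High", intended_use="Origination decisioning", validation_status="approved",
    last_validated=date(2017, 3, 1),
)
print(record.is_due_for_revalidation(as_of=date(2018, 6, 1)))  # True
```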



Initial alignment to these new regulatory requirements required SIFI banks to invest millions of dollars to build new processes and teams, and now that same burden lies with community and regional banks. It is impossible to over-emphasize the need for an institution to have sufficient model governance, policies, and controls. Regardless of the technology at the disposal of the model developers or model validators, there is no replacement for a sound model governance process. But isn't there a more efficient way to use technology to reduce model risk, while increasing the transparency and auditability of the model development, implementation, and use process? I feel that the answer is an unequivocal "YES!"

Traditional model development methods are time-consuming, tedious, and subject to user error and bias. Instead of manually coding steps (such as variable selection, data partitioning, model performance testing, model tuning and so on), best practices can be automated through the use of automated machine learning. Automated machine learning allows for easy replication of the model development process, which gives model validators more time to independently assess and review the model and its potential limitations, and ultimately drives value for the validation process.

Automated machine learning is a cutting-edge approach by which Artificial Intelligence (AI) is used to select the best machine learning algorithms for making predictions based on historical data. True automated machine learning will automate the complete model development workflow in a transparent and replicable way, starting from initial feature engineering and preprocessing, discovering the most suitable workflows and algorithms with automated insights, and ultimately allowing the user to easily deploy the model into production with minimal technical overhead. This is no doubt ambitious, but companies like DataRobot are driving this vision across the industry and making it a reality.

It is obvious that automated machine learning is a powerful modeling and analytics technology; but it is more than just powerful. It is a game changer. The new field of automated machine learning offers a much stronger framework for model development and validation than traditional manual efforts, while more closely aligning to the ever-increasing regulatory requirements and vastly reducing model risk. It delivers the tools to optimize and accelerate model risk management, making it easier for banks of all sizes to gain value from a robust model risk management framework.
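As a rough illustration of what automating these steps can look like in practice, the sketch below uses generic scikit-learn tooling (not the author's or any vendor's implementation) to automate data partitioning, algorithm selection, and hyperparameter tuning with cross-validation on synthetic data, then reports out-of-sample performance for the selected model.

```python
# Illustrative sketch only: a simplified, open-source stand-in for automating
# data partitioning, algorithm selection, and tuning with cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a modeling data set
X, y = make_classification(n_samples=5000, n_features=20, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Candidate algorithms and hyperparameters are searched automatically, not hand-coded
pipeline = Pipeline([("scale", StandardScaler()), ("model", LogisticRegression(max_iter=1000))])
search_space = [
    {"model": [LogisticRegression(max_iter=1000)], "model__C": [0.01, 0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier(random_state=0)], "model__n_estimators": [100, 300],
     "model__max_depth": [3, 5, None]},
]
search = GridSearchCV(pipeline, search_space, cv=5, scoring="roc_auc", n_jobs=-1)
search.fit(X_train, y_train)

print("Selected model:", search.best_params_)
print("Out-of-sample AUC:", roc_auc_score(y_test, search.predict_proba(X_test)[:, 1]))
```

Because the partitioning, candidate models, and scoring rule are all declared up front, the whole search is replicable, which is part of what makes an automated workflow easier to validate.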

author

Seph Mard

As the head of Model Risk Management at DataRobot, Seph Mard is responsible for model risk management, model validation, and model governance products, as well as services. Seph is leading the initiative to bring AI-driven solutions into the model risk management industry by leveraging DataRobot's superior automated machine learning technology and product offering. Seph has more than 10 years of experience working across different banking and risk management teams and organizations. He started his career as a behavioral economist with a focus on modeling microeconomic choices under uncertainty and risk, then transitioned into the financial services industry. Seph is a subject matter expert in model risk management and model validation.



Access Global Insights

A PRMIA Sustaining membership includes unlimited digital access to The Wall Street Journal for one year. Join now for insights from the world's leading source of global business and financial news.

Become a member at www.PRMIA.org

© 2018 Dow Jones & Company, Inc. All rights reserved.


PRMIA member profile - Israel Renteria Lopez

by Adam Lindquist, Director of Membership, PRMIA

Adam: How did you decide to choose a career in risk management?

Israel: I'd say it was an alignment of serendipitous factors with my own interests and professional attributes. Whatever the reasons, I find the field to be fascinating, ever-evolving, and very challenging.

Adam: You hold both a CFA® Charter and a PRM™ Designation. How do they complement each other?

Israel: The CFA Charter provides a great lens and framework through which one can develop working knowledge of the finance and investment industries; it is both broad and deep in its curriculum. The PRM Designation is slightly less broad (in the sense that it focuses mainly on the financial sector), but it is just as deep as the CFA Program, providing relevant and applicable insight into the risk management profession. As a risk management consultant focused on the financial services industry, I have found that both certifications function as a springboard for engaging in current, relevant, and challenging topics for my clients.

Adam: Why did you choose to pursue the PRM?

Israel: Having no formal training in risk management (only work experience), I found the spectrum of topics covered by the PRM Program to be not only relevant, but foundational for anyone seeking to advance their career in risk management.

Adam: What do you find are some of the benefits of being a PRMIA member?

Israel: On a practical level, the benefits I've found most useful are the thought leadership webinars and the digital access to The Wall Street Journal, which greatly facilitate the never-ending pursuit of staying up-to-date. I also see tremendous potential as the risk management community grows and becomes more mature in the local market (Mexico).

Adam: How useful has having the PRM been in your career development?

Israel: From a foundational standpoint, the PRM designation has been extremely useful in providing me with a knowledge base in all sub-disciplines of risk management. On the other hand, I think that I'm only beginning to scratch the surface of other PRM benefits such as community, networking, and continuous learning.

Adam: How have you applied what you learned?

Israel: Having gone through the rigorous and technical curriculum allowed me to better craft solutions and responses to clients as a financial services consultant. And now, as an investment professional, the risk governance and credit risk frameworks are very helpful for analyzing credit securities.

Adam: How do you see the risk profession changing in the next 5 years?

Israel: The risk profession will certainly continue to grow and mature in the next half decade, especially in the face of all the exciting changes that our organizations, and we as individuals, will face: big data, AI, crypto-assets, ongoing regulatory efforts, and changing market demographics. It's a very exciting time to be in the risk profession, and I think that those involved in an organization such as PRMIA are better poised to handle these waves of change.

interviewee

Israel Renteria Lopez, PRM, CFA
Credit Securities Manager, Scotia Wealth Management



data privacy & protection: challenges & opportunities in the era of cyber crime

by Vivek Seth

Any enterprise, whether public or private, creates or possesses information during its work life-cycle that needs to be protected from unwanted circulation. Such data could be intellectual knowledge or trade secrets that provide competitive advantage; internal information such as payroll details, budget figures, or employees' personal records that needs to be restricted; or customer data, such as client credit history or financial details, that must be protected as part of meeting regulatory obligations. While the reasons for protecting data may vary, organizations face various challenges in safeguarding data pertaining to their customers, employees, and organizational strategy. This task is becoming even more demanding in the current internet-driven era, where cyber crime adds a new dimension to data breach threats. The year 2017 was a testament to this growing cyber risk trend, with data incidents reported on a global scale: ransomware attacks such as 'WannaCry'1 and 'Petya'1; successful theft of millions of customer credentials from Equifax2, Uber3 and other well-known corporations; and data leaked for reasons other than financial gain, such as the Paradise Papers leak4.

In this era of digitalization, organizations across the world are exposed to threats such as network intrusion, denial of service, phishing, and social engineering attacks, where attackers can cause damage remotely without ever physically stepping onto the premises. Key challenges faced by enterprises are:

• Maintaining a robust IT infrastructure: Security vulnerabilities may arise if a firm doesn't upgrade its IT infrastructure in step with evolving security threats, or when it doesn't implement adequate controls while adopting new technologies such as cloud computing. Having a robust security environment is relevant even for non-technology industries, as seen in the Pizza Hut security breach, where credit card details were compromised and customers weren't told about it until two weeks after the intrusion5.

• Time spent detecting and containing data breaches: According to the 'Ponemon 2017 Cost of Data Breach Study'6, performed across 419 companies and 13 countries, it takes an average of 191 days to identify a data breach and an average of 66 days to contain it. In principle, an organization should always stay on alert to detect unusual signs of information security activity, although that is easier said than done, as agile cyber attackers can adapt faster than traditional organizations.

• Insider threats: Employees can prove to be a weak link in corporate data protection, whether deliberately or through oversight. Insider cases continue to torment companies. For example, information relating to more than 500,000 customers was copied and removed inappropriately by an employee of Bupa, a private healthcare firm7. Aside from external threats, enforcing compliance with data protection within an organization is crucial.



• Regulatory obligations: Regulatory reprimands and fines can do grave damage to a company's balance sheet as well as its reputation. This is especially relevant for institutions dealing with customer data, such as US health care providers that are required to comply with HIPAA8 among other legislation. Under the EU's General Data Protection Regulation (GDPR), which is now in effect, organizations with breaches pertaining to EU customer data can be fined up to 4% of annual global turnover or €20 million, whichever is greater9 (a short illustrative calculation follows this list). Clearly, ensuring compliance with regulatory and industry standards is an important organizational responsibility.

• Organized cyber-attackers: The cyber attack arena is no longer confined to amateur programmers seeking intellectual fun, but is now dominated by dedicated hackers motivated by financial gain, economic sabotage, and/or political beliefs. These skilled attackers could be part of an organization, or even a state-sponsored group, as seen in the 'WannaCry' malware attack, for which the US and UK held North Korea responsible10.
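For illustration only, the tiny sketch below works through the "whichever is greater" rule cited above on two hypothetical turnover figures; the function name and inputs are invented for the example.

```python
# Hypothetical illustration of the GDPR upper bound described above:
# up to 4% of annual global turnover or EUR 20 million, whichever is greater.
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    return max(0.04 * annual_global_turnover_eur, 20_000_000.0)

print(max_gdpr_fine(100_000_000))    # EUR 20 million (4% is only 4m, so the floor applies)
print(max_gdpr_fine(1_000_000_000))  # EUR 40 million (4% exceeds the 20m floor)
```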



There is a positive side to the rising trend of cyber attacks and data breaches as well, with key opportunities as follows:

• Favorable circumstances for securing IT infrastructure budget: IT budgeting can at times be challenging, as allocating resources to the business can take priority. With recent cases of data breaches and emerging, stringent data privacy regulations, increased spending on data security upgrades should now be easier to justify. Gartner forecasts that worldwide security spending will reach $96 billion in 2018, up 8% from 201711. Timely IT upgrades are a much better alternative to the potential regulatory fines and customer complaints that could arise in data theft situations.

• A suitable time to enhance business and IT controls: A frequent challenge in convincing the business to adhere to security policies is that controls are perceived as impediments to doing business. The changing landscape of rising cyber threats enables senior management to rationalize the enforcement of efficient, security-focused controls. While designing new controls, the risk community can also review existing controls in areas such as access management, segregation of duties, and network security for efficiencies and enhancements.

• Learning from data breaches across the industry: Data incident attempts and attacks, both inside the organization and across the industry, can be seen as an opportunity to remediate identified security vulnerabilities. Such monitoring should take into account how the organization's patch management, risk assessment and incident detection management compare with industry standards. Keeping an active watch on emerging security trends will help corporations adequately safeguard against future cyber attacks.

• Enhancing customer trust and building new business relationships: Organizations that proactively enhance their data protection frameworks can make a compelling offering to existing and potential customers. Customers who value data privacy and protection will naturally prefer to do business with firms that are well recognized for upholding data security standards.

conclusion

Data privacy and protection play an important part in a corporation's long-term viability in this age of globalization, internet commerce, outsourced agreements, and evolving regulatory requirements. Company data is a valuable asset that requires diligent care and strong security access controls; otherwise it may fall into the hands of malicious attackers, resulting in reputational damage, customer complaints, and regulatory reprimands. The challenges of data privacy and protection should also be seen as an opportunity: organizations that proactively adapt to the changing cyber security landscape can gain a competitive advantage by safeguarding themselves against future attacks, building customers' trust, and attracting new business.



references

1. www.theguardian.com, "WannaCry, Petya, NotPetya: how ransomware hit the big time in 2017", 30 Dec 2017.
2. www.bbc.com, "Massive Equifax data breach hits 143 million", 8 Sep 2017.
3. www.bbc.com, "Uber concealed huge data breach", 22 Nov 2017.
4. www.bbc.com, "Paradise Papers: Everything you need to know about the leak", 10 Nov 2017.
5. www.mirror.co.uk, "Pizza Hut security breach by hackers as company admits credit card details of customers 'may have been compromised'", 16 Oct 2017.
6. https://www.ibm.com/security/data-breach
7. www.bbc.com, "Bupa data breach affects 500,000 insurance customers", 13 July 2017.
8. www.hhs.gov, "Summary of the HIPAA Security Rule".
9. www.eugdpr.org, "GDPR Key Changes".
10. www.bbc.com, "Cyber-attack: US and UK blame North Korea for WannaCry", 19 Dec 2017.
11. www.gartner.com, "Gartner Forecasts Worldwide Security Spending Will Reach $96 Billion in 2018, Up 8 Percent from 2017", 7 Dec 2017.

author

Vivek Seth

Vivek Seth has been working in risk management for over 14 years. Currently working in the Singapore financial sector, he has work experience across Singapore, Australia and India, along with business assignments carried out in Hong Kong and Switzerland. He holds an M.B.A. as well as the PRM™ professional certification. The article presented here represents the author's personal views and not those of his current or previous employers or of any professional bodies he is associated with.



EMEA Risk Leader Summit

by Kathryn Kerle and Cosimo Pacciani

We are delighted to be holding the 3rd edition of the EMEA Risk Leader Summit in London at a very exciting time for the risk profession. The RegTech and FinTech world has made considerable progress, and real innovation - that is, truly new ways of doing things - is in train. As always, there is much to uncover and debate as a wave of new entrants seeks to disintermediate banks. We will gather some of the best thinkers in EMEA to truly understand the digital transformation and where it takes us as risk practitioners and our enterprises. You will enjoy two days of conversations with your practitioner peers about process and workflow automation, third-party risk, risk infrastructure and all of the changes likely to take place. Find out how machine learning and cognitive agents are applied in our industry to leverage key insights from data lakes in the hybrid cloud.

Geopolitical considerations will also feature in our Summit, and our expert panel will provide key insights on rule-based regulation in the age of Brexit and Trump, the side effects of rising borderline-authoritarian regimes in Europe, and the implications of growing geopolitical tensions for cyber security. Managing a global workforce in the age of AI will be considered as we seek how best to deal with the challenge of integrating a more diverse workforce, complemented by new technology, into our risk management practices. We will also try to understand how society is learning to adapt to the pace of technological development, and how consumer patterns and a more horizontal society will have an impact on how we manage the inherent risks.

Blockchain has gained considerable momentum as a new type of infrastructure, and there are various competing approaches to the use of distributed ledger technology, any of which has the potential to transform the infrastructure of our financial system. This technological leap could vastly improve the infrastructure currently used by the global financial system; it also has the potential to expose that system to new risks that are only partially understood.

Rising temperatures and unpredictable weather have raised general awareness of climate change and its potential impact on banks, both directly and as a result of its impact on customers. Fortunately, new approaches to identifying, assessing and managing the attendant risks are emerging, and we in the finance industry have a role to play in helping the world protect the environment.

This is just a short preview, and we are readying many more interesting insights for you on November 14th-15th at the EMEA Risk Leader Summit.



authors

Kathryn Kerle

Kathryn Kerle has been the Head of Risk Reporting at the Royal Bank of Scotland since April 2011. In that capacity, she manages a team responsible for the production of risk reports, including the risk sections of the annual reports of the bank and its major subsidiaries and the Pillar 3 disclosure, as well as internal reporting of risk to senior risk committees. Kathryn joined RBS after spending 13 years with Moody's Investors Service in various management and analytical capacities in London, New York and Singapore. Prior to joining Moody's, Kathryn ran Paradigm, a financial consultancy based in Sydney and focused on credit risk management. She began her banking career at Chase Manhattan Bank in New York. Kathryn holds a Masters degree in Arab Studies from Georgetown University and a Bachelors degree from Bryn Mawr College, both in the US.

Cosimo Pacciani

Cosimo Pacciani has been the Chief Risk Officer of the European Stability Mechanism since 2015, having joined this key European institution as Senior Credit Officer and Deputy Head of Risk in 2014. Previously, he spent 20 years working in the City of London. He spent eleven years at Royal Bank of Scotland, where he was Chief Operating Officer for the Group Credit Risk function and Head of Risk and Compliance for the Asset Protection Scheme, the mechanism established to rescue the British banking system. Earlier at RBS, he was Head of Credit Risk for Corporate and Public Institutions in Europe. Before that, he worked in portfolio management at Credit Suisse First Boston and at the London branch of Monte dei Paschi di Siena, dealing with derivative products and portfolio management. He holds a Ph.D. from the Faculty of Economic Sciences of the University of Siena and a Master's degree from the Faculty of Economics of the University of Florence.



intelligent loan portfolio modeling

by Alex Glushkovsky

shifting analytical practices

Effective and efficient loan portfolio management today means shifting from querying and reporting to more sophisticated analytical practices that can support knowledge discovery and timely controllability on a micro level. To illustrate the very challenging environment that lenders are facing today, let us consider a hypothetical consumer credit portfolio. Figure 1* illustrates some of the possible portfolio vectors of factors, Key Performance Indicators (KPIs), and their interrelationships.

Figure 1. Illustration of Factor/KPI Interdependence Network

Obviously, the relationship network structure and its dependencies are very complicated. To successfully manage this portfolio, it is necessary to focus not just on credit risk aspects but also on revenue, cost and profitability. Customer behavior is primarily sporadic rather than systematic. There are many factors which are latent and cannot be measured. Moreover, the same factor can play the role of an input variable as well as an interim output variable or KPI. For example, changes in regulatory requirements are usually triggered by macroeconomic conditions but, after coming into force, influence macroeconomic results. In addition, most relationships have delayed responses, and their dynamics should be taken into consideration.



Consequently, analytical support for loan portfolio risk management has an essential role in dealing with today's complicated environment and is moving towards intelligent modeling. An illustrative map of analytical evolution is presented in Figure 2*. By contrast, ad hoc analyses that are simple data manipulations and queries provide easily understandable but very limited views, which typically give answers to only very specific questions. With limited, predetermined views, the questions may be biased by some dominant hypothesis. As a result, the outcome may be misleading, failing to take all important inputs into consideration and to capture interdependencies.

Figure 2. Illustration of Analytical Evolution

More systematic reporting and BI implementations commonly address dynamic changes in portfolio KPIs, such as revenue, costs, losses, delinquency amounts, or profits, as well as their potentially underlying factors, such as customer geo-demographics, products, or channels. However, these views are still limited to predetermined dimensional cubes and, therefore, usually cannot support analytically proven root-cause analysis, simulations, "what-if" scenario generation and, most importantly, analytically driven guidance.



traditional vs advanced modeling techniques

Advanced analytics includes a wide spectrum of techniques that can support portfolio management. The top ones today are exploratory and predictive modeling, simulation, and constraint optimization. Modeling allows focus on discovering important predictive factors and their interactions. It is a very powerful tool in root-cause analysis for understanding changes in portfolio KPIs. Modeling also establishes a platform for subsequent simulation and optimization. The former can be used for running accelerated, inexpensive, and non-disturbing virtual experiments, while the latter finds the optimal actions considering the business objectives and existing constraints.

Traditional modeling techniques, such as regression models or decision trees, support targeted segmentations, predictions, or root-cause analysis. Regression models can handle linear relationships, but interactions or non-linear effects are usually problematic considering real-world cases of hundreds or even thousands of potentially predictive variables. Both regression models and decision trees are quite business-transparent and can even incorporate business knowledge by enforcing some attributes. For example, a monotonic dependency with a known direction of impact can be constrained while training regression models.

Unsupervised algorithms, such as clustering, principal component analysis, or more sophisticated autoencoders, can be useful in addressing potentially predictive portfolio factors with respect to their similarity or information value, regardless of the business objectives. They can be used as an interim exploratory step that significantly reduces dimensionality. Furthermore, clustering can support subsequent stochastic simulations by defining distributions of variables of interest within each identified cluster (a small sketch of this clustering-then-simulation idea follows this section).

Modern modeling algorithms known as machine learning, especially deep learning, provide unprecedented predictive accuracy of the trained models while preventing overfitting. However, the challenge is that they are less transparent and usually cannot be applied directly for input optimization. Classical modeling techniques assume independence of predictive variables. Observing the portfolio factors and KPIs in Figure 1, it should be noted that there is a very complicated structure of relationships, and that it is not the traditional "multi-input -> output" modeling architecture. Therefore, intelligent portfolio modeling has to assume an interrelated system of models. The latter is a very challenging analytical problem, but building a network of such models allows for simulation and finding optimal solutions.

Machine learning algorithms are very complicated, highly customized systems, and they cannot practically be presented as algebraic equations or simple "if-then" rules, forcing simulation to play a role. Today, stochastic simulation techniques are shifting from simple random generators to applications with the embedded ability to introduce cross-variable correlations, Markov chains, and agent-based modeling, and allow for dynamic simulations. An emerging trend will probably be merging machine learning capabilities with simulation techniques. The results of such combined machine learning and simulation techniques can be plugged in to find solutions to constraint optimizations.
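A minimal sketch of the clustering-then-simulation idea mentioned above, assuming synthetic portfolio factors and Gaussian within-cluster distributions; it is an illustration of the general approach, not the author's implementation.

```python
# Illustrative sketch: cluster portfolio factors, then simulate new observations
# from a simple distribution fitted within each cluster (Gaussian assumption).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for two portfolio factors, e.g. utilization and payment rate
factors = np.column_stack([
    rng.beta(2, 5, size=10_000),       # utilization-like factor
    rng.normal(0.3, 0.1, size=10_000)  # payment-rate-like factor
])

# Step 1: discover segments in factor space
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(factors)

# Step 2: characterize each segment and simulate new observations from it
simulated = []
for k in range(kmeans.n_clusters):
    segment = factors[kmeans.labels_ == k]
    mean, cov = segment.mean(axis=0), np.cov(segment, rowvar=False)
    simulated.append(rng.multivariate_normal(mean, cov, size=len(segment)))
simulated = np.vstack(simulated)
print(simulated.shape)  # same number of simulated observations as the original portfolio
```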



data and implementation

All the analytical techniques mentioned above are useless without data. The successful implementation of intelligent portfolio modeling requires:

1. A wide but relevant spectrum of data sources, such as customer and product info, transactional records, and inputs from third parties, all at the most granular level
2. Long coverage of time-related data, including through-the-cycle macroeconomic changes
3. Substantial variability of factor levels and their combinations (interactions), meaning that the business "experimented" sufficiently for learning.

Intelligence is about knowledge and adequate actions. In a highly competitive and sophisticated business environment, analytics helps with gathering knowledge and recommending optimal actions. In summary, intelligent portfolio modeling might be presented as a simple "equation" that is still quite challenging to solve:

Intelligent Portfolio Modeling = Discovery + Simulation + Optimization + Monitoring

It means (1) discovering important factors, their interactions and impacts on portfolio business KPIs, (2) stochastic simulation of factors of interest for given conditions, (3) ultimately finding optimal actionable solutions for the defined objective function within applicable constraints (a toy optimization sketch follows below), followed by (4) systematic monitoring of results.

* Figures are for illustration purposes only and assume an arbitrary list of objects, relationships, and locations
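As a toy illustration of step (3), the sketch below solves a constrained allocation problem with a generic linear programming routine; the model-predicted profits, budget, and caps are invented for the example and stand in for outputs of the discovery and simulation steps.

```python
# Illustrative sketch only: a toy constrained optimization in the spirit of step (3).
# Assumptions: per-account incremental profit per dollar of credit-line increase comes
# from some upstream model (here random numbers); the only constraints are a total
# budget and per-account caps.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_accounts = 1000
profit_per_dollar = rng.normal(0.02, 0.05, size=n_accounts)  # model-predicted, per account
per_account_cap = 5_000.0
total_budget = 1_000_000.0

# linprog minimizes, so negate the objective to maximize expected incremental profit
res = linprog(
    c=-profit_per_dollar,
    A_ub=np.ones((1, n_accounts)), b_ub=[total_budget],  # total increase <= budget
    bounds=[(0.0, per_account_cap)] * n_accounts,         # non-negative, capped increases
    method="highs",
)
print("Expected incremental profit:", -res.fun)
print("Accounts receiving an increase:", int((res.x > 1e-6).sum()))
```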

author

Alex Glushkovsky

Dr. Alex Glushkovsky is a Principal Data Scientist at BMO Financial Group. He has over 30 years of diverse industrial, consulting and academic experience. Alex holds a PhD in mathematical modeling and optimization of technological processes and an Honours MSEE. He is a Fellow Member of The Royal Statistical Society and a Senior Member of the American Society for Quality. Alex holds the CRE and CQE certifications from ASQ, and the PRM™ professional certification. He has been recognized for outstanding instruction of the Economics for Managers course, Ellis MBA, NYIT. Alex has published or presented over 30 research papers on statistical analysis and analytical management in international publications. This article represents the views of the author and does not necessarily reflect the views of the BMO Financial Group.



using the Johari Window technique for liquidity risk management

by Jason Rodrigues

unknown unknowns

Complex systems such as the global financial system, and their behaviour, have "unknown unknowns," which result in an unpredictable path from their current state to a better state. However, patterns emerge over time, allowing such paths to be rationalized in retrospect. Early Warning Indicators (EWIs) can be constructed to detect what can seemingly emerge from thin air and cluster into potential systemic risk. Taking learnings from real-world catastrophes such as the 2011 Japanese tsunami and the financial crash of 2008-09, these unknowns can be seen to form a pattern, but only in hindsight.

The Johari Window* technique, an established psychology framework, is well positioned to broaden our current knowledge of knowns and unknowns. Applying the Johari Window technique may be the answer to detecting unknowns and acquiring hidden insights that have so far gone undiscovered.



* Developed by Joseph Luft and Harry Ingham in the 1950s, the 'Johari Window' technique helps illustrate and improve self-awareness and mutual understanding between individuals within a group. Usage of this 'disclosure/feedback model for self-awareness' is especially relevant in complex systems, due to its emphasis on behavior, co-operation, and inter-group development in hyper-connected systems and organizations. Johari Window quadrants:

• Arena – What is known to self and others
• Blind Spot – What is unknown about self, but which others know
• Façade – Knowledge of self which is hidden from others
• Unknown Unknowns – An exploratory area unknown to anyone

the japanese tsunami

11th March 2011 was a sad day in the history of Japan. A tectonic plate shift caused the deaths of over 25,000 people (7,500 are still declared missing since 2011). A financial loss of over US$199 billion was declared by the government. The overall loss of about US$235 billion makes this one of the worst-ever catastrophes. Were there relevant signals which, if caught, and the relationships between them found ahead of time, would have helped reduce the impact of this devastation?

Time: 14:36 Japanese local time: Tremendous energy was released along the 500 km fault line off the Pacific coast of Tohoku due to a tectonic plate shift. The first tremors of a powerful earthquake were felt in Tokyo. The tremors lasted over 90 seconds. Initial estimates placed the earthquake's power at over 7.5 on the Richter scale. Analysis: this was in fact the first wave, called P (Primary Wave) in earthquake parlance. Travelling at a rate of 8 km/sec through the earth's crust, it took just 90 seconds for the waves to travel from the epicenter to Tokyo.

Time: About 14:44 Japanese local time: The second set of powerful waves, called S (Secondary Wave), travelled through the earth at a rate of about 4 km/sec to reach the Japanese coast in about 9 minutes.



More deadly than the P wave, this wave shook the earth for over 5 minutes, increasing the earthquake rating to over 8.5 on the Richter scale.

The Phenomenon

The stress released by the tectonic shift was equivalent to the release of over 200 million atomic bombs of the size dropped on Hiroshima and Nagasaki. Since this was in the middle of the vast ocean, the powerful energy displaced only a few meters of water at the fault line. However, the energy released was so sudden and powerful that it created giant ripples that travelled across the surface to the coasts. The waves gathered momentum as they progressed; at their peak, they travelled at the speed of a jumbo jet (about 600 km/hr). The waves gradually rose in amplitude as they travelled toward the shore, and the shallow waters at the coast made them rise to greater heights as they approached. In Tokyo, liquefaction, a phenomenon of earth breaking up and oozing water, was observed in open areas. It took close to one hour before the first killer waves made lethal impact on lives and livelihoods along the Japanese coast. The earthquake was finally declared a 9 on the Richter scale. Finally, "Murphy" showed no remorse, with the Fukushima Daiichi nuclear disaster: tsunami waters doused the backup battery supply, and the failing batteries eventually resulted in overheating of the reactors.

In hindsight, were there sufficient traces to identify signals for early warning? Can "P Wave", "S Wave", "Liquefaction", "Wave Amplitudes", "Wave Speed" and "Run-up" provide sufficient data to minimize the impact of natural disasters like the Japanese tsunami of March 11, 2011? Do significant disruptions in the financial markets leave analogous traces of "P Wave", "S Wave", "Liquefaction", "Wave Amplitudes", "Wave Speed" and "Run-up" from which patterns in the data can be detected?



using the Johari Window technique for liquidity risk management

Does voluminous data provide risk management with sufficient clues to join the dots looking back? Does a combination of published information, internal information, and information available in social media help unearth some of the unknowns?

Early Warning Indicators (Arena) – Known to Self and Others

Below is a typical set of EWIs that are publicly available. Organizations are encouraged to publish such information through public disclosures such as quarterly revenues, annual turnover, and shareholding patterns. Examples of such EWIs:

1. Organizational share price indications
2. Publicly available ratings
3. Market-driven indications (curves and key rates)

Characteristics of such EWIs:

a. Easily available, quantifiable and accurately measurable
b. A historical data-trend analysis can provide insight into lows and highs
c. Outlier patterns can be analysed for historical, current and possibly future observations

Early Warning Indicators (Blind Spot) – Not Known to Self But Known to Others

These are a typical set of information which, when discovered, is known only to a closed community until made public:

1. Negative publicity
2. Signals of ratings downgrades or negative watch
3. Emerging behaviours from employees

Characteristics of such EWIs:

a. Emerge as qualitative information about adverse developments
b. May appear hard to quantify, as not much information is known to self at the time of awareness
c. Require a feedback, solicitation and interpretation mechanism



Early Warning Indicators (Façade) – Known to Self But Not Known to Others

These are types of information which are private until disclosed to others:

1. Confidential information
2. Internal audit issues
3. Expected disclosures whose information is protected until disclosed on a periodic basis

Characteristics of such EWIs: Often the information is available within the information silos of an organization. Unless disclosed, it is bound to remain within the confines of that guarded terrain.

Early Warning Indicators (Unknown Unknowns) – Not Known to Self, Not Known to Others Either

Examples of such EWIs:

1. Macro-economic behaviours
2. Market behaviours

Characteristics of such EWIs: Hard to predict, but they can have relationships with the remaining, cognitive early warning indicators from the Arena, Façade and Blind Spot.



conclusion

With the availability of historical data, and the existence of correlation between the known indicators, recognition of emerging patterns is likely to assist in decoding the unknowns, and hence in the discovery of unknown risks.
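As a purely illustrative sketch of this idea, the snippet below monitors two hypothetical "Arena" indicators with rolling z-scores and flags large shifts in their rolling correlation; the data, window length, and thresholds are arbitrary assumptions.

```python
# Illustrative sketch (hypothetical data and thresholds) of monitoring known indicators:
# rolling z-scores flag outliers, and shifts in rolling correlation hint at emerging
# relationships that may warrant investigation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
dates = pd.date_range("2017-01-01", periods=500, freq="B")
ewi = pd.DataFrame({
    "share_price": 100 + np.cumsum(rng.normal(0, 1, 500)),
    "funding_spread_bps": 50 + np.cumsum(rng.normal(0, 0.5, 500)),
}, index=dates)

window = 60
zscores = (ewi - ewi.rolling(window).mean()) / ewi.rolling(window).std()
outliers = zscores.abs() > 3          # simple early-warning flag per indicator

rolling_corr = ewi["share_price"].rolling(window).corr(ewi["funding_spread_bps"])
corr_shift = rolling_corr.diff(window).abs() > 0.5   # large change in co-movement

print("Outlier days flagged:", int(outliers.any(axis=1).sum()))
print("Correlation-shift days flagged:", int(corr_shift.sum()))
```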

author

Jason Rodrigues

Jason Rodrigues is a Risk and Finance Information Technology specialist at Tata Consultancy Services, serving its banking customer, a large European bank in Amsterdam, Netherlands. As part of the BFS Risk Management Practice, he has worked across financial services, treasury and liquidity risk management, asset liability management and business architecture. He participates in hackathons to explore the possibilities of creating valuable prototypes from initial ideas; www.skipads.eu has results of finding blind spots in social data. His special interests include blockchains, cryptocurrencies, and their application in the financial industry. He is actively involved in PRMIA Amsterdam chapter activities.



machine learning and AI in risk management

by Stuart Kozola

Machine learning and artificial intelligence (AI) are megatrends impacting all industries. For risk managers, it is important to understand what these technologies are and why they should be part of your toolbox. Many believe that machine learning models are a black box and lack transparency, but this is not entirely true. You may not realize it, but you are already familiar with and using machine learning. The umbrella includes modeling techniques from your training in statistical modeling. In this article, we briefly describe the landscape, why machine learning and AI are megatrends, and how they are influencing the practice of risk management, particularly from the transparency perspective.

introduction

Machine learning deals with the extraction of features from data to perform a variety of tasks, ranging from forecasting to classification to decision making. Artificial intelligence, as the term is widely used today, is the application of computational algorithms to automate decision making. Or more formally, AI is the theory and implementation of computer systems designed to perform tasks that have traditionally required human intelligence. When you think about that definition of machine learning, doesn't it describe the process of developing models for risk management at a high level? This is a key point; machine learning's goal is the same: to develop models from data, often very big data sets. Machine learning allows the computer to automatically identify the most important features, build a computational model for classifying the data, and generate predictions. Contrast that to the traditional approach in which the human model developer is responsible for both identifying important features and constructing a model.



Figure 1 illustrates the application areas and algorithms used across the machine learning landscape.

Figure 1. Common machine learning techniques in use today for characterizing and modeling multidimensional data. © 1984–2018 The MathWorks, Inc.

Unsupervised learning is an approach for learning from data without a known input-output relationship. The goal is to classify the data, or extract relevant features from the data. Principal component analysis (PCA) is a tool you likely have used as a risk manager for factor analysis or dimensionality reduction. Supervised learning encompasses many modeling approaches based upon an input-output relationship, and regression is another familiar tool to you. In regression, you use a training data set to estimate the parameters of your regression model, which typically has an observable mathematical form. Classification trees also have an observable model structure that can be easily explained, as we will show in the next section.

The other methods shown tend to be opaque: the structure of the model is complex, and the structure does not lend itself to an easy interpretation that maps inputs to outputs. Often the parameters and the model structure are interwoven, making it hard to understand the features that drive the model. For example, Figure 2 shows a common deep learning model used for image classification, the convolutional neural net (CNN). The learned features are embedded in the layers of the CNN and are not easy to extract and make sense of without also understanding the CNN architecture.



Figure 2. Example deep neural network classification problem. © 1984–2018 The MathWorks, Inc.

This opaqueness is one of the complicating nuances behind the difficulty of performing model validation on machine learning models. In addition, the vocabulary of this model is finite (car, truck, …, bicycle). When a model is shown something it has never seen before, say an airplane, it will select from its known vocabulary to classify the airplane. In this case, it would likely categorize it as a car or truck with a low probability.

Reinforcement learning is a goal-directed computational approach to learning from interactions with a dynamic, uncertain environment. It is concerned with enabling a computer to make a series of decisions to maximize future rewards without human interference and without being explicitly programmed how to achieve the task. Contrast this to the example above, where a CNN architecture was chosen a priori. In reinforcement learning, the model architecture is not predefined and is allowed to adapt and change over time.

Many of these algorithms and approaches have existed for decades, and the fields of AI and machine learning are not new. But everyone is adopting machine learning across the front, middle, and back office today to deliver new insights, automate repetitive workflows, and create predictive models.

what's the big deal? why now?

The abundance of data (big and alternative data sets), increased access to computational hardware through cloud computing, new computing architectures such as GPUs, and new approaches to architecting so-called deep learning1 algorithms have ushered in the current trend of AI. The computer can now be trained to complete tasks with a performance level that rivals that of humans. Take for example IBM's Watson supercomputer, which implemented a DeepQA architecture that enabled it to beat humans in the game Jeopardy! in 20112. IBM Watson was a proprietary supercomputer and, at the time, not easy to scale without substantial capital expense.

1 / On the Origins of Deep Learning, 3 March 2017, https://arxiv.org/pdf/1702.07800.pdf
2 / https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/ibm-watson-jeopardy-computer-shuts-down-humans



However, cheap access to computing through the cloud and the introduction of the GPU for machine learning have changed the dynamics and brought machine learning computing power to the masses. This was first demonstrated in 2015, when Google's Deep Mind project used deep learning to train a computer to beat the world champion in the complex game of Go using 48 CPUs and 8 GPUs3. Today, a virtual machine with 64 CPUs and 8 GPUs can be rented from Amazon Web Services for around $25 per hour. In addition, Google released an open source framework named TensorFlow that stimulated collaboration across industry and academia to advance deep learning. These advancements spurred interest across many industries.

why should risk managers care?

Machine learning is being rapidly adopted for financial modeling activities to improve efficiency and automate workflows, directly impacting the bottom line. However, there is concern about the pace of adoption and uncertainty around how to integrate machine learning into model governance and supervision. An overview of these concerns can be found in the Financial Stability Board report published in 2017, highlighting the application of AI for customer-focused applications, operations use, trading/portfolio management, and regulatory compliance and supervision4. The report also highlights concerns about AI for financial stability and provides preliminary guidance for managing risk associated with AI.

As an example of how machine learning models are creating better predictive models than more traditional techniques, we present a nonlinear credit scoring application in Figure 3, comparing a traditional scorecard model to a neural network and a classification tree. The task here was to predict default (a 0 or 1 value), and it is modeled as a classification problem. Looking at the receiver operating characteristic (ROC) curve, we see the traditional scorecard model performs well when compared with a shallow neural network and a standard logistic regression. The best performing model was a classification tree, and not by a slim margin either. This type of improvement in predictive capability is why machine learning is getting so much attention.

Of the models explored, the neural network is the hardest to explain to regulators, since the learned features (the parameters) are not easy to map to explanatory features from the data. However, unlike a neural network, a classification tree can be viewed and explained. For example, zooming in shows the logic of the different branches for a scenario where an employed homeowner would be predicted to default (1 on the leaf node). Classification trees, with a known structure, lend themselves naturally to explanations. They are also easily explored for sensitivity to changes in parameters by directly examining specific regions of the tree. Thus, a classification tree's dominant features and parameter sensitivities can be incorporated into risk reports for model reviewers to understand and judge the validity of the model results.

3 / https://storage.googleapis.com/deepmind-media/alphago/AlphaGoNaturePaper.pdf
4 / http://www.fsb.org/2017/11/artificial-intelligence-and-machine-learning-in-financial-service/



Figure 3. Multiple approaches to modeling credit card default data (bottom right). ROC curves (top right) measure the performance of a model; a perfect model would immediately step up to 1 and remain there as you move along the x-axis. The best model is a classification tree (top left); tree models can be viewed (bottom left) to understand the logic behind the tree’s construction. © 1984–2018 The MathWorks, Inc.
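The comparison in Figure 3 was produced with MATLAB tooling; for readers who want to experiment, the minimal Python sketch below runs the same kind of benchmark with scikit-learn on a synthetic data set. The model choices and settings are illustrative assumptions, and the relative rankings on real credit data can differ from those on synthetic data.

```python
# A minimal sketch (not the MATLAB workflow behind Figure 3): compare a
# logistic-regression "scorecard", a shallow neural network and a
# classification tree on a synthetic, imbalanced default data set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for credit card default data (1 = default, 0 = no default).
X, y = make_classification(n_samples=20000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "logistic scorecard": LogisticRegression(max_iter=1000),
    "shallow neural net": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500),
    "classification tree": DecisionTreeClassifier(max_depth=6, min_samples_leaf=50),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]   # predicted default probability
    print(f"{name:>20}: AUC = {roc_auc_score(y_test, scores):.3f}")
```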

Machine learning algorithms will also have to fit within a model and data governance framework and undergo the same model and data validation process used today for more traditional statistical models. The tree model shown here is one model that fits naturally and is transparent. Others, such as neural networks, are challenging to validate today but are not necessarily without value to the model development process. A FICO blog5 details how a hybrid modeling approach, in which machine learning is used to identify sound modeling features for traditional scorecard approaches, enables better models that remain explainable (a sketch of this idea follows below). As with any new technology, it will take time for risk managers and regulators to understand and develop guidelines for acceptable use of AI within a regulatory framework. In the meantime, risk managers should view machine learning as another tool in their modeling toolbox and an extension of their current data-driven modeling approach.

5 / http://www.fico.com/en/blogs/analytics-optimization/how-to-build-credit-risk-models-using-ai-and-machine-learning/
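The hybrid idea referenced above can be sketched as follows. This is one possible interpretation on synthetic data; the random forest ranking and the cutoff of eight features are assumptions for illustration, not FICO's actual method.

```python
# One possible reading of a hybrid workflow: let a tree ensemble rank candidate
# features, then fit an explainable logistic "scorecard" on the top-ranked ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=10000, n_features=30, n_informative=6,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Step 1: machine learning ranks the candidate features.
forest = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
top = np.argsort(forest.feature_importances_)[::-1][:8]   # keep the 8 strongest

# Step 2: a traditional, explainable scorecard-style model uses only those features.
scorecard = LogisticRegression(max_iter=1000).fit(X_tr[:, top], y_tr)
auc = roc_auc_score(y_te, scorecard.predict_proba(X_te[:, top])[:, 1])
print(f"hybrid scorecard on {len(top)} ML-selected features: AUC = {auc:.3f}")
```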



author
Stuart Kozola
Stuart Kozola leads product management for Computational Finance and FinTech at MathWorks. He is interested in the adoption of model-led technology in financial services, working with quants, modellers, developers and business stakeholders to understand and change their research-to-production workflows across the buy side, sell side, front office, middle office, insurance, and more. He holds the Financial Risk Manager designation from the Global Association of Risk Professionals and an MBA from Carnegie Mellon University.



the fall of Lehman Brothers

by Xiao Wang, Xiaofan Hou, Junchao Liao & Tuershunjiang Ahemaitijiang

On April 14th, 10 winning regional student teams participated in the final round of the PRMIA Risk Management Challenge, analyzing Lehman’s demise as the trigger for one of the largest shocks to the financial system. This initiative is part of the PRMIA effort to enhance the industry skills of future professionals and to help their future careers. We would like to acknowledge our sponsors that made this possible: CME Group, Scotiabank, TD Bank, RBC, Rotman Master of Financial Management, Corvinus University of Budapest, SunLife Financial, and Desjardins. Team Star from the University of Connecticut came out on top of the competition, winning $10,000 and fee waivers for the Professional Risk Manager (PRM™) Designation program. Their analysis and presentation are summarized in the article below.

To many, the shock of Lehman Brothers’ bankruptcy on September 15th, 2008 is still deeply etched in their minds. Prior to the financial crisis, people believed that financial tycoons were too big to fail, but the market soon witnessed the disgraceful downfall of the once-lofty financial juggernaut. Thousands of jobs were lost, millions of families were affected, and trillions of dollars of losses occurred. It is a lesson bitterly learned.

what is to blame for Lehman’s failure?

The prime culprit is the management team, especially CEO Fuld, who had too much authority in decision-making and ignored any dissent from other senior officers. This had catastrophic results when his hubris attained new heights in 2006 with the pursuit of an aggressive growth strategy focused on the real estate market. Lehman’s leverage ratio increased sharply from 26.2x in 2006 to 30.7x in 2007, and its business model shifted from ‘moving’ to ‘storing’ inventory, which caused severe concentration problems, liquidity risk, and asset-liability mismatch.

Having emerged in the 1990s, the subprime mortgage market soon thrived and became a lucrative business. Lehman, anxious to catch up with its peers and grab a large piece of the pie, threw money and people at it at full speed, ignoring valuation and liquidity risks. The initial results were good, but as the market tightened and delinquency rates began to soar, a crisis started to brew.

warning: Lehman’s red flags

Warning signs on the books appeared as early as 2003. Lehman had red flags in areas such as cash flows, ratio analysis, peer analysis and model analysis.



Its free cash flow to net income and quality of income had been declining for five years, signifying a struggle to generate profits and leaving the company no choice but to use more external financing to offset its operating cash flow deficit and to fund its investments. The situation was worsened by its large positions in long-term illiquid assets.

Its capital adequacy ratio dropped to 7.3% in 2007, lower than the 8% minimum recommended under the Basel Accords, indicating that the company’s financial situation continued to deteriorate and its access to capital markets was eroding.

The contribution of the fixed income division to total revenues dropped significantly in 2007 at all major investment banks except Goldman Sachs. Because fixed income was the largest component of Lehman Brothers’ revenue, this substantial drop was a particularly bad omen for the bank.



The CAMELS rating model also indicated that Lehman’s overall financial condition deteriorated rapidly, with its score dropping from 66 in 2006 to 43 in 2007.

cooking the books

What did Lehman do with so many red flags on its books? Unfortunately, it did not get things back on the right track but instead tried to sweep the issues under the rug with accounting gimmicks, chiefly the notorious repo 105 transactions. How does repo 105 work? Let’s assume a hypothetical $4bn repo 105 transaction. The diagram of the transaction is shown below.



First, a US broker-dealer exchanged $105 of cash for $105 of fixed income securities from a Wall Street counterparty. Then, Lehman Brothers International (Europe) (LBIE) received an additional $5 of cash from Lehman Brothers Holdings Inc and exchanged $105 of cash for the US broker-dealer’s $105 of fixed income securities via an intercompany repo. Finally, LBIE exchanged these $105 of fixed income securities for $100 of cash from a European counterparty and recorded $5 of derivative assets under a forward purchase agreement. The table below shows the effects on the balance sheet (in millions of dollars) before and after a $4bn repo 105 transaction:

Based on the table, the $4bn repo 105 transaction reduces total assets and net assets by $4bn but does not affect total stockholders’ equity or tangible equity capital. It reduces the leverage ratio by 0.15x to 30.55x and the net leverage ratio by a material 0.13x to 15.97x. In sum, the repo 105 accounting manipulation helped Lehman shrink its balance sheet and net leverage ratio to satisfy credit rating agencies and investors.
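The arithmetic behind those ratio moves is simple to reproduce. The sketch below uses illustrative balance-sheet figures chosen only so that the ratio changes match those quoted above; they are not Lehman's reported numbers.

```python
# Illustrative figures (in $bn), chosen so the ratio changes match the text
# above; these are not Lehman's reported numbers.
total_assets = 819.7          # gross balance sheet
net_assets = 495.9            # assets net of collateralized financing
stockholders_equity = 26.7    # unchanged by the transaction
tangible_equity = 30.8        # unchanged by the transaction
repo_105 = 4.0                # securities "sold" just before the reporting date

leverage_before = total_assets / stockholders_equity
leverage_after = (total_assets - repo_105) / stockholders_equity
net_leverage_before = net_assets / tangible_equity
net_leverage_after = (net_assets - repo_105) / tangible_equity

print(f"leverage:     {leverage_before:.2f}x -> {leverage_after:.2f}x")          # 30.70x -> 30.55x
print(f"net leverage: {net_leverage_before:.2f}x -> {net_leverage_after:.2f}x")  # 16.10x -> 15.97x
```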

how can a proper risk governance framework help?

As the mortgage market started to turn and interest rates began to jeopardize asset values from 2007 onward, all major investment banks used valuation tricks to cover their losses. The motivations behind this were fear of losing market confidence and the greed of the front office, whose bonuses were heavily dependent on the value of deal flows. As the newly issued FAS 157 did not place many restrictions on the model-based valuation of Level 3 assets, investment banks moved large amounts of assets, especially mortgage-related assets, from Levels 1 and 2 to Level 3 and used unreasonable assumptions to calculate their value, thereby avoiding explicit shrinkage in book value.



We compiled data from each bank’s 2007 10-K and reclassified and combined some accounts because of differences in their accounting treatment. We then calculated Level 3 assets/total equity, with the following result:

Lehman (LEH) and Bear Stearns (BSC) had large concentrations in MBS and ABS, Morgan Stanley in corporate debt and Merrill Lynch in derivatives, while Goldman Sachs pursued a diversified strategy. Because the banks did not disclose the specific compositions of these accounts, we made a rough but reasonable classification, and the results are consistent with what happened later: Bear Stearns was acquired in March 2008, Lehman failed in September 2008, and Merrill Lynch was acquired by Bank of America, while Goldman Sachs and Morgan Stanley survived, albeit with lifelines from the Fed. This shows the catastrophic results misleading valuation models can bring.

That is why a risk governance framework is so important and necessary. By setting up transparent policies and procedures, responsibility and ownership are clear, and with a proper valuation framework, models and the whole risk management process work effectively. A model risk governance framework can detect models’ drawbacks and potential fraudulent behavior. Here is the model risk governance framework we designed for Lehman:



Below is the model validation and maintenance process:

the fork in the road

Would the ending have been different if Lehman had chosen another strategy? Say that in 2005, instead of expanding, Lehman had chosen to reduce its balance sheet and shed the bulk of its toxic assets, with a plan to sell those assets as its liquidity issues emerged. This strategy might not have proved very profitable in the short run, but selling the illiquid assets would have helped optimize the asset mix, reduced exposure to specific markets, and kept Lehman alive in the event of a crisis. It would have been a good counter-cyclical strategy, had Lehman not over-reached for market share.

But there is no free lunch on Wall Street. Lehman might have fallen behind its competitors in the short term as revenues and net profit dropped and clients left. Selling such assets might have shaken investors’ confidence in the company, leading to higher financing costs, more collateral requirements and even a lower credit rating. The assets might not have fetched fair value, and the sales would also have faced jurisdictional and compliance risk. All in all, though, taking those risks would have been worth a try. Had Lehman been assertive enough to do so, the whole story might have been different.

authors Xiao Wang, Xiaofan Hou, Junchao Liao, Tuershunjiang Ahemaitijiang



portfolio allocation incorporating risk policies via deep learning

by Patrick Toolis

abstract

Recently, deep learning methods have gained favor as predictive models for such diverse applications as computer vision, natural language processing, and recommendation systems1. In the realm of investment, as well, neural networks have often been utilized to predict the price, return, or direction of a particular instrument2. However, the natural layered structure of these models seems to lend itself not simply to a single trading decision but to more holistic portfolio optimization, including risk management as well. Balancing factors such as single position size, overall value-at-risk, credit risk, and macroeconomic concerns, separate layers of the network can evolve from stock selection and directional signaling to an optimal portfolio which falls within the firm’s risk framework and regulatory capital allocation.

problem description

Deep learning, in this exposition, will be defined as an application of a neural network containing more than one computational layer. Here, a neural network is a software layout of threshold logic units (TLUs), analogous to logical units in silicon, which take inputs and weights and combine them using the scalar product and other arithmetic operations. Layers can be connected by a matrix of forward and backward data flows, and each layer can be perceived as an abstract module. In the case of the sample portfolio optimization of this article, the layer roles are: per-security signal, per-security risk, portfolio weights based on security data, normalization of these weights, refined portfolio weights based on overall risk limits and macroeconomic events, and a normalization of these portfolio weights. In essence, though, there are three basic steps: risk-adjusted direction of the security, portfolio placement of the security, and revised placement after adjusting for firm-wide controls.

Portfolio optimization is a well-studied discipline dating back to Markowitz’s 1952 paper describing the mean-variance allocation3. More recently, approaches such as the Black-Litterman model allow subjective views and confidence metrics to adjust the expected return and variance data to an arguably more realistic set of optimal weights4. However, these models do not account for the effects of exogenous constraints such as value-at-risk, liquidity, or regulatory capital on the portfolio weights for a given day (although the view vector might accommodate such factors implicitly).
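For reference, one common formulation of the classical mean-variance problem that these approaches build on (with weight vector w, expected returns μ, covariance matrix Σ and risk-aversion parameter λ) is:

```latex
\max_{w}\; w^{\top}\mu \;-\; \tfrac{\lambda}{2}\, w^{\top}\Sigma\, w
\quad \text{subject to} \quad \mathbf{1}^{\top} w = 1
```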



A neural network model with progressively greater abstraction across layers should be well-suited to incorporating such concerns. There is significant precedent for the use of neural networks in capital allocation across a portfolio. For example, Li (2016) discusses separate neural networks for the risk-adjusted return of each asset5, Freitas et al (2009) use a network to model multiple stocks’ returns and derive risk and weights from this6, Angelini et al (2008) estimate credit risk via a neural net7, and Fernandez and Gomez (2007) utilize a Hopfield network to add cardinality and bounding constraints to a mean-variance allocation8.

model topology

This model will differ from most general signal/alpha predictors, in that all stocks in the portfolio will have their weights calculated by separate computational units. Layer one will infer the signal (buy/sell/flat) for each security in the portfolio. Layer two will determine the volatility for individual securities (or some other per-security risk measure such as expected drawdown), outputting a risk-adjusted return for each. Note that, if the standard volatility is computed, this need not be a separate layer with weights to be trained. In the next layer, the covariance matrix among the securities will be considered, as will the possible hedging effects (offsetting positions in securities considered to overlap in some fashion), and a weight (-1 to 1) will be output for each symbol in the portfolio. This layer might perform a mean-variance type of computation for the weight generation, provided that expected returns are either input or used as the basis for the layer one signals. The fourth layer will normalize these weights, such that they sum to one (other approaches require that the absolute values sum to some value less than a threshold)9,10.

In the subsequent layer, portfolio-level constraints such as the desk’s value-at-risk, tier I capital, counterparty credit risk, and liquidity or global macroeconomic anomalies are incorporated. Obviously, the inputs of each asset’s computational neuron must include quantities, such as portfolio VaR, which are aggregated over the entire portfolio. Thus, this layer will essentially be a logic block which determines some attributes of the portfolio (and of the desk/firm) and forwards them to the per-asset weight computation, inducing it to output optimal weights which respect the higher-level constraints. Finally, layer six includes an additional normalization step to ensure these new weights sum to one. Note that the normalization layers are not true neural network layers, as the inputs are unweighted.
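To make the data flow concrete, here is a minimal NumPy sketch of the six-layer forward pass described above, using the three assets from the example that follows. All inputs are made up, the trainable TLU weight coefficients are omitted, and the constraint logic in layer five (scaling down into cash against a volatility limit standing in for VaR) is one possible interpretation rather than the author's specification.

```python
# A toy forward pass through the six layers described above; all inputs are
# made up and the trainable TLU weight coefficients are omitted.
import numpy as np

# Layer 1: per-security directional signal (wheat futures, MBS, energy equity).
signals = np.array([0.5, 0.2, -0.3])
# Layer 2: per-security risk (here, assumed annualized volatilities).
vols = np.array([0.25, 0.10, 0.30])
risk_adj = signals / vols                      # risk-adjusted signal per security

# Layer 3: portfolio weights from security data. A full model would use the
# covariance matrix and hedge offsets here; this sketch assumes uncorrelated
# assets and simply carries the risk-adjusted signals forward.
cov = np.diag(vols ** 2)
raw_w = risk_adj

# Layer 4: normalize so that gross (absolute) exposure is 100%.
w = raw_w / np.abs(raw_w).sum()

# Layer 5: portfolio-level constraint; scale positions down if portfolio
# volatility (standing in for a VaR or capital limit) exceeds the desk limit.
port_vol = float(np.sqrt(w @ cov @ w))
limit = 0.08
w = w * min(1.0, limit / port_vol)

# Layer 6: re-normalize so all weights, including the implied cash position,
# sum to one.
cash = 1.0 - w.sum()
allocation = dict(zip(["wheat", "MBS", "energy", "cash"], np.append(w, cash)))
print({k: round(v, 3) for k, v in allocation.items()})
```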

example

The diagram which follows in three parts is a sample portfolio model based on three assets: wheat futures, mortgage-backed securities (MBS), and equity from the energy sector (i.e. oil or natural gas producer stocks). Note that the weights passed between the logic blocks are labeled both by the layer that output them and by the asset (e.g. w6,wheat for the wheat futures weight output by the final layer). Note also that computations such as hedge offsets and portfolio VaR are more involved than a single node in a neural net traditionally allows and would likely be invoked via remote web service calls. Nonetheless, they serve as input coordinates for the neurons in their respective layers. In the diagrams, all TLU inputs are assumed to have weight coefficients (here, the meaning of weight is distinct from that of portfolio weight), which are not shown and are learned to determine the model.



Figure 1: Layers 1 and 2 of the network, where the directional signal and risk are inferred. Note that the weight coefficients for threshold logic unit inputs are excluded.

Figure 2: Layers three and four. The hedging offsets in layer three likely are not salient in this example, as wheat, MBS, and energy assets are generally not considered to be highly correlated.



Figure 3: The final two layers. General VaR and macroeconomic constraints are imposed on the portfolio in layer 5, and the weights are re-normalized in layer 6.

conclusion

While no attempts to implement the network described above have yet been made, the author hopes to integrate such a model into commodity futures, currency futures, and MBS software for which the portfolio optimization component has yet to be written. The training data sets and training algorithms have not been explicated, but the preprocessing of price, risk, and macroeconomic data will be critical to the network’s operation. Given such a defined training methodology, the combination of technical and fundamental signals, hedging factors, VaR policy, and event sensitivity should make this type of optimization module a risk-management utility which merits further investigation.

bibliography

1. Wikimedia Foundation. “Deep Learning”. Wikipedia, the Free Encyclopedia. en.wikipedia.org/wiki/Deep_learning. Accessed 30 Apr. 2018.
2. Rastorguev, Dmitry. “2017’s Deep Learning Papers on Investing”. ITNEXT. itnext.io/2017s-deeplearning-papers-on-investing-7489e8f59487. Accessed 30 Apr. 2018.
3. Markowitz, Harry. “Portfolio Selection”. The Journal of Finance, Vol. 7, No. 1, March 1952, pp. 77-91.
4. Black, Fischer and R. Litterman. “Global Portfolio Optimization”. Financial Analysts Journal, Sept.-Oct. 1992, pp. 28-43.



5. Li, Jianjun. “Portfolio Optimization Using Neural Networks”. United States Patent Application Publication. 28 Jul. 2016.
6. Freitas, Fabio; De Souza, Alberto; and de Almeida, Ailson R. “Prediction-Based Portfolio Optimization Using Neural Networks”. Neurocomputing, Vol. 72, Issues 10-12, June 2009.
7. Angelini, Eliana; di Tollo, Giacomo; and Roli, Andrea. “A Neural Network Approach for Credit Risk Evaluation”. The Quarterly Review of Economics and Finance, 48 (2008), pp. 733-755.
8. Fernandez, Alberto and Sergio Gomez. “Portfolio Selection Using Neural Networks”. Computers and Operations Research, 34 (2007), pp. 1177-1191.
9. Sanderson, Scott. “Optimize API Now Available in Algorithms”. Quantopian, Inc. www.quantopian.com/posts/new-tool-for-quants-the-quantopian-risk-model. Accessed 30 Apr. 2018.
10. Wikimedia Foundation. “Modern Portfolio Theory”. Wikipedia, the Free Encyclopedia. en.wikipedia.org/wiki/Modern_portfolio_theory. Accessed 30 Apr. 2018.

author
Patrick Toolis
Patrick Toolis has spent seventeen years involved in theoretical computer science applications in five countries. After co-founding J-Surplus.com, one of the first business-to-business e-commerce auction sites for excess inventory in Japan, Mr. Toolis worked in the system integration realm at Iona Technologies, dealing with major customers in telecommunications and semiconductors. He then helped develop the order and execution processing at JapanCross Securities, one of the first electronic crossing networks for Japanese equities (later merged with Instinet, which is now a part of Nomura Holdings). As a consultant, he wrote significant machine learning implementations on mobile computing platforms for Sears Holdings Corporation. At The American Express Company, he designed parallel algorithms for risk mitigation and decision optimization. Most recently, he has focused on independent financial engineering work which he aspires to grow into a hedge fund, software vendor, and liquidity pool. Mr. Toolis holds BS and MS degrees in Computer Science from Stanford University.



managing business traveler cyber risk

by Moh CissĂŠ

MBA, PMP, CISM, CISA, CRISC, ITIL, CFOT, CFOS/O

Few executives consider cyber security a priority while traveling on business. Yet, if they are targeted by a sophisticated cyber attack such as an infection, wiretapping, interception, substitution, or session replay while on a business trip, the risk to their company can be alarmingly high. This applies particularly to executives of industrial companies which use automated industrial control systems (ICS), often referred to as SCADA (Supervisory Control and Data Acquisition) systems. Proper control of traveler cyber risks is an essential element of holistic cybersecurity risk management.

When first introduced, ICS/SCADA environments were typically isolated from internet-based (TCP/IP) networks, instead using proprietary equipment and protocols, and were thus considered immune, or less exposed, to cyber attacks. However, the increasing transition of SCADA environments to open TCP/IP technology allows sophisticated hackers to take advantage of the relative lack of cyber security awareness in the world of operational technology. While the damage caused by cyber attacks on TCP/IP technology networks is primarily financial or reputational, cyber attacks on SCADA technology networks also put human lives at risk. Observations of known attacks or attempted attacks indicate significant growth in the development of malicious programs and organized attacks against energy production, storage and distribution environments.

I. Risk identification

a. Network risks

SCADA/ICS environments in the energy, water, electricity, and gas sectors are ideal targets for cybercriminals. The main targets include, among others, gas pressure monitoring, temperature and air flow controls, and power supply interruption. To protect SCADA/ICS networks there are several measures to take:

1. Establish a risk management framework corresponding to the inherent SCADA system
2. Eliminate the risk of managing multiple border devices, such as firewalls and routers, by centralising port management
3. Deploy a protection framework against vulnerabilities
4. Implement advanced cyber protection measures
5. Deploy a SCADA zone access protection mechanism



b. Employee function risk

Messengers, engineers, CEOs and information security architects do not have the same data access levels or the same data access privileges within the company, and therefore are not of the same level of interest to hackers.

c. Country risks

Each country has a unique risk profile. Recommendations should be made following a risk assessment of each country. This evaluation is based exclusively on each country’s threat level with regard to human safety.

II. Risk Assessment Model

a. Risk and threat assessment methodology

This risk assessment model proposes an approach that combines the risks associated with each country, type of equipment and function. The risk formula is defined as: Risk = Criticality × Impact, or Ri = Ci × Ii(Ci).

b. Applying the formula to the traveler risk model

The choice of appropriate IT solutions to mitigate security risks during business trips must be subject to a risk assessment. This risk assessment model takes a three-dimensional approach, combining the Traveler’s Function (FT), the Traveler’s Destination or Country (CT), and the Traveler’s Required Tools (TT): RT = F(FT, CT, TT), or RT = F[RFC, R(FC,T)].

c. Risk Assessment Function/Country

Function classifications are based on the criticality of the traveler’s role within the company and his/her access privileges. Risks associated with functions are classified as follows:

Following the same approach, destination country risks are assessed and classified as follows:

046

Intelligent Risk - June 2018


Risk Function/Country: RFC = F (FT, CT)

d. Function/country risk assessment tool

The tools available are sources of various levels of risk depending on the sensitivity of the data they may contain:

The Function/Country Risk Assessment Matrix combines the risks associated with the Function, the Country and the Tools requested. The IT solutions provided should be based on this risk assessment: RT = F(FCT, TT), where
• RT: Risk (Traveler)
• FCT: Risk (Function, Country) of the Traveler
• TT: Risk (Tool) of the Traveler

This matrix yields 16 possible risk combinations, grouped into 4 risk levels:
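As an illustration of how such a lookup could be encoded, the sketch below combines hypothetical 1-4 function, country and tool scales into the 16 combinations and 4 bands described above. The scales, the max() combination rule and the band thresholds are assumptions for illustration only, not the article's actual matrix.

```python
# Illustrative encoding of the three-dimensional traveler risk lookup. The
# 1-4 scales, the max() combination rule and the band thresholds are
# assumptions for illustration, not the article's actual matrix.
FUNCTION_RISK = {"messenger": 1, "engineer": 2, "security architect": 3, "CEO": 4}
COUNTRY_RISK = {"low": 1, "moderate": 2, "elevated": 3, "high": 4}
TOOL_RISK = {"encrypted USB key": 1, "tablet": 2, "disk-less laptop": 3, "laptop": 4}

def traveler_risk(function: str, country: str, tool: str):
    """Combine function/country risk (RFC) with tool risk (TT) into a band."""
    rfc = max(FUNCTION_RISK[function], COUNTRY_RISK[country])   # RFC = F(FT, CT)
    rt = rfc * TOOL_RISK[tool]                                  # RT = F(FCT, TT)
    bands = [(4, "low"), (8, "medium"), (12, "high"), (16, "critical")]
    label = next(name for limit, name in bands if rt <= limit)
    return rt, label

print(traveler_risk("CEO", "high", "laptop"))        # -> (16, 'critical')
print(traveler_risk("engineer", "low", "tablet"))    # -> (4, 'low')
```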



e. Traveler risk interpretation

This risk combination determines the traveler’s eligibility to receive an IT solution from the available equipment pool:
• Laptop
• Disk-less laptop
• Tablet
• Smartphone
• Encrypted USB key
• Copy of encrypted folders to external hard drive
• VPN connection key (SecurID)

The recommended security services are:
• Disable or enable WiFi
• Virtual office access via RDP
• Hard disk encryption with pre-boot
• Secure file exchange
• Blocking application addition on tablets & smartphones
• Internet access with smartphone as access point
• Boot from USB key (Win to Go)
• Secure SMS exchange (ex: iMessage, Signal)
• DiliTrust



III. Mobility Management Process

Technology alone cannot solve the security issues associated with professional mobility. It is therefore necessary to combine three elements for the successful achievement of any business objective: human, technology and process.

a. Risk and threat assessment methodology

Resources involved in the mobility process execution are:
• Travelers
• Physical security expert
• Cyber security and risk assessment expert
• Tools configuration expert
• Traveler kit assembly coordinator
• 24/7 technical support agent
• Incident management expert
• Forensic expert

b. Sub-processes

The overall process is subdivided into the following sub-processes:
• Trip planning
• Country risk assessment (physical security)
• Cyber security risk assessment
• Assessment and assembly of the IT solution
• Traveler’s preparation and training
• Traveler’s monitoring and support
• Traveler’s return



c. Traveler mobility process management

This process ensures the synchronization of the different sub-processes. To track information from start to end, a Traveler’s Card is created when the trip is initiated. This card should be completed and used by the experts from the beginning of each trip to the traveler’s return.

IV. Conclusion

Hacking of a principal gas distribution valve in a city’s downtown can have major economic and human consequences. Creating a blackout by attacking an energy production, storage and distribution plant can have a dramatic impact on hospitals or road traffic. A nuclear power plant exploding following a cyber terrorist attack could create another Chernobyl or Fukushima, with similarly dire consequences. These scenarios were seen as pure fiction a few decades ago, but are now real scenarios envisaged by cybersecurity risk management teams operating in SCADA/ICS networks. Therefore, no controls or measures undertaken to mitigate these risks should be considered excessive or superfluous. Although the security measures described above may seem burdensome and/or overkill to some travelers, they are important to ensure the security of industrial assets when a business traveler has access to a SCADA network.

author
Moh Cissé
Moh Cissé, MBA, CISM, CISA, CRISC, ITIL, CFOT, CFOS/O, is a GRC senior consultant at Hydro-Québec’s Montreal office. Moh has 21 years of in-house IT risk management experience and regularly counsels SCADA/ICS companies and IT giants such as Bell Canada and CGI Inc. He holds an MBA from Laval University, Canada, and is a certified cybersecurity advisor.



academics probe RegTech ERM solutions

by Peter Hughes

Academics are leading research into ‘Risk Accounting’, which addresses the long-standing issue of how to construct a common measurement-based ERM reporting framework in the form of RegTech software. In a recently published research paper, researchers at the Durham University Business School (DUBS) report on their plans for further testing aimed at validating the newly defined non-financial risk accounting metric, the Risk Unit or RU. The testing will be supported by RegTech prototype software that uses algorithms to convert atomic operating and risk data into RUs that can then be aggregated to produce near real-time reporting of accumulating cross-enterprise exposures to risk.

the portfolio view

In its much-awaited landmark paper ‘Enterprise Risk Management - Aligning Risk with Strategy and Performance’ (2017), COSO promotes a ‘portfolio view’ of enterprise risks. This presents risk managers with a conundrum. The intuitive understanding of ‘portfolio’ in the financial sense is a composite of like exposures where the portfolio’s content is both known and verifiable. Data elements are assigned standard identification codes that predetermine data aggregation paths, and monetary values represent exposure. Together, identification codes and monetary values are processed in systems to enable risk governance, oversight and audit to be exercised at both the granular and aggregate levels.

When assessing financial risks in investment, credit and trading portfolios, risk analysts can indulge their modeling and analytical skills to great effect through trending, ranking, limit-setting, limit-monitoring and benchmarking. However, it is hard to see how a portfolio view of enterprise risks can emerge in the absence of data identification standards and a common additive metric that together validly express the diverse financial and non-financial risks of whole enterprises. To state the obvious, you can’t aggregate or audit color-coded risks.

the ERM conundrum

It’s not only COSO’s ERM framework that presents firms with this conundrum. The demand for ERM to be applied in the aggregate can be found in a number of authoritative papers.



In its ‘Principles for an Effective Risk Appetite Framework’ (2013), the Financial Stability Board stated that banks should “include quantitative measures that can be translated into risk limits applicable to business lines and legal entities as relevant, and at group level, which in turn can be aggregated and disaggregated to enable measurement of the risk profile against risk appetite and risk capacity.”

In 2014 the US Treasury Department’s Office of the Comptroller of the Currency (OCC) adopted heightened standards for the design and implementation of a risk governance framework for large banks. A primary focus of these standards is the role of independent risk management, defined as “any organizational unit within the bank that has responsibility for identifying, measuring, monitoring or controlling aggregate risks.”

The Basel Committee’s BCBS 239 ‘Principles for Effective Risk Data Aggregation and Risk Reporting’ (2013) stated that banks “should be able to capture and aggregate all material risk data across the banking group.”

From the above it is evident that banks are encouraged, even mandated, to apply risk controls to aggregate enterprise risks even though it is universally accepted that there is no available method or system that can do this. Arguably, regulators compensate for such risk control shortcomings by imposing ever-increasing amounts of costly protective capital to buffer unexpected losses.

planned research

It is questionable whether ERM can be effectively exercised without first constituting a firm’s portfolio of controlled and audited enterprise risks. This is the essential foundation that provides the basis for risk control, public disclosure and the application of advanced analytical techniques. This is the focus of ongoing research at the Durham University Business School (DUBS), where researchers have tooled a RegTech ERM prototype solution that uses the risk accounting method to provide a portfolio view of enterprise risks. The prototype leverages identification standards and associated aggregation paths already available in accounting systems and responds to the absence of a common additive non-financial ERM metric by creating one called a Risk Unit or RU.

In a research paper published in the Journal of Risk Management in Financial Institutions titled ‘A Test of the Feasibility of a Common Risk Accounting Metric for Enterprise Risks’, DUBS describes the methodology formulated to test the feasibility of the risk accounting method and its proposed common risk metric, the RU. The approach involves defining multiple operating risk scenarios and downloading publicly available US GAAP bank call report data provided by the Federal Reserve Bank of Chicago. The combined risk and operating data from the above sources will be processed by the prototype RegTech software to test its ability to produce near real-time comprehensive and comparable enterprise risk reporting in RUs at both granular and aggregated levels. The feasibility of the RU to validly express all forms of enterprise risks, and its inherent predictive properties, will be tested by examining trended data in RUs in the period leading up to and beyond the global financial crisis of 2007/08 and analyzing the correlations between RUs and unexpected losses.



For this purpose, the researchers plan to use the amounts applied by the US government to purchase equity from banks under its Capital Purchase Program (CPP), part of the Troubled Asset Relief Program (TARP), as proxies for unexpected losses. DUBS is keen to progress this work on a collaborative basis. Accordingly, we welcome interest from academia, financial institutions, regulators and practitioners.
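For readers who want a feel for the planned test, the fragment below sketches the kind of cross-sectional correlation check described above. The file names and column names are hypothetical placeholders, and the fragment is not DUBS's actual methodology.

```python
# Hypothetical sketch of the planned test: correlate pre-crisis trended RUs
# with CPP purchase amounts used as proxies for unexpected losses. File and
# column names are placeholders; this is not DUBS's actual methodology.
import pandas as pd

rus = pd.read_csv("bank_risk_units_2005_2007.csv")   # columns: bank_id, avg_rus
cpp = pd.read_csv("cpp_purchases.csv")               # columns: bank_id, cpp_amount

merged = rus.merge(cpp, on="bank_id", how="inner")
print("Pearson correlation, RUs vs CPP amount:",
      merged["avg_rus"].corr(merged["cpp_amount"]))
```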

author
Peter Hughes
Peter Hughes is a visiting fellow and member of the advisory board of Durham University Business School’s Centre for Banking, Institutions and Development (CBID), where he leads research into next-generation risk and accounting systems. He is a lifetime fellow of the Institute of Chartered Accountants in England & Wales. In his corporate career, Peter was a banker with JPMorgan Chase, where he held senior positions in finance, operations, risk management and audit.



risk platform transformation in the digital era

by Peter Plochan

introduction

Over the past few years, banks around the globe have been swamped with increasing regulatory scrutiny and complexity. In addition, the environment of low interest rates, combined with the emergence of the digital & FinTech era, has put these banks in a difficult situation. They now have to balance the need for higher returns and lower costs with the push for more governance and compliance, while at the same time aiming to become more agile so they can respond flexibly to new regulatory & market trends. It therefore does not come as a surprise that major banks around the world have announced plans for large transformation programs focused on consolidating their back-office processes in order to reduce costs and improve efficiency. Transformation of the risk function & application landscape is often a common theme in these programs.

risk and finance efficiency tackled together

The topic of risk and finance integration has been on the table for quite some time already. However, given the current challenging environment, it is attracting increased attention. It is not uncommon that over the years banks have addressed each emerging risk/finance regulation & challenge with tactical patches and in individual silos. In parallel, with each new acquisition the new entity’s data & systems have too often been quickly glued to the main systems and patched together. As a result, isolated and duplicate data, processes and capabilities have been established within the risk and finance space. As the number of these independent silos grew over time, so did the complexity and the reconciliation effort.

The new Pillar 2 & stress testing regulatory push1 is putting all these tactical patches to the test, and many banks are finding themselves lost in the data & system spaghetti they have built up over the years. Both these new regulatory initiatives and market trends are asking for a much faster, more unified and more agile response than banks have been used to until recently. A typical example (Exhibit 1) of overlapping & duplicate processes and capabilities, present to a certain degree at most banks, is the balance sheet and P&L forecasting processes that are often performed in silos both on the finance and on the risk side.

1 / European Banking Authority: Strengthening of Pillar 2 framework



Significant synergies of scale and efficiency gains could be achieved if these are tackled together in one central risk and finance forecasting hub. This integrated platform can be further extended to service a variety of other risk and finance processes. Recently, a leading Dutch bank launched a major risk and finance integration project in which this central forecasting hub forms a key component alongside other integration areas within the consolidated risk and finance technology platform.

Exhibit 1: Central Risk and Finance Forecasting & Scenario Analysis Hub

the enabling role of technology

While a number of these large transformation projects might initially focus on the efficiency of the underlying technology landscape, there are significant benefits to be achieved on the business side as well (Exhibit 2 below). On the business side, the integrated data & technology platform makes it possible to promote digitalization, increase speed and governance, and promote automation, while reducing the need for the all-too-common manual handshakes between various applications and for reconciliation efforts, thus vastly increasing the efficiency of risk and finance processes.

Exhibit 2: The dual role of technology as the Risk and Finance efficiency enabler

On the IT side, banks are aiming to consolidate the fragmented data & application landscape by decreasing the amount of duplicate data and the number of overlapping applications & capabilities. By reducing the number of distinct vendor solutions to a few key vendors only, they are able to increase integration capabilities and achieve licensing economies of scale, while at the same time leveraging the automation and high-performance capabilities available in the latest application suites.



Furthermore, the above has an additional side effect: a significant improvement in the institution’s compliance position.

the key technology capabilities for success

While decreasing the number of independent applications is key to achieving the consolidation benefits, it is also vital that the new application landscape comes with central capabilities that can serve various risk and finance processes. These shared capabilities (Exhibit 3 below) of the consolidated risk and finance platform can promote the reuse and sharing of built content and eliminate the need for duplicate technology features. Having one way of working for each of these key capability areas is vital for eliminating and preventing future siloed approaches, and it helps reduce the need for reconciliations.

Exhibit 3: The central capabilities of the Consolidated Risk and Finance Platform

conclusions

As banks around the world struggle and rethink their business models and competitive strategy, an opportunity emerges to clean up and reboot their fragmented risk and finance application landscape and improve the agility and efficiency desperately needed in the current market. The potential benefits are clear, with the total cost of ownership of the application landscape being the key one but not the only one; however, the road to success will be neither simple nor short. Transformation initiatives like these could easily become some of the most complex projects that banks have undertaken to date and can easily take 5+ years to fully accomplish. However, this does not have to happen in a big bang. By taking a careful & agile step-by-step approach (Exhibit 4), banks can gradually roll out this unified vision and achieve the benefits over time as they move ahead.



Exhibit 4: The 5 stages of Risk & Finance Maturity and Efficiency

All in all, following the case of the leading Dutch bank mentioned above, in the coming years we will see more and more banks around the globe embarking on a similarly ambitious and, to a certain degree, revolutionary journey. However, as is too often the case with large transformation projects, a number of banks will fail along the way because they underestimated the effort or made the wrong decisions, but those that succeed will emerge transformed and ready to operate and compete in the new digital era.

author Peter Plochan Peter Plochan is Senior Risk & Finance Specialist at SAS Institute assisting institutions in dealing with their challenges around finance and risk regulations, enterprise risk management, risk governance, risk analysis and modelling. Peter has a finance background (Master’s degree in Banking) and is a certified Financial Risk Manager (FRM) with 10 years of experience in risk management in the financial sector. He has assisted various banking and insurance institutions with large-scale risk management implementations (Basel II, Solvency II) while working internally and also externally as a risk management advisor (PwC).

2 / ABN AMRO chooses to Integrate Finance, Risk and Reporting to Meet Regulatory Compliance and Improve Business Decisions



spreadsheet risk case studies. why should the CRO care?

by Craig Hattabaugh

If you are wondering why a CRO with strategic responsibilities should even think about something so seemingly tactical as spreadsheet risk, then read further. Recent (March 2018) events at Conviviality Plc (LON: CVR) show a timeline in which the CEO resigned just 13 days after her CFO became aware of a material error in a forecast spreadsheet. Soon after that, the company filed for protection. This unfortunate series of events was not attributable solely to that spreadsheet error. But the error was material, and it was the catalyst for increased scrutiny that discouraged existing lenders and potential investors. The two takeaways are:

1. Material weaknesses in controls can cost you your job. If the CEO job is not safe, then neither is yours.
2. Spreadsheets (and other end-user controlled computing applications) typically have weak (if any) controls. People caused the error, but risk technology could have minimized the likelihood.

what is the core issue amplifying EUC risk?

The short answer is weak, ineffective controls for end-user computing applications. In March 2018, VBS Bank was put under curatorship by the South African Reserve Bank due to large financial losses, alleged to involve fraudulent manipulation of spreadsheets in critical financial processes. In April 2018, Samsung Securities lost over $300 million of market capitalization and one of its largest pension customers. A human error exposed the weakness of its controls as the potential $120 billion (yes, billion) impact became public. At Conviviality, a series of acquisitions necessitated the short-term use of spreadsheets to facilitate financial reporting. In its high-growth culture, it’s fair to assume the controls on those spreadsheets were minimal, if any.

In the end, humans are fallible. Operational error and fraud can take many forms and have different causes. But through it all, effective controls are your primary defense, and risk technology can help you apply and enforce them.

how does spreadsheet risk manifest itself?

Spreadsheets and other end-user computing applications are not managed by IT. They are unstructured and lack many of the controls applied to accounting and other enterprise systems. The line of business relies on them because the speed and agility of such tools are key to innovation and competitive advantage. Given that Excel is ubiquitous, and assuming there have not been any problems in your company, what is the likelihood of a material error?



Empirical data suggests that 99% of your spreadsheets can have errors, or become unavailable for use, and it will not matter. But when those files are part of a critical process, and a financial report or model depends on them, your risk skyrockets. Every business has spreadsheets embedded in critical processes, some more than others, but no business is immune. The critical question is not “Do I have this risk?” but “Where is it, who is accountable for managing it, and who is overseeing the risk?”

what can be done to improve EUC (end-user computing) controls?

The first step is to take spreadsheet/EUC risk seriously and inquire with your direct reports and your business partners. Question both. Many firms have written EUC policies/standards that gather dust, with potentially no one accountable. Recognize that having a policy can provide a false sense of security. It will probably help in discussion with a regulator/auditor. It can buy time needed to address their concern(s). It will not protect anyone when there is a material error. As the Conviviality case illustrates, events spiral out of control very quickly. Typically, the first line owns this risk, and the second line owns making the credible challenge to the first line’s risk management efforts. You need to ensure that neither party is just “ticking the box.”

The second step is to identify where the specific, high risks lie and who owns them. What critical processes are dependent on an end-user controlled asset? Most senior executives will not be aware of this dependency, but someone should be. For example, it is common practice to extract data from the corporate database, transform it within Excel and then feed it into a model. This is a potentially weak link with high risk. Without effective controls, the organization is vulnerable even though the data appears to be coming from the proverbial “one source of truth.” Given the sheer volume of spreadsheets in an organization, risk identification can seem daunting. However, there are tried and true processes (along with scanning technology) to help accomplish the task, as sketched below. Knowing where this risk is and who is accountable is well over half the battle.
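To make the scanning idea concrete, here is a minimal Python sketch of an inventory-style scan over a shared drive. The share path, thresholds and risk indicators are illustrative assumptions only, and commercial EUC scanning tools go far beyond this.

```python
# A minimal EUC discovery scan: walk a shared drive, open each workbook and
# flag crude risk indicators (formula count, external references). The share
# path, thresholds and indicators are illustrative assumptions only.
from pathlib import Path
from openpyxl import load_workbook

def scan_workbook(path: Path) -> dict:
    """Count formula cells and crude external-reference indicators."""
    wb = load_workbook(path, data_only=False)
    formulas, external_refs = 0, 0
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                value = cell.value
                if isinstance(value, str) and value.startswith("="):
                    formulas += 1
                    if "[" in value:           # e.g. ='[Other.xlsx]Sheet1'!A1
                        external_refs += 1
    return {"file": path.name, "formulas": formulas, "external_refs": external_refs}

if __name__ == "__main__":
    shared_drive = Path("//shared/finance")            # hypothetical location
    for xlsx in shared_drive.rglob("*.xlsx"):
        report = scan_workbook(xlsx)
        if report["formulas"] > 500 or report["external_refs"]:
            print("review candidate:", report)
```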

conclusion

In your role, you may not be interested in what risk technology is used to get this done, nor in the details of how the risks will be identified and ultimately mitigated. However, recognize that a single keystroke error can be material, and the impact will typically be immediate. At that point, you and your organization will be affected. Historically, the value proposition for using risk technology to improve end-user computing controls was focused on regulatory compliance. I would argue that the downside of error or data loss is far more severe than the negative repercussions of failing an audit. Companies often have years to correct deficiencies, and that assumes the regulators or auditors have even found a problem.



It is much more likely that an error in critical calculations, data feeds, forecast models and the like is what will push you into the harsh limelight (the potential for fraud is real but much less common). Going forward, the best remedy is not eliminating spreadsheets. The prudent path is paved with better EUC controls. Replacing a complex, functioning spreadsheet application can take months, cost hundreds of thousands of dollars, and result in a business process that may not improve. Perhaps equally important, in the eyes of the business their agility and independence are now gone. So in conclusion, the answer is better controls. Effective EUC controls are far and away the highest return on your risktech investment.

author
Craig Hattabaugh
Craig Hattabaugh is a proven executive who specializes in accelerating revenue generation in high-growth, technology-based companies. During his career Craig has served as a founding sales executive, global head of sales and advisory board member at start-ups as well as F500 companies. Currently, Craig is CEO of CIMCON Software Corp. Previously, Craig was SVP of Global Sales and Operations for the Analytics business of TIBCO Software Inc. He has led sales for two start-up companies: Open Ratings and Formation Systems. Before that he was Deputy Managing Director and head of European Sales for Aspen Technology. He has also held management positions at Texas Instruments Semiconductor Group and currently serves on the Advisory Board for Cogito. Craig earned his MBA from Boston University and his undergraduate degree in Electrical Engineering from Worcester Polytechnic Institute, and has completed executive education in entrepreneurship at the MIT Sloan School of Management.



bringing technology trends into banks

by Alex Marinov

traditional banking processes

Banks and technology have gone hand in hand ever since the industrial revolution propelled growth, opportunities and new horizons for millions of people. Almost every big project since then has been partly, if not exclusively, financed by banks, either state-owned or privately owned. Why is that the case? Banks have a very attractive structure for these types of investments. They collect deposits from ordinary people and lend the collected funds out as loans to businesses and entrepreneurs, charging a small interest margin for doing so. This traditional business model required banks to set up physical offices in cities, stacked with safes, staff and security, in order to operate. That was obviously very cumbersome and cost-intensive, and it required a great deal of business acumen, because some locations might turn out to be not so profitable and would have to be closed down.

fallen behind

For the last 50 years, the pace of technology has helped banks provide ever better and more efficient solutions to their clients, such as ATMs, online banking, credit cards, PayWave and others. However, in recent years the pace of banking innovation has slowed down, even though there have been numerous challenges to how banks operate. Cryptocurrencies, internet-only banks, blockchain and hackers, to name just a few, have emerged to challenge the traditional banking business model.

The issue with banks is not that they don’t want to innovate, but that banking processes are too cumbersome and cost-intensive. Most modern banks were created via mergers during the last century, in which numerous small banks were acquired or bought out and consolidated into bigger financial institutions: consolidated but never fully integrated, because each bank had its own specific IT systems, processes, and procedures. Creating a whole new system after every merger would have been too expensive and time-consuming (according to senior managers), so they just left the systems as they were, without taking into account the fact that this fragmented state makes them slower to react in times of rapid innovation. That issue has now been brought into the spotlight by the pace of technological change we have experienced since the late 1990s and early 2000s, continuing today.



The World Wide Web was launched in 1991 and in the short time since then has completely revolutionized banking operations on a massive scale. The volumes of international transactions are ever increasing, but most banking processes, though automated, rely on outdated technology, which is prone to a high level of failures costing millions in revenue. The numerous systems employed make it easier for hackers to break in undetected and scour for vital information, which can be used for illegal purposes such as stealing account funds, data theft and/or further compromising the bank’s systems, making banks even more prone to losing the confidence of their customers.

These challenges have caught banks by surprise for several reasons, of which the most important are legacy system architecture and insufficient investment in new technology. Senior managers are very reluctant to replace legacy architecture, as they see it as an already functioning system. However, what they fail to see is that data becomes more fragmented due to the numerous manual processes, the old systems require a lot of man hours and support to function and, given the ever increasing demands for data, processing the information takes hours to execute even for the simplest of adjustments. Combine this with a labyrinth-like IT architecture and you have your biggest data integration nightmare. One system feeding two more, which then feed calculations into a third and on to a fourth system, is a very inefficient and time-consuming arrangement. Some companies have tried to address these issues by creating a centralized environment into which the data flows from all of the systems, usually referred to as a data dump. However, that process is not a remedy for all of the issues, because even current systems have problems when it comes to functionality, data handling, missing outputs, incorrect feeds and so on.

the way forward

So how should banks embrace technology and move forward? Firstly, by defining a clear-cut data flow architecture that does not merely replicate current models but aims to create a structure able to function in the future. Past solutions won’t be able to handle the vast amounts of information required not only for clients’ everyday banking needs but also for regulatory purposes, given that since 2008 there has been a wave of new regulatory frameworks such as BCBS 239, FRTB, MiFID II and IFRS 9. Secondly, by establishing robust data management frameworks that allow information to be processed in a timely and efficient manner, dictating who is responsible for what, how emerging issues should be addressed and how resolution protocols should be triggered. Thirdly, by making sure that the right culture is in place, irrespective of the current management structure, so that issues are not swept under the rug or left dormant for years while the infrastructure crumbles and chips away at efficiency and timely data processing.

Banks seem to be too stuck in the past. By being so, they are harming their image as a dynamic and innovative sector in which programmers and IT professionals can develop viable and reliable systems that help not only their customers but also their employees by increasing their value-added services.



conclusion

Finally, to address these challenges banks require strong foresight, firm decision-making and determination, as the results will take years to come into effect, and only the most persistent will succeed in an ever-changing future. This can be achieved by giving developers and other key stakeholders more freedom to think beyond current frameworks and build systems for the future that are scalable, reliable and produce information in a timely manner. Another area that can be explored is bringing together representatives from different teams to discuss freely the deficiencies of the current systems. Listing what the systems do well and where they fall short would be of great help to project managers and developers, enabling them to create more robust, time-dependent and scalable systems that last much longer.

author
Alexander Marinov
Alexander Marinov is a Market Risk Associate at Barclays Investment Bank. Mr. Marinov has been working in the financial services industry since 2013. Prior to joining Barclays he worked at BNY Mellon. Mr. Marinov has an MSc in Economics and International Financial Economics from the University of Warwick and a Bachelor’s degree in Economic and Social Studies from the University of Manchester. He has held the PRM designation since 2015.



calendar of events

Please join us for an upcoming training course, regional event, or chapter event, offered in locations around the world or virtually for your convenience.

PRM™ SCHEDULING WINDOW June 23 – September 14

FRTB’S INTERNAL MODEL APPROACH June 27 – Virtual Training

IMPROVING ORGANIZATIONAL CYBER RESILIENCE THROUGH ENGAGEMENT June 28 - Webinar

PRMIA TORONTO SOCIAL June 28 in Toronto

DEFAULT RISK CHARGE UNDER FRTB July 11 – Virtual Training

JULY SUMMER ROOFTOP SOCIAL July 11 in New York

FRTB & IMA’S MOST PERNICIOUS CHALLENGES, PART I July 18 – Virtual Training

FRTB & IMA’S MOST PERNICIOUS CHALLENGES, PART II July 25 – Virtual Training



FRTB & NEW REGIME FOR MODEL GOVERNANCE & RISK TRANSPARENCY August 1 – Virtual Training

FRTB IMPLEMENTATION TIMELINES AND MILESTONES August 8 – Virtual Training

PRM TESTING WINDOW August 20 – September 14

MARKET RISK MANAGEMENT UNDER BASEL III: SUPERVISORY IMPLEMENTATION August 29 – Virtual Training

BANKING DISRUPTED CONFERENCE September 12 – 13 in San Francisco

CANADIAN RISK FORUM November 12 – 14 in Toronto

EMEA RISK LEADER SUMMIT November 14 – 15 in London




