
INTELLIGENT RISK

knowledge for the PRMIA community

November 2023 ©2023 - All Rights Reserved Professional Risk Managers’ International Association


PROFESSIONAL RISK MANAGERS’ INTERNATIONAL ASSOCIATION

CONTENT EDITORS

Carl Densem, Risk Manager, Financial Markets, Rabobank
Steve Lindo, Principal, SRL Advisory Services and Lecturer at Columbia University

INSIDE THIS ISSUE

Editor introduction
The power of A.I. tools in climate disclosure - by David Carlin
The evolving CISO role: a technology leader’s perspective - by Jon RG Shende
Critical insights and surprising poll results shared at PRMIA Risk Leader Summit - by Bruce Fletcher
Opportunities and challenges in applying AI to risk management - by Malcolm Gloyer
Future impact of cyber risk on crypto currencies - by Dr. Sanjay Rout
Learnings from interconnected risks to prepare for climate risk - by Sonjai Kumar
Incorporation of data risk in the banking risk taxonomy - by Mayank Goel
The essence of cyber governance: biggest questions for board members - by Ming-Chang (Bright) Wu
Optimizing net present value using the risk-based cumulative construction methodology - by Teryukhov Viktor Evgenievich
Alternative interpretable machine learning models: an application to corporate probability of default and a benchmarking analysis - by Michael Jacobs, Jr.
Agile approaches to big data for risk managers - by Kelly Allin & Doug Downing
How a small oil nation is leading on ESG - by Dr. Aakash Ramchand Dil
Climate risk – fourth pillar in the making - by Venkat Srinivasan
Three themes that characterize trading in the energy markets today - by James Jockle
Organizational cybersecurity: do the basic things correctly - by Ted Belanoff

SPECIAL THANKS

Thanks to our sponsors, the exclusive content of Intelligent Risk is freely distributed worldwide. If you would like more information about sponsorship opportunities contact sponsorship@prmia.org.

FIND US ON

prmia.org/irisk
@prmia


editor introduction

Carl Densem, Editor, PRMIA
Steve Lindo, Editor, PRMIA

The capstone article of our November issue brings together two themes which have recurred throughout this year’s issues of Intelligent Risk, namely artificial intelligence and climate risk. This article, entitled “The Power of A.I. Tools in Climate Disclosure,” is authored by David Carlin, who has a unique grasp of these opportunities in his role as Head of Climate Risk and Head of the Task Force on Climate-Related Financial Disclosures (TCFD) for the UN Environment Program’s Finance Initiative (UNEP FI). Additional articles on climate risk and ESG featured in this issue include “Climate Risk – Fourth Pillar in the Making” by Venkat Srinivasan and “How A Small Oil Nation Is Leading on ESG” by Dr. Aakash Ramchand Dil. Additional articles on A.I. include “Opportunities and Challenges in Applying A.I. to Risk Management” by Malcolm Gloyer and “Alternative Interpretable Machine Learning Models” by Michael Jacobs. Rounding out our roster of articles in the November issue are highlights from PRMIA’s September 26 Risk Leader Summit contributed by Bruce Fletcher, PRMIA Board Chair, plus three articles on cyber risk management, two articles on data risk and four more articles on diverse topics. As usual, we are greatly impressed with the range, relevance and quality of these contributions, and express our thanks to all of this issue’s authors and peer reviewers for sharing their expertise with the PRMIA community.



CAPSTONE ARTICLE

Synopsis

Making far-reaching climate-related decisions relies upon climate-related financial disclosures which are clear, comparable and credible. This article examines how the growing availability of A.I. tools and techniques can be used to improve climate-related data availability, assess reporting quality and facilitate climate-related decision-making.

the power of A.I. tools in climate disclosure

by David Carlin

why climate-related financial disclosures matter

Amid a record-breaking year for global temperatures and climate impacts, societies around the world are grappling with the consequences of climate change and the shifts needed to secure a sustainable future. Current market dynamics have played a role in creating this dire state, and continuing with business as usual is no longer an option. As Lord Nicholas Stern, former chief economist of the World Bank and author of the landmark review of the economics of climate change, put it in 2011: “hundreds of millions of people could be threatened with hunger, water shortages, and severe economic deprivation. Climate change is the greatest market failure the world has ever seen1.”

To combat that market failure, capital allocators such as governments and private actors need the right information about climate change and the low-carbon transition. In 2017, the Financial Stability Board (FSB) of the G20 released the recommendations of its Task Force on Climate-related Financial Disclosures (TCFD)2. The TCFD featured eleven recommendations covering four pillars: governance, strategy, risk management, and metrics and targets. By setting out an expectation for climate-related financial disclosures, the TCFD actively sought to improve transparency and enhance information related to climate risks and opportunities. The underlying philosophy of the framework is that, with better information, markets will work more efficiently, playing a critical role in both reducing climate-related risks and accelerating sustainability.

For frameworks such as the TCFD (and successor initiatives such as the standards of the International Sustainability Standards Board) to succeed, the disclosures produced must be used to drive decision-making. However, for disclosures to be decision-useful, they must be clear, comparable, and credible.



Financial actors are confronting both physical climate impacts (physical risks) and the disruptions of the energy transition (transition risks). In the face of these challenges, recent breakthroughs in Artificial Intelligence (A.I.) offer potentially valuable solutions that can benefit both individual organizations and markets overall. This article discusses three of the critical ways in which A.I. can be leveraged by financial actors to enhance both their reporting efforts and their use of climate-related financial reports. Through case studies, it covers the application of A.I. to improve data availability, assess report quality, and facilitate decision-making.

improving data availability

Addressing financial institutions’ data challenges

Data accuracy and consistency are paramount for understanding the impacts of climate change, particularly its financial, economic, and societal consequences3. For the financial sector, data consistency and comparability are vital to assess financial stability risks, tackle climate-related challenges, and thrive in a transitioning low-carbon economy. Where gaps exist, the integration of A.I. into the climate data space can offer potential solutions to longstanding issues4.

Figure 1: The dimensions of data (NGFS, 2021)

Interpolation and extrapolation

When data is either unavailable or incomplete, A.I. can address these deficiencies through prediction. A.I. can be used to identify previously hidden patterns and relationships in datasets that improve the quality of extrapolation and interpolation. For example, in 2020, researchers utilized A.I. to determine temperature-influencing statistical weather relationships, allowing them to effectively estimate missing data points5. Even in cases where climate-related events lack a historical precedent, A.I. modelling can still provide valuable projections. For instance, A.I.-based reinforcement learning approaches can be applied to forecast future sea levels and other climate impacts under a range of scenarios; the insights generated can be of use to risk managers and policymakers alike6. Another application of A.I.-based extrapolation is the estimation of emissions for assets or SMEs whose reported data may not be readily available.
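A minimal sketch of this kind of gap-filling is shown below, using a regression model to impute one station’s missing temperature readings from neighbouring stations; the file name, column names, and choice of model are illustrative assumptions, not the methods of the cited studies:

```python
# Sketch: impute gaps in one station's temperature series from
# neighbouring stations. File and column names are hypothetical;
# the cited research uses far richer models and inputs.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("station_temps.csv")   # columns: station_a, station_b, station_c
known = df[df["station_a"].notna()]     # months where the target station reported
gaps = df[df["station_a"].isna()]       # months to impute (neighbours assumed complete)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(known[["station_b", "station_c"]], known["station_a"])

# Fill the gaps with predictions learned from the relationships above
df.loc[df["station_a"].isna(), "station_a"] = model.predict(
    gaps[["station_b", "station_c"]]
)
```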

Parsing vast quantities of data

A.I.’s ability to process data enables it to rapidly transform unstructured information into actionable insights. Highlighting this strength, a study utilized a machine learning algorithm for keyword discovery to analyse and categorize over 10,000 firms’ conference calls.



This exercise measured the adoption rate of climate-conscious policies, capturing nuances related to climate change-related opportunities, physical factors, and regulatory changes7. A.I. can not only help to reduce the time between data gathering and insight generation but can also accelerate the process of running iterative analyses and testing hypotheses.
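A stripped-down sketch of keyword-based transcript screening follows; the seed terms, transcripts, and TF-IDF scoring are illustrative assumptions, not the keyword-discovery algorithm of the cited study:

```python
# Sketch: score earnings-call transcripts for climate-related content
# against a seed keyword list. Terms and transcripts are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer

climate_terms = ["climate", "emissions", "carbon", "renewable",
                 "flood", "drought", "transition", "regulation"]

transcripts = {
    "firm_a": "Our carbon emissions targets and renewable capacity expanded...",
    "firm_b": "Quarterly revenue grew and operating margins improved...",
}

vec = TfidfVectorizer(vocabulary=climate_terms)
scores = vec.fit_transform(transcripts.values()).sum(axis=1)

for firm, score in zip(transcripts, scores.A1):
    print(f"{firm}: climate-exposure score {score:.3f}")
```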

Identifying climate-related risks

The complex landscape of climate-related risks and opportunities demands sophisticated tools for navigation. One of A.I.’s contributions to risk management is the evolution and refinement of Early Warning Systems (EWSs). Originally designed as comprehensive risk assessment frameworks, EWSs combine threat surveillance with predictive abilities, enabling organisations to preemptively identify and address the effects of impending threats8.

Figure 2: A.I. applications and opportunities in climate change9

Providing early warnings

In the dynamic space of global commerce, A.I. can support early warning systems (EWSs) that help mitigate asset damage and supply chain disruption. For example, Infosys’ Supply Chain Early Warning Solution combines internal supply chain data with publicly available datasets to assess risks to suppliers and regions10. These A.I.-powered systems not only boost proactive risk management but also stabilize revenue. Companies using EWSs can navigate volatile climate, economic, and geopolitical challenges more effectively, gaining a competitive edge11. Notably, according to recent analysis, early adopters of A.I.-assisted supply chain management have witnessed a decrease in logistics costs of 15% and a boost in service levels of 65%12.



assessing report quality

Assessing disclosure quality

For climate risk data and disclosures to advance sustainability, they need to be accepted as credible and useful. However, these disclosures often lack the level of detail needed by decision makers. As highlighted by the 2022 TCFD Status Report, many disclosures lacked the depth needed for stakeholders to make informed decisions. Although 80% of firms aligned with at least one recommended disclosure by 2021, only 4% adhered to all eleven recommended disclosures13. The diverse and often qualitative format of these reports makes it challenging to assess the disclosing firm’s climate readiness or compare it to peers.

Figure 3: Reporting Aligned with the 11 Recommended Disclosures (TCFD, 2022)

Enabling the rapid assessment of reports

With natural language processing (NLP) and machine learning, A.I. can analyse disclosures by extracting and evaluating information. Research by Julia Bingler and her colleagues used ClimateBERT, an A.I. model with a deep neural network, to evaluate climate-related financial information from 818 TCFD reports. Their research found that many reports lacked adequate depth in key areas; disclosures on strategy and on metrics and targets continue to lag14. Another practical application was seen in recent work by Ceres and the US National Association of Insurance Commissioners (NAIC) on insurance disclosures. Their approach incorporates a machine learning model, flagging reports that align with TCFD recommendations, and a rules-based text mining strategy that assesses the depth and comprehensiveness of these disclosures15. A.I. tools make it easier to assess the completeness and quality of reports, ensuring they are more consistent and aligned to established standards. As a result, A.I. can help regulators formulate data-driven enforcement actions and provide investors with a more consistent basis for comparison.
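With an off-the-shelf NLP library, a first pass at this kind of screening takes only a few lines. The sketch below uses the Hugging Face transformers pipeline; the checkpoint named is one of the publicly released ClimateBERT variants, and both the exact model name and its labels should be treated as assumptions to verify before use:

```python
# Sketch: flag which paragraphs of a report are climate-related.
# The checkpoint is an assumption (a published ClimateBERT variant);
# verify the exact model name and label set before relying on it.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="climatebert/distilroberta-base-climate-detector",
)

paragraphs = [
    "We assess physical risk to our coastal facilities under two warming scenarios.",
    "The board met four times during the financial year.",
]

for para, result in zip(paragraphs, classifier(paragraphs)):
    print(f"{result['label']} ({result['score']:.2f}): {para[:60]}")
```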



Evaluating compliance with sustainability standards

A.I. can be used to evaluate alignment with specific sustainability standards or taxonomies. A potentially promising application could be running A.I. over vast amounts of data and documents to check for alignment with specific criteria set out in standards like the EU’s Sustainable Finance Disclosure Regulation (SFDR). One could conceive of an A.I.-informed ‘credibility score’ for disclosures, trained on best practices and regulatory expectations, that would bolster stakeholder confidence. Such A.I. tools could also proactively suggest refinements or areas of focus to better align with regulatory standards or leading practices across the industry.

Assessing greenwashing

A.I. can also be leveraged to address greenwashing. For example, WWF’s innovative A.I.-backed tool, developed by Julia Bingler and her research colleagues, is designed to help financial regulators and investors evaluate the robustness and authenticity of corporate transition plans and pinpoint potential ‘greenwashing’ – when companies overstate their sustainability credentials16. A.I. can be a vital ally in screening for potentially misleading claims and statements, enabling supervisors, investors, policymakers and civil society to identify signs of greenwashing more effectively. In doing so, it helps promote accountability among companies and trust in reporting.

Reviewing transition plans

The emphasis on transition plans has grown as stakeholders move from asking firms whether they have set targets to demanding to see action in support of those targets. The UK’s Transition Plan Taskforce (TPT)17 was launched to develop a detailed standard for private sector climate transition plans. This work is highly necessary, as few current plans have the level of detail needed to evaluate a firm. Notably, a CDP report revealed that out of 4,100 organizations assessed, only 81 displayed comprehensive alignment with the key indicators of a credible climate transition strategy18. A.I. can be used to assess the quality of plans by identifying keywords that may raise red flags around credibility, ambition, or feasibility19 (a simple version of such screening is sketched below).

Figure 4: Elements and structure of the transition plan credibility, ambition and feasibility assessment20
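A rules-based version of such red-flag screening can be sketched in a few lines; the phrases and notes below are illustrative stand-ins, not the WWF indicator set:

```python
# Sketch: rule-based red-flag scan of transition plan text.
# Patterns and notes are illustrative, not the WWF indicators.
import re

RED_FLAGS = {
    r"\bcarbon[- ]neutral by 20[45]\d\b": "distant target; check interim milestones",
    r"\boffsets?\b": "possible heavy reliance on offsets",
    r"\bwhere (feasible|possible)\b": "hedged, non-committal language",
    r"\bscope 3\b.{0,40}\bexcluded\b": "scope 3 emissions excluded",
}

def scan(plan_text: str) -> list[str]:
    """Return the red-flag notes triggered by a transition plan."""
    return [note for pattern, note in RED_FLAGS.items()
            if re.search(pattern, plan_text, re.IGNORECASE)]

print(scan("We aim to be carbon neutral by 2050, using offsets where feasible."))
```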



facilitating decisions

Decision-making through A.I.-optimized disclosures

A disclosure’s value rests in its ability to guide decisions. To be effective, disclosures must offer tangible insights that assist financial actors and other stakeholders in making informed decisions about the respective company. As discussed, the quality of disclosures is often lacking, but A.I. can help overcome some of these limitations.

Reviewing past goals and predicting future success

A.I. models enriched by data from various sources can deduce unique insights for individual companies. For example, Intercontinental Exchange (ICE) is utilizing a decade of reported emissions data, combined with industry and company-specific details, to generate inferred emissions data, allowing assessments of whether a company’s decarbonization pathway may be feasible and consistent with its previous performance21. Such analysis will enable stakeholders to consider the role of past track records in informing the likelihood of future results.

Complementing and validating information with external sources

A.I. can enhance report accuracy by integrating external data like climate hazards, market trends, and supply chain metrics to highlight unseen risks or opportunities. For example, A.I. can be used to learn from high-resolution real-world data that can then be fed into physical hazard models22. Climate tool providers can then analyse physical risks based on asset locations, correlating extreme events with relevant data from flood maps and wind zones23. By providing complementary data to climate models, A.I. can help these models assess performance against external data sources and improve their projections over time.

Figure 5: An example of location-based climate risk analysis with peril assessment (XDI, 2023)



Visualizing data and communicating key findings

As noted above, A.I. is well-equipped to translate vast amounts of data into meaningful insights. This can be extended to creating visualizations, dashboards, and other communication aids to mobilize leadership and organizations. A.I.-driven climate solutions now empower companies with interactive, bespoke data dashboards that cater to their specific requirements. Jupiter Intelligence, for example, provides a dashboard for hazard scores at a single location in a portfolio. It not only maps out the risk level of each asset but also allows comparison across locations and time periods24. This use case of A.I. facilitates decision-making by making the process more straightforward and intuitive.

Figure 6: Hazard score visualization from Jupiter Intelligence (Jupiter, 2023)

conclusion

Research across multiple sectors shows how A.I. can enable climate-related advancements that lead to higher profits and lower emissions. In transportation, A.I. supports decarbonization through planning and automation, especially in shared mobility, EVs, and smart public transit. In industrials, A.I. can be leveraged in optimising the composition of materials to construct less expensive and lower-carbon products. In finance, A.I. assists in monitoring climate media coverage and forecasting carbon prices on emission exchanges25, which can inform investment selection.



By leveraging the power of A.I., decision-makers are better equipped to make choices that not only mitigate climate risks but also identify the most promising opportunities. A.I. has the potential to transform climate-related financial reports into transparent and actionable resources, rather than just compliance requirements. A.I.-based approaches can assist financial actors in addressing data gaps, improving report quality, and making decision-useful insights more accessible. These benefits will support a more efficient allocation of capital, greater financial stability, and an acceleration of a sustainable and resilient future.

references

1. MIT Technology Review, 2011, Interview by David Rotman
2. Task Force on Climate-Related Financial Disclosures
3. IMF, 2022, Leigh & Kroese, Bridging Data Gaps Can Help Tackle the Climate Crisis
4. NGFS, 2021, Progress report on bridging data gaps
5. Nature Geoscience, 2020, Kadow, Hall & Ulbrich, Artificial intelligence reconstructs missing climate information
6. NGFS, 2021, Progress report on bridging data gaps
7. SSRN, 2020, Sautner, van Lent, Vilkov & Zhang, Firm-level Climate Change Exposure
8. World Meteorological Association, 2022
9. World Economic Forum, 2018, Harnessing Artificial Intelligence for the Earth
10. Infosys, 2021, Supply chain interventions and early warning solution
11. World Meteorological Association, 2023, Early Warnings For All Initiative scaled up into action on the ground
12. VentureBeat, 2023, Fletcher, How AI can mitigate supply chain issues
13. TCFD, 2022, Status Report
14. ScienceDirect, 2022, Bingler, Kraus, Leippold & Webersinke, Cheap talk and cherry-picking: What ClimateBert has to say on corporate climate risk disclosures
15. Ceres, 2023, Climate Risk Management in the U.S. Insurance Sector
16. World Wildlife Fund, 2023, Net Zero Transition Plans: Red Flag Indicators to Assess Inconsistencies and Greenwashing
17. Transition Plan Taskforce
18. CDP, 2023, Developing and delivering a credible climate transition plan: how our accredited solutions providers can support you
19. World Wildlife Fund, 2023, Net Zero Transition Plans: Red Flag Indicators to Assess Inconsistencies and Greenwashing
20. World Wildlife Fund, 2023, Net Zero Transition Plans: Red Flag Indicators to Assess Inconsistencies and Greenwashing
21. ICE Climate Data
22. NPJ, 2023, Jones et al, AI for climate impacts: applications in flood risk
23. XDI, 2023
24. Jupiter Intelligence, 2023, ClimateScore Global
25. ACM, 2022, Rolnick et al, Tackling Climate Change with Machine Learning



peer-reviewed by

Steve Lindo

author

David Carlin

David Carlin is an acknowledged authority on climate change and sustainability. He is Head of Climate Risk and TCFD for the UN Environment Program’s Finance Initiative (UNEP FI). He is also the founder of Cambium Global Solutions, an advisor to governments, corporates, and financial institutions on climate and sustainability. He has authored numerous reports that provide practical tools and guidance to firms looking to thrive in a changing world. David advises UNEP FI’s TNFD pilot program on nature and biodiversity related risks and the Net-Zero Banking Alliance (NZBA). In addition, he is a contributor to Forbes and a Senior Associate at Cambridge’s Institute for Sustainability Leadership (CISL).



Synopsis

The role of a Chief Information Security Officer today is manifold but rarely dull. As well as broadening their skillset from deep technical expertise to strategic and business operations expertise, the CISO needs to overcome a reputational hurdle: being the “lockdown police.” The author walks through challenges, ways these can be overcome (with metrics), and looks ahead to what is on the horizon.

the evolving CISO role: a technology leader’s perspective

by Jon RG Shende

introduction

Today’s Chief Information Security Officers (CISOs) need a good balance of business operations and strategy experience in addition to their expertise in IT security. They should take a ‘security is a business function’ approach to IT security and risk management, whereby it is part of everyone’s role. The CISO office has attracted a reputation as the “lockdown police” from product, IT, and operational teams. Instead, it must shift to being an advocate for better security and a good business partner while keeping in mind strong risk management fundamentals. This way, the CISO can minimize security impacts to business operations while defending the organization. These pressures on the CISO require a structured approach and focus to overcome; meanwhile, new threats are on the horizon, including from AI.

laying the foundations

IT security has to evolve beyond a “lockdown” mentality to a focus on identifying and managing risks with available tools and platforms. Solid risk management policies and processes will make cost-of-risk calculations much more manageable and allow a better understanding of cyber risk and the expenses associated with cyber-attacks. These expenses can include the assessed cost of remediation compared to the cost of accepting the risk, and the costs, resources, and effort required to implement measures to manage or mitigate technology risks within an organization.



From a risk management perspective, cost overruns and emerging threats are critical for technology leaders to understand and manage. Each is described below with guidance on metrics to keep tabs on the trajectory of these common risks.

expense 1: cost overruns

Cost overrun refers to the unexpected increase in costs incurred by an organization beyond the budgeted amount, which can directly affect the profitability and financial stability of the organization. As technology leaders, we must incorporate robust risk assessment mechanisms to identify potential triggers for cost overruns, such as scope creep, mismanagement, and unforeseen complications. It is also crucial to employ effective cost management strategies like realistic budgeting, meticulous project management, and regular cost monitoring to mitigate the risks associated with cost overruns.

Cost overrun metrics

• Budget Variance: Monitoring discrepancies between the planned budget and actual expenses.
• Scope Management: Strict adherence to the defined project scope, with thorough analysis and documentation of any changes.
• Cost Audits: Regular audits to monitor and control the costs incurred.
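Budget variance in particular is straightforward to track programmatically; the sketch below computes it per project, with all figures purely illustrative:

```python
# Sketch: budget variance across security projects. Figures illustrative.
projects = {
    "iam_rollout": {"budget": 250_000, "actual": 310_000},
    "edr_refresh": {"budget": 120_000, "actual": 115_000},
    "soc_tooling": {"budget": 400_000, "actual": 445_000},
}

for name, p in projects.items():
    variance = p["actual"] - p["budget"]    # positive => overrun
    pct = 100 * variance / p["budget"]
    status = "OVERRUN" if variance > 0 else "within budget"
    print(f"{name:<12} {variance:>+9,} ({pct:+.1f}%) {status}")
```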

expense 2: emerging threats

Emerging threats refer to new and evolving malicious activities or unforeseen events that can compromise the security and integrity of the organization’s digital assets and operations. Identifying emerging threats means maintaining an informed perspective on the global threat landscape. Employing proactive defensive mechanisms helps organizations stay abreast of the latest cybersecurity developments, enabling the swift incident response plans that are vital in managing risks from emerging threats. For example, newly developed ransomware or zero-day exploits can pose significant threats to organizations, disrupting their services and compromising customer data.


Emerging threat metrics

• Threat Intelligence: Update threat intelligence feeds and use advanced analytics to predict and identify new threats.
• Patch Management: Ensure all systems are updated promptly, with critical patches applied within 24 hours of release.
• User Behavior Analytics: Monitor user behavior to detect anomalies indicating a security threat, with real-time alerts for high-risk activities.
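The patch management target above reduces to a simple compliance calculation; the sketch below measures the share of critical patches applied within the 24-hour window, using illustrative timestamps:

```python
# Sketch: share of critical patches applied within the 24-hour target.
# Timestamps are illustrative; in practice these come from patch logs.
from datetime import datetime

patches = [  # (released, applied)
    (datetime(2023, 10, 2, 9, 0), datetime(2023, 10, 2, 20, 0)),
    (datetime(2023, 10, 9, 14, 0), datetime(2023, 10, 11, 8, 0)),
    (datetime(2023, 10, 20, 6, 0), datetime(2023, 10, 20, 23, 0)),
]

within_sla = sum(
    (applied - released).total_seconds() <= 24 * 3600
    for released, applied in patches
)
print(f"Critical patch SLA compliance: {100 * within_sla / len(patches):.0f}%")
```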


expense 3: technology environments

There is an inherent risk to an organization from excessive tools in operational environments. It is essential to weigh risk and reward when considering new tools, to prevent the redundancies, escalating expenses, and unnecessary service fees that oftentimes accompany new vendor onboarding and can result in tool creep. Moreover, adding to the number of third-party vendors opens the pathway to attacks, as seen in the Target breach almost a decade ago.

Technology environments metrics

• Latency: High latency can indicate performance issues.
• Error Rate: A high error rate may indicate instability or issues.
• Availability/Uptime: Measure tool availability over time.
• Scalability: Metrics can help predict when to upgrade or replace a tool.
• Security Metrics: Evaluate the tool’s security posture over time.
• User Satisfaction: User feedback and satisfaction scores gauge usability and effectiveness.
• Total Cost of Ownership (TCO): Evaluate the overall cost of implementing and maintaining the tool, including licensing, support, and operations.
• Interoperability: Assess how well the tool integrates with other systems.
• Compliance Metrics: Ensure the tool complies with regulations or standards.
• Version and Patch Management: Track the tool version and frequency of updates and patches to ensure it remains up-to-date and secure.



cyber risk: identification to mitigation

Cyber risk, defined as the identification of threats that impact the confidentiality, integrity, and availability of an organization’s technology assets, from information systems, networks, and data to the cloud, is critical to any organization’s risk management strategy. Some steps typically taken as we move to identify and remediate or manage risks are as follows:

• Conduct a risk assessment
• Identify potential threats
• Evaluate vulnerabilities
• Prioritize risks based on cost analysis, from impact to remediation
• Develop a risk management plan
• Review and update regularly

Every one of us, from IT Audit to Security Operations to Risk Management, has had to assess cyber risk and discuss controls around some of these risks. The critical thing about controls, as we develop and implement them, is to keep in mind that the effectiveness of controls and their testing depends on the risks and threats an organization faces. As we start to assess the probability and impact of a cyber-attack, evaluating the potential severity of the impact using qualitative assessments and scenario-based analysis, including threat modeling, will provide us with realistic datasets that can be used in conversations with a CIO, CFO and CEO. Analysis and data collected from assessments provide CISOs with critical insights needed to plan for risk responses and budgets.

Part of these executive conversations should touch on the controls that will have to be improved or implemented, such as technical controls (e.g., firewalls, antivirus, endpoint detection and response, logging, monitoring and encryption), administrative controls (e.g., training and awareness, policies and procedures), and physical controls (e.g., security cameras and badged access controls).

Organizations can also leverage tools such as the MITRE ATT&CK framework to evaluate their existing security controls. This works by mapping security controls to the attacker techniques catalogued in the framework. For example, mapping endpoint protection capabilities to those techniques identifies gaps in an organization’s defenses. Metrics such as the percentage of security controls mapped to the framework and the number of gaps identified can be used to measure the effectiveness of an organization’s security controls (a toy version of this calculation is sketched below). Finally, leaders will also have to plan for costs around continual monitoring and review of control effectiveness to ensure controls are working as intended.
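The coverage metric just described reduces to a set comparison between the techniques in scope and the techniques your controls address; the sketch below is a toy version in which the technique list and control mapping are illustrative, not the full ATT&CK matrix:

```python
# Sketch: map controls to ATT&CK technique IDs and measure coverage.
# The in-scope techniques and control mapping are illustrative.
techniques_in_scope = {"T1059", "T1566", "T1486", "T1078", "T1021"}

control_map = {
    "endpoint_detection": {"T1059", "T1486"},
    "email_gateway": {"T1566"},
    "mfa": {"T1078"},
}

covered = set().union(*control_map.values())
gaps = techniques_in_scope - covered

coverage = 100 * len(covered & techniques_in_scope) / len(techniques_in_scope)
print(f"Coverage: {coverage:.0f}%")
print(f"Gaps: {sorted(gaps)}")  # e.g. T1021 (remote services) uncovered
```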



on the horizon

Over the last few years, we have seen the rise of ransomware, APTs (advanced persistent threats), and threats from natural disasters. We now have to contend with the ever-increasing threats from AI and machine learning-generated attacks. These newer “living threats” present a unique challenge for threat protection systems, which are often unable to keep pace with their constantly evolving tactics, techniques, and procedures.

conclusion

CISOs are managing their own reputation within the organization while trying to embed security awareness broadly. They need a strategic perspective to go with the team’s technical know-how. They must control the most common expenses, preferably by instituting metrics and monitoring, while delivering value for the costs they incur. The asks on today’s CISOs are many: a constantly evolving and shifting threat landscape, the emergence of AI-crafted attacks, new and updated regulatory requirements, data privacy obligations, the need to understand more of the business’s functions, and pressure to optimize operational budgets.

peer-reviewed by

Carl Densem

author

Jon RG Shende, MSc FBCS CITP CISM, Gagan Satyaketu

Jon Shende is a Business Technology Leader with over two decades of experience spanning multiple verticals and roles. Starting in the oil and gas business working with SCADA systems, then moving into network operations, cloud, and security product development, he has seen the evolution of technologies to where we are today, in roles ranging from Product Security, to Head of Digital Engineering within AWS and Azure, and Corporate Security as a CISO. He is a graduate of The University of Oxford’s advanced computing program and earned a Master’s Degree in IT Security, Digital Forensics and Computer Crime Law from Royal Holloway, University of London. He holds the Chartered IT Professional designation and is a Fellow of the British Computer Society (BCS), The Chartered Institute for IT, along with the CISM certification and several vendor technology certifications.



Synopsis

PRMIA’s European Risk Leader Summit was held in London on September 26th and attracted an audience of senior risk professionals and executives. Alongside keynote addresses on geopolitical risks and the economic outlook, insights from audience polling demonstrated lower credit and liquidity concerns, the need for better emerging risk management and heightened interest in operational risks, notably cyberattacks and AI.

critical insights and surprising poll results shared at PRMIA Risk Leader Summit

by Bruce Fletcher

Many insights were revealed at the 8th Annual European Risk Leader Summit in London on 26 September. More than 80 senior risk professionals gathered, including more than 20 Chief Risk Officers, who shared their perspectives and best practices across a range of pressing risk management topics. The event was organized by the London Chapter of PRMIA, led by Rustum Bharucha. PRMIA is a member-led not-for-profit organization which operates worldwide to support risk managers through networking and training opportunities. During the event, PRMIA CEO Justin McCarthy and Chairman Bruce Fletcher outlined the new suite of refreshed and improved training programs and modules available to members and other risk professionals. Key take-aways from the day were reviewed at the conclusion of the program and are outlined at the end of this article.

The program included two keynote addresses, one from Scott Livingstone, Special Advisor to NatWest, on geopolitical risks, and the other from Alexander Plekhanov of the European Bank for Reconstruction and Development on the economic outlook. Panels and presentations included deep-dive discussions on credit and funding risks, risk culture, digital assets, climate transition risks, ERM leadership, and AI. The tone was set by a Horizon Scanning session moderated by Bruce Fletcher. CROs Richard Blackburn of HSBC Europe/Global Banking and Markets and Phillip Best of asset manager Evelyn Partners joined Raj Singh, a Non-Executive Director at Vanguard and Allied Irish Bank, for this session. Raj brought in Board imperatives as well as international markets and insurance industry insights. Audience polling during the session provided further material for examination, with some surprising results.



While economic and geopolitical risks were identified by 73% of the audience as the most significant risk they are worried about over a 2-year horizon, credit risks ranked only 4th on their list of concerns. Participants, who were largely from banks and asset managers, believed that lessons had been learned since the Global Financial Crisis and that credit risks to institutions, corporations, SMEs and consumers were in good condition in general. 57% of those polled believed that credit performance would get somewhat worse, although 22% believed it would be about the same and 22% somewhat better. However, portfolios are not without risks in sectors such as leveraged finance, commercial real estate, and unsecured consumer credit. It was believed that the more significant aspects of these risks lie outside of regulated banks, in the private credit and non-bank space.

Also surprising to some was that funding and liquidity risks were only 6th on the list of concerns, despite the continued depletion of consumer and corporate COVID savings and the attraction of higher-return opportunities for depositors, which has reduced the level of deposits in the banking system.

Unsurprisingly, continuing the trend of the past 10 years, operational risks ranked high and came in as the 2nd most significant concern, with regulatory compliance risks and fraud 3rd. The top 3 operational risks identified by the audience were discussed as being interrelated: technology disruption, information loss/data, and operational resilience. The interaction between elements such as AI, use of 3rd parties, and more sophisticated cyber attacks was discussed; specific approaches to manage AI risks were outlined.

Polling questions also identified that while participants believed that risk management in the area of emerging risks has improved, more work is still needed. While 45% believed that they have gotten either ‘Better’ or ‘Significantly Better’ at identifying and managing emerging risks, more than half believed their capabilities were about the same. In the panel on Emerging Risks, it was discussed that risks are becoming more interconnected and are emerging with greater speed and complexity, which means further improvement is required.

During the Enterprise-Wide Risk Management (ERM) session, polling results showed that the use of an ERM approach was rated either ‘Strong’ or ‘Comprehensive’ by 50% of participants. However, 41% described their management of risks as only ‘Partial’, as executives and non-executives understood some risk elements but were not fully understanding and managing all risk areas to ensure proper deployment of resources. Fortunately, only 9% described their ERM usage as ‘Limited’, where risks were often only understood in a siloed and not comprehensive way. This represents an opportunity where risk leaders, with the help of PRMIA, can provide further training and support.



Bruce Fletcher concluded the day by summarising 7 key themes:

1. There were many examples during the Summit where risk management at financial institutions has continued to improve, especially in the areas of credit risk, model risk, culture measurement and risk management frameworks.
2. However, more improvement is still needed, as poll results largely indicate that risk management in the industry is, on average, ‘good’ but not ‘great’ in all areas. Special focus on emerging risk management was urged, as well as on strengthening holistic ERM.
3. Regarding the role of the Risk function, it is clear that risk management continues to embed well in the 1st Line of Defence, indicating the success of the direction of travel in recent years where ‘we are all risk managers’.
4. The session on culture change demonstrated that culture can and must drive proper risk management. It is not an esoteric concept: it can be measured, changed with specific interventions, and therefore properly managed.
5. There are still some significant financial risks that the industry must manage, including the post-peak inflation consequences for clients, the impact of higher-for-longer rates on asset valuations and markets, and the need to better understand and manage the behaviour of depositors. Non-financial risks in the areas of AI, 3rd-party suppliers and new conduct regulations also require additional focus.
6. Looking at these risks in a nuanced, segmented and detailed way is critical to understanding and managing them. It is not an all-or-nothing proposition: while certain risks must be avoided altogether, many of these risks are unavoidable or must be taken to satisfy a firm’s objectives.
7. The level of networking and information sharing that was evident during the day was noted as very important to each firm’s success. Great risk managers need to always be looking outwards, and attendance at events like this and PRMIA membership are an easy way to accomplish this.

Download the full package of polling results.

author Bruce Fletcher, PRMIA Board Chair



Synopsis

AI is no longer restricted to centres of methodological excellence with extensive computing and data processing resources. In this article the author discusses where AI is being applied by risk managers to solve complex problems, and where it could be applied in the near future. In doing so, risk managers will need to upgrade their skills and work alongside new partners in the organization. An example is provided of how backpropagation neural networks, a near 40-year-old machine learning tool, can be used by risk managers to better understand non-linear relationships between luxury asset, GDP and DJIA data sets. The case study uses rare whisky bottle returns as the dependent variable.

opportunities and challenges in applying AI to risk management

by Malcolm Gloyer

introduction

Following the publication of their pathbreaking paper introducing backpropagation (the basis of neural net training) in 1986 (Rumelhart et al., 1986), the authors had to wait until the market penetration of online retail, search and community reached near saturation before Artificial Intelligence’s (AI) contribution to modern society was fully recognised. Now AI appears to be everywhere, from advertising to robotics. AI is used in drug classification for pharmaceutical research, pattern recognition for FX trading and electronic sensory development, as well as DeepMind research where the AI software runs through hundreds of decisions until it learns to predict an effective solution.

AI supports developments in healthcare, life sciences, cryptocurrency “mining” and self-driving cars using a kind of machine learning called neural networks, in which a computer creates a network to recognise patterns using the learning data set and then makes a decision based on this network. Plant Jammer, for example, is an app that creates recipes based on available food ingredients and personal preferences by searching a database of recipes to find often-paired items. Using AI to find new combinations of flavours for cupcakes and cocktails allowed UK-based media agency Tiny Giant to differentiate itself from its peers. Generative or conversational AI enables apps like ChatGPT to assist students with essay writing as well as potentially offer businesses customer service efficiencies and healthcare alternatives to talking therapy.



Tesla Dojo is a supercomputer designed and built by Tesla for computer vision video processing and recognition that will be used for training Tesla’s machine learning models to improve its Full Self-Driving (FSD) advanced driver-assistance system. According to Tesla, it went into production in July 2023. With such widespread value already shown by AI, what are the challenges and opportunities for the risk management industry as it transitions and adopts AI techniques and technologies?

challenges

First is the challenge of AI inexperience among risk managers, which can manifest as difficulty using AI to draw conclusions from large data sets. Working with data, as anyone in the field knows, is a fast-rising requirement at all levels of risk management. Comfort with an array of tools, techniques, visualizations and automation methods is becoming a staple for risk roles, soon to be joined by AI. Luckily, it need not take years of study to begin using AI in risk management applications. As we show in the example later, basic risk management concepts can quickly be applied and extended by using AI methods. The lack of AI understanding can also be addressed through experimentation, attending training courses or pairing risk and data science project teams. Enabling risk to talk the same language as AI practitioners is often the first step to breaking new ground.

opportunities

AI not only improves on traditional techniques but can simplify the process. Just think of AI-based code generation tools used to skip over the cumbersome code-writing step in heavily quantitative models. Simplification enables new development opportunities, including developing AI neural networks, using the labyrinth of models available in Python, into valuation apps that allow investors with a demonstrated dependence on economic metrics to consider non-linear valuations of assets that are less dependent on historical time-series data than linear models. Other development candidates include the use of AI neural networks to enhance the valuation of complex derivatives like housebuilder options on land for future development. Similarly, Modern Portfolio Theory could be enhanced by including a non-linear version of Markowitz’s Efficient Frontier. Machine learning can be applied to the calculation of Value-at-Risk (VaR) and Potential Future Exposure (PFE), including neural networks for directional prediction in market risk and credit risk, thereby optimising investment banks’ use of capital. By doing so, the financial markets may avoid future market disruption resulting from risk assessments that depend solely on linear models.



Linear catastrophe modelling by insurance and re-insurance companies would also be enhanced by including non-linear neural networks to determine more accurate risk management and pricing strategies while better ensuring that individual companies are resilient enough to withstand major disasters affecting their insured assets. Classification neural networks can pre-select suitable hedging trade candidates to offset regulated metrics like EMIR’s Gross Notional Value or IFRS 9.

conclusion

AI is not yet prevalent in the daily lives of risk managers, but it soon will be. The scope of applications is vast and, once AI understanding is developed, simpler than traditional techniques. For risk managers looking to shorten the learning curve, there are available resources and often practitioners at your firm to learn from.

appendix: applying AI in portfolio diversification example

We provide a typical example of portfolio diversification by demonstrating the relationship between luxury asset returns and traditional financial asset returns. The analysis uses monthly changes from 2013 to 2020 in the Whiskystats Whisky Index (WWI)1, US Gross Domestic Product (GDP)2 and the Dow Jones Industrial Average (DJIA)3:

Figure 1: Monthly Percentage Changes in WWI, GDP and DJIA

1 / https://www.whiskystats.com/
2 / https://fred.stlouisfed.org/series/GDP
3 / https://www.wsj.com/market-data/quotes/index/DJIA



A key investment feature of rare whisky bottle returns appears to be their independence from financial market returns. The annualised standard deviation of monthly changes in the WWI from 2013 to 2020 is 10% compared to 6% for US Gross Domestic Product (GDP) and 15% for the Dow Jones Industrial Average (DJIA). While monthly changes in GDP and DJIA have a correlation of 0.54 (quarterly changes have a correlation of 0.39), the correlation of monthly changes in WWI and GDP is 0.04 (quarterly changes have a correlation of -0.13), the correlation of monthly changes in WWI and DJIA is 0.12 (quarterly changes have a correlation of -0.14), and the correlation of monthly changes in WWI and one month lagged DJIA is 0.04 (quarterly changes also have a correlation of 0.04). Similar results were obtained using ranked monthly changes to indices and by varying time lags.

Figure 2: Monthly GDP vs DJIA Change

Figure 3: Monthly WWI vs DJIA Change
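The correlation and volatility figures above can be reproduced with a few lines of pandas; the file name and column labels below are assumptions about how the three series might be stored:

```python
# Sketch: correlations and annualised volatility of monthly index changes.
# File name and column labels are assumptions.
import pandas as pd

df = pd.read_csv("indices_monthly.csv", index_col="date", parse_dates=True)
returns = df[["WWI", "GDP", "DJIA"]].pct_change().dropna()

print(returns.corr())               # pairwise correlations of monthly changes
print(returns.std() * 12 ** 0.5)    # annualised standard deviations

# Lagged relationship: WWI against the previous month's DJIA change
print(returns["WWI"].corr(returns["DJIA"].shift(1)))
```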

python application of non-linear ML analysis

Python makes it straightforward to build an abridged backpropagation neural network; this non-linear machine learning analysis is consistent with the above correlation analysis when the data are separated into a model training set and a model test set in an attempt to predict WWI returns using GDP and DJIA returns.

Figure 4: Python Regression Code
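A comparable setup to the regression in Figure 4 can be sketched with scikit-learn’s MLPRegressor, a backpropagation-trained network; the data file, network shape, and train/test split below are assumptions rather than the author’s exact code:

```python
# Sketch: backpropagation network predicting WWI returns from GDP and
# DJIA returns, in the spirit of Figure 4. Data file, architecture,
# and split are assumptions, not the author's exact code.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

df = pd.read_csv("indices_monthly.csv")
returns = df[["WWI", "GDP", "DJIA"]].pct_change().dropna()

X, y = returns[["GDP", "DJIA"]], returns["WWI"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, shuffle=False  # hold out the later months as the test set
)

net = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

print(f"Test R2: {net.score(X_test, y_test):.2f}")  # low R2 => little dependence
```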



Figure 5: Python Regression Results

The coefficient R2 indicates dependence between variables with a score of 1.0 and independence with a score of 0 (it can also be negative, because the model can perform arbitrarily worse than simply predicting the mean). Having trained the model using most of the data and then applied it to the test set, the result is a low R2 score of 0.12. We conclude that there is little relationship between monthly changes in GDP, DJIA and WWI using either a linear correlation measure or a non-linear machine learning neural network, and therefore no evidence against the hypothesis that luxury asset returns are independent of GDP and DJIA returns.

references

Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1986). Learning representations by back-propagating errors. https://www.iro.umontreal.ca/~pift6266/A06/refs/backprop_old.pdf

peer-reviewed by

Carl Densem

author

Malcolm Gloyer

Malcolm Gloyer is a Chartered Member of the Chartered Institute for Securities and Investments and a Certified Practicing Project Manager (CPPM MAIPM) with more than 30 years’ experience working on projects in the UK and Australia, specializing in market risk, derivatives and commodities. Malcolm has worked as a consultant at companies including Bank of America Merrill Lynch, London Metal Exchange, Nomura, ABN Amro, EDF Trading, Santander and Lloyds Bank and has been a guest lecturer at several universities. Malcolm has had articles published in professional investment magazines and has written several eBooks.



Synopsis

Crypto currencies have emerged as a revolutionary financial technology, providing new opportunities for decentralized transactions and secure digital assets. However, as the popularity of crypto currencies continues to grow, so does the risk of cyber-attacks and security breaches. This article explores the future impact of cyber risk on crypto currencies, analysing potential vulnerabilities, assessing the evolving threat landscape, and discussing strategies to mitigate these risks. Through a scientific and analytical lens, this article aims to shed light on the challenges and opportunities that lie ahead in the realm of crypto currencies and cyber risk.

future impact of cyber risk on crypto currencies

by Dr. Sanjay Rout

introduction

Crypto currencies, such as Bitcoin and Ethereum, have gained significant attention and adoption in recent years. Their decentralized nature, transparency, and potential for financial inclusion have attracted individuals, businesses, and investors worldwide. However, as the value and importance of crypto currencies increase, so does the interest of malicious actors seeking to exploit vulnerabilities and undermine the security of these digital assets.

1. Vulnerabilities in Crypto currency Infrastructure

The infrastructure supporting crypto currencies is not immune to vulnerabilities. Smart contract vulnerabilities, exchange hacks, wallet breaches, and vulnerabilities in underlying blockchain technology pose significant risks to the security and integrity of crypto currencies. These vulnerabilities, if left unaddressed, can have far-reaching consequences for the future of crypto currencies.



2. Evolving Threats

As crypto currencies evolve, so do the tactics and techniques employed by cybercriminals. Advanced persistent threats, nation-state attacks, and sophisticated hacking groups pose significant challenges to the security of crypto currencies. The future will witness more targeted and sophisticated attacks, exploiting weaknesses in infrastructure, networks, and human factors.

mitigating cyber risks

To mitigate cyber risks in the crypto currency ecosystem, a proactive and multi-faceted approach is required. This includes:

1. Strengthening Security Measures: Crypto currency developers, exchanges, and wallet providers must continually enhance security measures, including robust encryption, multi-factor authentication, and regular security audits.
2. Education and Awareness: Users should be educated about best practices in securing their crypto currencies, including the importance of strong passwords, phishing awareness, and the use of hardware wallets for storing digital assets securely.
3. Regulatory Frameworks: Governments and regulatory bodies need to develop comprehensive frameworks that address cyber security concerns in the crypto currency industry. These frameworks should encourage adherence to best practices, transparency, and accountability among crypto currency service providers.
4. Collaboration and Information Sharing: Industry collaboration, sharing of threat intelligence, and coordinated responses are essential to staying ahead of emerging cyber threats. Partnerships between crypto currency organizations, cyber security firms, and law enforcement agencies can help in the timely identification and mitigation of risks.

future outlook

As cyber risks evolve, so must the cyber security measures employed in the crypto currency ecosystem. Innovations such as decentralized identity solutions, secure hardware wallets, and privacy-enhancing technologies can enhance the security and resilience of crypto currencies against emerging threats. The future impact of cyber risk on crypto currencies depends on the collective efforts of stakeholders to address vulnerabilities and strengthen security measures. While challenges persist, advancements in cyber security technologies and practices offer hope for a more secure and resilient crypto currency ecosystem. It is evident that the future impact of cyber risk on crypto currencies will have far-reaching implications for individuals, businesses, and the broader financial ecosystem.



As the value and adoption of crypto currencies increase, the incentive for cybercriminals to target these digital assets will also rise. Therefore, it is imperative for stakeholders to remain vigilant and proactive in addressing cyber risks and enhancing security measures. By strengthening security measures, investing in robust encryption, multi-factor authentication, and regular security audits, the crypto currency community can build a solid foundation of trust and confidence. Additionally, educating users about best practices in securing their digital assets and promoting awareness about potential cyber threats can empower individuals to take proactive steps to protect their crypto currencies.

Regulatory frameworks play a crucial role in creating a secure environment for crypto currencies to thrive. Governments and regulatory bodies need to collaborate with industry experts to develop comprehensive frameworks that address cyber security concerns. These frameworks should emphasize transparency, accountability, and adherence to best practices among crypto currency service providers. By establishing clear guidelines and regulations, regulators can foster an environment that promotes responsible behavior and mitigates potential risks.

Collaboration and information sharing among industry stakeholders, cyber security firms, and law enforcement agencies are paramount in combating emerging cyber threats. By sharing threat intelligence, responding to incidents in a coordinated manner, and staying updated on the latest trends in cybercrime, the crypto currency community can effectively stay one step ahead of malicious actors.

Furthermore, innovations in cyber security technologies and practices hold significant promise in fortifying the security of crypto currencies. Decentralized identity solutions, secure hardware wallets, and privacy-enhancing technologies are just a few examples of innovations that can bolster the resilience of crypto currencies against evolving cyber risks. Embracing these innovations and investing in research and development will be crucial in staying ahead of the curve.

conclusion

As crypto currencies continue to shape the future of finance, the impact of cyber risk cannot be overlooked. The evolving threat landscape requires continuous vigilance and proactive measures to safeguard the integrity and security of crypto currencies. The future impact of cyber risk on crypto currencies is a complex and multifaceted challenge, but with a collective and concerted effort from stakeholders across the ecosystem it can be navigated successfully. By addressing vulnerabilities, implementing robust security measures, promoting education and awareness, fostering collaboration, and embracing innovative cyber security solutions, we can pave the way for a secure and resilient future for crypto currencies.



With these measures in place, crypto currencies can continue to flourish as a transformative financial technology, empowering individuals and businesses while safeguarding against the ever-present cyber risks.

peer-reviewed by

Carl Densem

author

Professor (Dr.) Sanjay Rout

Professor Sanjay Rout is an eminent researcher, coach, legal expert and journalist. He is currently MD of KPAR Business Consulting, a futuristic advisory firm revolutionizing the world of consulting and research services. With expertise in public policy, law, public finance, development, impact research, technology growth, mergers and acquisitions, and management consulting, KPAR offers tailored solutions to help organizations thrive globally. He has two decades of experience in various global projects on futurism, technology, business, management and growth.



Synopsis

Interconnected risks are difficult to recognize before the initial event but are apparent with hindsight. While risk managers can only hope to become better at spotting warning signs, they can use previous interconnected risks to their advantage in the form of scenario analysis. Constructing scenarios where multiple risks occur helps organizations prepare for the next big event, such as climate change, and understand its impacts.

learnings from interconnected risks to prepare for climate risk

by Sonjai Kumar

introduction

The 2008 financial crisis. COVID-19. The Silicon Valley Bank debacle. Interconnected risks can be catastrophic. As these examples show, the crystallization of one risk can lead to multiple others. The main risk can have a cascading effect on the organization, and it may not be clear which will have the greatest impact: the main risk, the resultant risks, or a combination of all. Most organizations performing risk management, whether integrated or in silos, plan for a single risk. Even stress tests are performed on one risk at a time and, often, catastrophic events are not caught. There is no doubt that it is challenging to spot interconnected risks, since historical data is not always available to calculate correlations; still, initial warning signs are usually visible, and qualitative judgment can be applied. Because of the challenges of spotting interconnected risks, their mitigation becomes difficult, and an organization may easily find itself in crisis management mode.

interconnections in action: COVID-19

Take the example of the COVID-19 pandemic in late 2019. The novel coronavirus hit one country and rapidly spread to the rest of the world. The virus, in many cases, caused symptoms that included difficulty breathing, and brought a higher risk of hospitalization and, sometimes, loss of life. The global spread of the virus was unanticipated, leading different governments to impose lockdowns to limit contagion. The lockdowns led to the loss of jobs and the closure of factories, which sparked economic downturn.



Hospitalization claims and related deaths adversely affected insurance and reinsurance companies. As many of the insurance companies had passed on high risks to reinsurance companies, reinsurers globally suffered heavy losses. Various central banks reduced policy interest rates to boost economic activity; however, the movement in interest rates created asset-liability mismatches for financial institutions. For insurance companies selling maturity guarantee products, low interest rates are a particular risk.

Figure 1: Interconnected Risks Example 1

The interconnectedness of risks can be seen: the coronavirus led to high insurance claims, lockdowns, job losses, economic downturn, and falling interest rates which, via their balance sheets, also negatively impacted insurance companies. Such emergence of risks from one main source can be studied through scenario analysis. Plausible scenarios should be constructed from current economic, demographic, and other factors, with stresses applied across an array of such scenarios to assess the potential impact on a company's finances.
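Such a multi-risk scenario lends itself to simple quantitative experimentation. The sketch below, in Python, applies joint pandemic-style shocks to a stylized insurer balance sheet; all balance sheet figures, shock sizes and the first-order duration approximation are illustrative assumptions rather than calibrated values.

```python
# Illustrative multi-risk scenario stress on a simplified life insurer.
# All figures and shock sizes are assumed for illustration only.

base = {
    "assets": 1_000.0,           # market value of assets
    "liabilities": 900.0,        # best-estimate liabilities
    "asset_duration": 6.0,       # years
    "liability_duration": 10.0,  # years; falling rates widen the ALM gap
    "expected_claims": 50.0,     # annual expected claims
}

EQUITY_SHARE = 0.20  # assumed share of assets held in equities

# Each scenario bundles interconnected shocks, as in the COVID-19 example:
# a claims spike arriving together with falling rates and an equity sell-off.
scenarios = {
    "baseline":          {"claims_mult": 1.0, "rate_change": 0.000,  "equity_shock": 0.00},
    "claims_only":       {"claims_mult": 2.5, "rate_change": 0.000,  "equity_shock": 0.00},
    "pandemic_combined": {"claims_mult": 2.5, "rate_change": -0.015, "equity_shock": -0.25},
}

def surplus_after(shock, b):
    """Approximate post-stress surplus using first-order duration sensitivity."""
    assets = b["assets"] * (1 + EQUITY_SHARE * shock["equity_shock"])
    assets *= 1 - b["asset_duration"] * shock["rate_change"]
    liabilities = b["liabilities"] * (1 - b["liability_duration"] * shock["rate_change"])
    excess_claims = b["expected_claims"] * (shock["claims_mult"] - 1.0)
    return assets - liabilities - excess_claims

for name, shock in scenarios.items():
    print(f"{name:18s} surplus = {surplus_after(shock, base):8.1f}")
```

Even in this toy setting, the combined scenario erodes far more surplus than the claims shock alone, which is precisely the point of stressing risks jointly rather than one at a time.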

the Russia-Ukraine war and SVB's demise

Certain risks should be known to the organization, but leadership either ignores them or is confident that the risks will not crystallize. Take the example of Silicon Valley Bank, which invested a high proportion of assets in government bonds when interest rates were low, knowing that high interest rates could cause severe mark-to-market losses. The Russia-Ukraine war contributed to increased inflation, which led to central banks raising rates, and triggered this exact outcome for Silicon Valley Bank. Other risks, like liquidity risk and customer withdrawals, crystallized in quick succession. In this situation, scenario analysis could have helped understand and predict the effect of interest rate stress.



Figure 2: Interconnected Risks Example 2

Financial institutions should be practical in creating scenarios that could potentially lead to the failure of their organization. Spotting these scenarios quickly when an event happens and appropriately timing intervention is key to successfully avoiding possible failure. Velocity of risk plays a critical role in risk crystallization, so timely action is important.

the case of climate risk

A forward-looking case for the use of scenario analysis in the current context is climate risk. What could be the possible scenarios resulting from an increase in global temperature by increments of 0.25 degrees Celsius, and the subsequent risks to human lives, agriculture, the economy, and the financial system? The impact of climate change on the job market could be over 900,000 lost job opportunities per year over the next 50 years in the US alone [1]. This can have a significant impact on the unemployment rate, resulting in negative consequences for lifestyle, mental health, and socioeconomic status. Further, the Swiss Re Institute anticipates rising temperatures will cut global GDP by up to 18% over the next 30 years [2]. This will have a significant impact on all of mankind.

We are short of our 2030 net-zero emissions targets, and experts are already speaking of "Global Boiling" rather than "Global Warming." Resultant rises in temperature may lead to prolonged excessive rain, impact daily human lives by creating shortages of food items, inflate prices, adversely affect agriculture, jeopardize banking systems, grow insurance claims, and spread waterborne disease, in addition to unknown combinations of issues. This is not an unrealistic scenario, as unfortunately demonstrated by the unprecedented flooding in many parts of Thailand from July 2011 to mid-January 2012 [3]. The flood was so heavy that 13.6 million people were impacted and 3.3 million structures (750,000 of them residential) sustained damage as 65 of 77 provinces were designated as flood disaster zones, totalling USD $45.4 billion in damages. The catastrophe in Thailand offers risk managers seeking climate risk-based scenario data a way of constructing realistic shocks. Recommended reading [4] on this disaster will help financial institutions build a full understanding, with the benefit of hindsight, of previously unforeseeable impacts.



conclusion

There are many interconnected relations, and resulting risks, that the world needs to be prepared for. Interconnected risks are very important, and past events such as the 2008 economic crisis, COVID-19, the failure of Silicon Valley Bank, and the floods in Thailand demonstrate that the causes of catastrophic events are correlated risks, not single linear risks. Interconnected risks generate resultant risks quickly, like nuclear fission: by the time the primary risks are addressed, new risks have already erupted. Historical events and well-constructed scenarios enable risk managers to use what we know about interconnected risks and see what unexpected risks emerge for their organization. This cannot solve future crises, given their unpredictable nature, but it can better prepare those with the ability and the speed to act.

references

1. "Deloitte Report: Inaction on Climate Change Could Cost the US Economy $14.5 Trillion by 2070" (Jan 25, 2022). Deloitte. https://www2.deloitte.com/us/en/pages/about-deloitte/articles/press-releases/deloitte-report-inaction-on-climate-changecould-cost-the-us-economy-trillions-by-2070.html

2. "Annual Report 2021" (2022). Swiss Re Institute. https://reports.swissre.com/2021/vision-and-strategy/market-trends/climate-change

3. Sousounis, Peter. "The 2011 Thai Floods: Changing the Perception of Risk in Thailand" (April 19, 2012). Verisk. https://www.air-worldwide.com/publications/air-currents/2012/The-2011-Thai-Floods--Changing-the-Perception-of-Risk-inThailand

4. Kaewkitipong, L., Chen, C. and Ractham, P. "Lessons learned from the use of social media in combating a crisis: A case study of 2011 Thailand flooding disaster" (January 2012). https://www.researchgate.net/profile/Peter-Ractham/publication/289741588_Lessons_learned_from_the_use_of_social_media_in_combating_a_crisis_A_case_study_of_2011_Thailand_flooding_disaster/links/5efc448445851550508103d7/Lessons-learned-from-the-use-of-social-media-incombating-a-crisis-A-case-study-of-2011-Thailand-flooding-disaster.pdf

author Sonjai Kumar, CFIRM

peer-reviewed by Elisabeth Wilson

Sonjai Kumar is a consulting partner in Tata Consultancy Services, working in India under the BFSI CRO Risk Advisory. He has close to three decades of experience in the insurance sector, across both industry and consulting roles. His expertise spans actuarial work, enterprise risk management, operational risk, insurance and financial risk, risk culture, and corporate governance. He is an enthusiastic risk management professional, a certified fellow member of the Institute of Risk Management, London, and is currently pursuing a PhD in enterprise risk management in the insurance sector.



Synopsis

Insufficient risk data capabilities at banks, and the resulting poor regulatory reporting, are now acknowledged by regulators as not only a factor in previous financial crises but also a hindrance to their responsiveness going forward. In this article, the author presents the case for an entirely new risk stripe (or category) in the taxonomy: data risk. The argument is bolstered by discussion of where responsibility ought to lie and the ideal data risk management lifecycle.

incorporation of data risk in the banking risk taxonomy

by Mayank Goel

introduction

After the financial crisis, regulators found some banks lacked proper Management Information Systems (MIS), hindering risk management and reporting [BIS, 2013]. To strengthen these capabilities, especially at global systemically important banks (G-SIBs), the Basel Committee on Banking Supervision (BCBS) in 2013 introduced Standard 239, focusing on effective risk data aggregation and reporting. Coupled with US Federal Reserve regulations, like the Comprehensive Capital Analysis and Review (CCAR), and supervisory emphasis on data quality [FRB, 2011], this highlights the need to manage data risks just as well as credit and market risks.

data risk management

With the proliferation of data available to banks in this digital age, data is used in almost all aspects of the business: in business strategy decisions, in quantitative models and for regulatory reporting. Thus, there is a need to manage and govern an organization's key data and associated risks end-to-end, from risk identification and assessment, through appropriate mitigation via risk-based controls, to ongoing monitoring. This process is known as data risk management.

Arguments for managing data as its own risk stripe

Traditionally, banks managed risks such as credit, market, operational, compliance, strategic and reputational risks through various high-level committees. While data is inherent to most of these risks, the need to govern it separately is often not recognized.



Consider, for example, a credit risk report that requires each customer's legal country of incorporation, sourced from an enterprise data warehouse. When the analysts responsible for determining the right data attributes investigate the warehouse, they are going to observe a number of similarly labeled data attributes such as country, operational address country, mailing address country, and registered address country. Without a well-defined and agreed upon definition of the customer's legal country of incorporation, there is a risk that an inappropriate data attribute may be provided on the credit risk report. Further, if the source of these attributes is not well documented, there is a risk of an inadvertent, unrecognized change to these attributes, which would put the consistency of the report in question. In recognition of these risks, data risk in recent years has begun to be recognized formally at the highest levels in banking through the establishment of data governance functions and the role of chief data officer. Lastly, banking being a highly regulated sector [Brown & Dinç, 2011], regulators routinely raise questions on data governance controls, such as the presence of internal controls that demonstrate data quality and the ability to produce timely and accurate reports to understand the data risk environment. Given these heightened expectations, there is a strong case for managing data as its own risk stripe.

Why second line data governance is best positioned to manage data risk

As highlighted by Ajiri [FSFP, 2021], data risk holds regulatory significance in banking. Data risk can emerge during any phase from collection to usage. To address this, the second-line data governance function bridges data governance and risk management, ensuring data risks are adequately managed. This involves providing governance to the business, implementing risk-based controls, and providing data that is fit for purpose for risk models.

data risk management steps

To manage data risk, it is important to set up a lifecycle process that helps to understand and prioritize the finite set of the most critical data attributes within the organization, assess the associated business processes to determine the risk, implement risk-based controls to mitigate those risks, and periodically monitor the implemented controls.

data risk identification

KDE Identification: As a first step to risk identification, both from a governance and risk perspective, identify the most critical data attributes within your organization and their associated risk exposure. These system-agnostic critical attributes, commonly known as key data elements (KDEs), are the data elements that carry the most weight in the organization's decision-making or regulatory reporting. Given the increasing cost pressures in banking, it is important to create a prioritized list of these KDEs. To identify KDEs, the following factors may be considered (a simple scoring sketch follows the list):

• Do risk models directly rely on this attribute for making decisions?

• Is the KDE atomic or is it a derived attribute?

• Are there current or prior regulatory issues associated with the attribute?

• How material an impact would inadequate data quality of this attribute have on the organization?

• Is this attribute important for business revenue generation or customer satisfaction?
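One simple way to operationalize these factors is a weighted scoring model. The sketch below is a minimal illustration; the attribute names, factor weights and 1-5 scores are all assumptions for demonstration, not a prescribed calibration.

```python
# Minimal KDE prioritization sketch: score each candidate attribute against
# the factors listed above and rank by weighted total. All names, weights
# and scores are illustrative assumptions.

FACTOR_WEIGHTS = {
    "model_reliance": 0.30,       # do risk models rely on it directly?
    "regulatory_issues": 0.25,    # current or prior regulatory findings
    "materiality": 0.25,          # impact of inadequate data quality
    "business_importance": 0.20,  # revenue / customer satisfaction
}

candidates = {  # attribute -> factor scores on an assumed 1-5 scale
    "legal_country_of_incorporation": {
        "model_reliance": 5, "regulatory_issues": 4,
        "materiality": 4, "business_importance": 2,
    },
    "mailing_address_country": {
        "model_reliance": 1, "regulatory_issues": 1,
        "materiality": 2, "business_importance": 3,
    },
}

def kde_priority(scores):
    """Weighted sum of factor scores."""
    return sum(w * scores[f] for f, w in FACTOR_WEIGHTS.items())

for attr, scores in sorted(candidates.items(),
                           key=lambda kv: kde_priority(kv[1]), reverse=True):
    print(f"{attr:32s} priority = {kde_priority(scores):.2f}")
```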

Data Risk Assessment

Once KDEs have been identified, a risk assessment can be performed to identify potential associated risks. This analysis can be either quantitative or qualitative depending on the business domain and should take into consideration relevant existing controls for the KDE. As an example, a quantitative assessment will generally be suitable for financial data used for regulatory reports, where a dollar value can be assigned to the risks. A qualitative assessment will be more suited to functions such as compliance, where expert judgment may need to be layered on top of modeled results. In a qualitative assessment, the severity and probability of risk occurrence can be determined through data owner and subject matter expert interviews. In both types of assessment, the eventual goal is to weigh the cost of implementing appropriate controls to mitigate the risk against the cost if the risk were to materialize. As noted by Martins et al [Heliyon, 2022], this assessment has to be periodic, and a regular reassessment process must be established.
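The cost-benefit logic described above reduces to a small calculation once a probability and severity have been estimated. The following sketch, with assumed inputs, compares the expected annual loss against the annualized cost of the mitigating control.

```python
# Compare the expected annual loss of a KDE-related risk against the
# annualized cost of the mitigating control. Inputs are assumed.

def expected_loss(probability: float, severity: float) -> float:
    """Expected annual loss = probability of occurrence x dollar severity."""
    return probability * severity

risk = {"probability": 0.10, "severity": 2_000_000.0}  # e.g., a misstated report
control_cost = 150_000.0                               # annualized control cost

el = expected_loss(risk["probability"], risk["severity"])
print(f"expected loss: {el:,.0f}; control cost: {control_cost:,.0f}")
print("implement control" if control_cost < el else "accept and monitor risk")
```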

Mitigation of data risks

Following data risk identification and impact assessment, risk-based controls can be implemented for mitigation. Ongoing risk monitoring ensures that risk management remains relevant, reflecting the control environment and senior management oversight. Effective governance, operationalized and embraced across the organization, is crucial in managing data risks. This involves policies, standards, roles, documented procedures, enhanced communication, and fostering a data-centric culture.

Policies & standards: These documents define at a high level the minimum set of data requirements that each business unit is expected to adhere to. They set the tone from the top, are generally created by the second line of defense and are generally rooted in banking regulatory requirements on data. Policies, while setting the minimum requirements, should also be flexible and not overly prescriptive in how a particular business unit achieves compliance.

Change management-based controls: Managing changes in data infrastructure can be challenging. As an example, a report data attribute previously containing only numbers transitions to alphanumeric values, resulting in data quality failures. In such instances, change management controls can serve as a preventative control against adverse impact to a KDE. Related to establishing a strong data culture, all changes should be logged, assessed and tested for KDE impact. This process requires consultation with relevant stakeholders and their approval.

Role of data culture in data risk mitigation: An effective data risk culture is essential to any data-driven organization.



A proactive risk management culture self-identifies and manages risks prior to a review function such as audit or regulators. Often organizations are good at identifying risks but either take too long to mitigate or simply don’t implement mitigation with the belief that the risk will not materialize. This is especially true in cases where legacy data is concerned, the argument being that the risk has existed for many years. A lack of documentation on KDEs combined with the departure of key personnel with institutional knowledge about these KDEs can also hamper timely data risk mitigation. A Consequence Management Framework can be implemented for maturing the data culture within the organization. Rewarding staff for timely and appropriate identification of data risks may be beneficial.
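Automated, preventative checks of the kind described under change management-based controls are one concrete way such self-identification shows up in practice. Below is a minimal sketch, assuming a KDE that is expected to be numeric; the field values, validation rule and alert threshold are illustrative.

```python
import re

# Monitor a numeric KDE so that a drift to alphanumeric values is flagged
# before downstream reports break. Rule and threshold are assumptions.

NUMERIC_RULE = re.compile(r"^\d+(\.\d+)?$")
FAIL_RATE_THRESHOLD = 0.01  # alert if more than 1% of records violate the rule

def check_kde(values):
    """Return the failure rate and a small sample of offending values."""
    failures = [v for v in values if not NUMERIC_RULE.match(str(v))]
    rate = len(failures) / max(len(values), 1)
    return rate, failures[:5]

incoming = ["1023.50", "998", "1,204.10", "N/A", "1500.25"]
rate, sample = check_kde(incoming)
if rate > FAIL_RATE_THRESHOLD:
    print(f"ALERT: {rate:.0%} of records failed validation; sample: {sample}")
```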

conclusion

Given the limited literature on data risk management in general, and specifically within the banking industry, this paper highlights the importance of data risk management as its own risk stripe. It also presents key considerations for the identification, assessment and mitigation of data risk.

Disclaimer: The views expressed in this report solely reflect the personal views of the primary author of this article, about the subject matter referred to herein, and such views may not necessarily reflect the thoughts and opinions of MUFG Bank, Ltd. and its affiliates or management team. No part of such author's compensation was, is, or will be directly or indirectly related to the specific recommendations or views expressed herein. This should not be construed as investment advice, a recommendation to enter into a particular transaction or pursue a particular strategy, or any statement as to the likelihood that a particular transaction or strategy will be effective, and it does not take into account the specific objectives or the particular needs of any specific person who may receive this information. You should consult an independent financial, legal, accounting, tax, or other advisor as may be appropriate regarding the subject matter herein. MUFG Bank, Ltd. hereby disclaims any responsibility to you concerning the content herein, does not warrant the accuracy of the content for any particular purpose and expressly disclaims any warranties of merchantability or fitness for a particular purpose. Neither the author nor MUFG Bank, Ltd. have independently verified the accuracy of this content, and such information may be incomplete or condensed and is provided "AS IS".

references

1. BIS. (2013). Principles for Effective Risk Data Aggregation and Risk Reporting. https://www.bis.org/publ/bcbs239.pdf

2. Brown, C.O., Dinç, I.S. (2011). Too Many to Fail? Evidence of regulatory forbearance when the banking sector is weak. Review of Financial Studies, 24(4), 1378-1405. https://doi.org/10.1093/rfs/hhp039

3. Gillet, T., Lajkep, K. (2020). Implementing BCBS 239, what does it take? Finalyse. https://www.finalyse.com/blog/implementing-bcbs239

4. FRB. (2011). Supervisory Guidance On Model Risk Management. https://www.federalreserve.gov/supervisionreg/srletters/sr1107a1.pdf

5. FSFP. (2021). How Data Governance is Essential to Managing Data Risk. https://www.firstsanfranciscopartners.com/blog/how-data-governance-is-essential-to-managing-data-risk/

6. Martins, J., Mamede, H. S., & Correia, J. (2022). Risk compliance and master data management in banking - A novel BCBS 239 compliance action-plan proposal. Heliyon, 8(6), e09627. https://doi.org/10.1016/j.heliyon.2022.e09627



peer-reviewed by Jammi Rao

author Mayank Goel

Mayank is a Vice President - Data Governance Compliance Manager at MUFG Bank. He has over 13 years of experience, the majority of which is in the field of bank risk management. In his current role, he works in the Financial Crimes Compliance second line risk management function. Prior to joining MUFG Bank in 2017, Mayank was a Data & Analytics (Financial Services Office) Manager at EY, where he worked with large US and European banks. He has extensive experience working with the business to mitigate risks and solve complex data problems. He has proven execution skills on visible regulatory remediation matters and enjoys working with cross-functional stakeholders, from senior executives to examiners and auditors.



Synopsis

Cyber governance is emerging as a distinct and pressing part of cyber risk management and is increasingly the subject of international standardization. Boards especially are confronted with questions around cyber oversight and how to guide companies in building resilience. This article touches on three key aspects of cyber governance for board members.

the essence of cyber governance: biggest questions for board members

by Ming-Chang (Bright) Wu

introduction

Cyber governance has emerged as one of the most pressing issues within cyber risk management; moreover, it has become a discipline in its own right. Building on the existing standard for IT governance (ISO 38500), ISO 27014 on the governance of information security was updated in 2020 to address cyber governance. Consistent with this emphasis on cyber risk oversight, the Securities and Exchange Commission (SEC), in its Statement and Guidance on Public Company Cybersecurity Disclosures, placed great emphasis on cyber governance, as did the National Institute of Standards and Technology in the emerging version 2.0 of its Cybersecurity Framework (CSF). Whether in the USA or Taiwan, developing and implementing effective cybersecurity governance has become urgent. This year, the topic formed a mandatory three-hour cybersecurity training for board members of publicly listed companies, organized by the Taiwan Corporate Governance Association. Across recent trainings and seminars, the three most frequently raised questions concern cyber governance roles and responsibilities, core topics, and organizational values.

roles and responsibilities

Are the roles of cyber governance and cyber management different? If the role of cyber governance is like an orchestra conductor, the role of cyber management is much like an opera director. The conductor shapes the music, sets the tempo for the musicians, and synchronizes their performance. The director communicates between musicians, stage management, and the production team, covering performance arrangement, theater lighting and sound, and the audience's perspective.



It is obvious that these two roles should differ. However, they are often mixed up. With limited resources, cyber management staff often take care of some cyber governance issues; or cyber governance roles cover the unfinished business of cyber risk management. Unless the two scopes are clarified, confusion will result. Does cyber governance have a clear scope?

core topics

Governance, risk, and compliance (GRC) are universally the core topics of cyber governance. Although the GRC field has existed for decades, cyber governance is distinct on each of the three fronts.

The first issue is variation across standards. NIST's CSF and the Information Security Management System standard (ISO 27001) are the most widely implemented guidelines for cyber risk management. However, a manufacturing business unit might implement a different guideline for its factory's cybersecurity, such as the IEC/ISA 62443 Cybersecurity Management System. With the emerging role for cyber oversight in standards, a company may now find inter-relationships between those different guidelines, which can cause cross-functional issues and a lack of organizational synchronization on cyber governance.

Secondly, cyber risks concern not only IT departments but also firmwide units such as R&D, manufacturing, finance, and procurement. Legacy software, for instance, is the most critical source of cybersecurity-related issues: Windows XP is still installed on 60%-70% of new fabrication equipment purchased over the past three years, even after a significant semiconductor incident in 2018. Legacy software is thus an issue not only for a single company but for the entire supply chain. To face this, a new semiconductor standard, SEMI E187, was released in 2022. The new standard is just a first step; solving the problem requires a long-term cyber strategy. Cybersecurity depreciation, like hardware depreciation in production, needs to be fully addressed by board members and executive management. Beyond complex supply chain issues, other related cyber risk issues exist, such as the software development life cycle, operational technology, cyber risk appetite, and cybersecurity outsourcing. These will gradually gain more attention from international clients, local governments, and those engaged in business contracts and compliance.

Finally, there are different kinds of cyber compliance regimes, such as international standards and government regulations. ISO 27001 is not the only international standard: there are also privacy, operational technology (OT), and industry-specific standards covering, for example, automobile, cloud, and medical cybersecurity. As client requirements, government regulations, and cybersecurity practices evolve, compliance requires keeping up with updated versions of different cybersecurity standards, their mappings and their differences. These changes must be aligned across departments and with those managing the implementation of cybersecurity standards.



organizational values

In addition to roles, responsibilities, and the core topics of GRC, cyber governance needs to go beyond existing viewpoints on cyber management. Whether it comprises a cybersecurity board or a committee, an established cyber governance framework demonstrates strong organizational determination from top management. However, cybersecurity leadership goes beyond normal management practices. Because it must address both pressing and enduring issues, cybersecurity leadership needs both transactional and transformative qualities. There is room for board members to tackle cybersecurity without creating departmental silos. Cyber oversight is the key to solving cross-functional cybersecurity issues and challenges within cyber governance.

Note: The Chinese version was originally published in Taiwan's Commercial Times in May 2023 (https://view.ctee.com.tw/business/49811.html) and revised in English. The views are the author's own and do not necessarily reflect those of his organization.

author Ming-Chang (Bright) Wu

Ming-Chang is currently a Semiconductor Cybersecurity Committee Member at SEMI Taiwan and a member of the Electronic Engineering National Standards Technical Committees of the Bureau of Standards, Metrology and Inspection (BSMI), Ministry of Economic Affairs, Taiwan. Drawing on his practical experience in disaster recovery, his book on resilience was selected as a 2020 "Book of the Month" recommendation by the National Academy of Civil Service in Taiwan. On cybersecurity standards, he acts as an assessment consultant, working to integrate IT, cybersecurity and OT across multiple specification standards. Since 2019, he has been a qualified speaker on cyber governance for the Taiwan Corporate Governance Association, building a bridge between cyber risk management, information security management systems (ISMS) and information security governance (ISG). He is the 2023 ISC2 Mid-Career Award Winner (APAC) and ISC2 Taipei Chapter Ambassador. He also contributes columns to Bloomberg Businessweek Chinese, EETimes, ISSA Journal, and SEMI Standards Watch, and his work has been translated into English and Japanese.

peer-reviewed by Carl Densem



Synopsis

Net present value analysis is a mainstay of evaluating long-term investment decisions in the real economy; however, it suffers from weaknesses. The author proposes a framework for improving the accuracy of NPV by incorporating risk management assessment outcomes.

optimizing net present value using the risk-based cumulative construction methodology

by Teryukhov Viktor Evgenievich

introduction

The main objective criterion for evaluating a long-term investment's effectiveness, including for investment projects, is the corresponding net present value (NPV). The use of NPV is a simple but effective approach to decide whether an investment produces additional value (positive NPV) or not (negative NPV). One of the fundamental variables for calculating NPV is the discount rate, without which the use of the NPV model for evaluating efficiency loses any meaning.

Some businesses use the weighted-average cost of capital (WACC) as the discount rate for simplicity. As a rule, this WACC model is applied in financial markets by taking into account the market value of equity and the cost of debt capital, where the discount rate for the project is the weighted average of these. Moreover, the cost of equity is the so-called "rate of return on the securities of the company's portfolio", which is not relevant to enterprises in the real sector of the economy. In addition, the WACC model excludes the possibility of minimizing and optimizing the discount rate.

applying npv in the real sector

When assessing the discount rate of target projects in the real sector of the economy, a cumulative construction model ("build-up" approach) is used, where the discount rate is an integral assessment of project risks, with the possibility of minimizing and optimizing the latter in order to increase the calculated NPV.



Also, when assessing the relevant risks, generalized criteria are often proposed based on the subjective judgment of the responsible person, such as labeling risks "low", "medium", "high", "insignificant", "acceptable" or "critical". Such risk assessment criteria are not objective and cannot be used as variables in the quantitative assessment of project effectiveness, in particular NPV. The practical application of this methodology is the determination of NPV, and the corresponding balance, in the development of the following:

• budget plans, business plans and financial models

• investment measures, plans, projects, programs

• programs of organizational and technical development

• feasibility studies

The main goal is to increase the efficiency of the enterprise, and individual projects, through an objective assessment and optimization of the appropriate discount rate. The below schematic shows this process:

Figure 1: Basic Algorithm Cumulative Construction Model



collection and analysis of information

At the first stage, relevant internal and external information regarding possible risks is collected and analyzed (preferably covering the last 3-5 years). Internal information includes:

• Business plan for this project, with a detailed description of the current state and projected outlook, description of the relevant assets, resources and financial flows

• Marketing analysis

• Credit history (if available)

• Precedents of mutual claims with contractors

• Base of management accounting

• Accounting and audit reporting

• Instructions from supervisory authorities

• Personnel interviewing

External information comprises relevant information, including statistical information, on similar enterprises and projects in a given industry and/or area.

identification and assessment of risks

Based on this information, possible risks are identified. To quantify them, risks are differentiated into systemic and non-systemic.

Systemic risks are risks of a macroeconomic nature (inflationary, political, interest rate, etc.). These can be represented by a risk-free discount rate in the domestic financial market, for example the yield on debt obligations of the most reliable state or corporate issuers, but not less than the level of projected industry inflation.

Non-systemic risks are risks specific to a given enterprise or project, defined discretely as additional risk premiums.

As a result, the integral quantitative risk assessment (the discount rate) is determined by the cumulative construction method ("build-up" approach):

R = R(b) + ΣR(i)

Where:
• R(b) is the assessment of systemic risks (the risk-free discount rate)
• ΣR(i) is the sum of non-systemic risk assessments (additional risk premiums)



For each non-systemic risk, appropriate statistical and actuarial analyses and assessments are performed, including by frequency and size of losses/damages. The quantitative assessment of each identified non-systemic risk is carried out according to the following model:

Where:
• s is the average loss per case
• S is the maximum possible loss
• q is the probability of a loss
• v is the coefficient of variation

In the absence of the necessary information, a qualitative risk assessment is allowed, for example by the method of expert assessments.
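To make the mechanics concrete, the following minimal sketch, with assumed figures, builds the discount rate cumulatively from a risk-free rate and discrete non-systemic risk premiums, then evaluates the project NPV at that rate.

```python
# Cumulative construction ("build-up") of the discount rate and the
# resulting NPV. All rates, premiums and cash flows are assumed.

def build_up_rate(risk_free: float, premiums: dict) -> float:
    """R = R(b) + sum of non-systemic risk premiums R(i)."""
    return risk_free + sum(premiums.values())

def npv(cash_flows, rate, initial_outlay):
    """Standard NPV: sum of CF_t / (1 + R)^t minus the initial investment."""
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1)) - initial_outlay

risk_free = 0.06              # e.g., yield on the most reliable issuers
premiums = {                  # assessed non-systemic risk premiums
    "management": 0.015,
    "supply_chain": 0.020,
    "technology": 0.010,
}

rate = build_up_rate(risk_free, premiums)
cash_flows = [300.0, 350.0, 400.0, 400.0, 400.0]  # annual project cash flows
print(f"discount rate = {rate:.3f}")
print(f"NPV = {npv(cash_flows, rate, 1_000.0):.1f}")
```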

definition of risk minimizing measures

The next step is to determine measures to minimize or cover the identified and assessed non-systemic risks in order to reduce the discount rate. As a rule, the main measures are:

• Risk elimination (special additional works and/or rejection of risky measures, structures, technologies, etc.)

• Risk minimization (diversification, monitoring, control, supervision, inspection, instruction, training, organization, availability of special services, etc.)

• Preservation of risk (ignoring the risk or creating a special reserve fund to cover possible losses - self-insurance)

• Transfer of risk (contract clauses, hedging, guarantees, insurance, etc.)

At the same time, the costs of implementing these measures have their own optimum point as shown below.



correlation of NPV and risk minimizing cost

Figure 2: NPV and Risk Minimizing Cost

If the final discount rate remains too high, or the necessary acceptable profitability of the business is not reached, the use of the relevant assets and resources can be reconsidered. A simplified example is shown next.

simplified example
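The author's original worked example is not reproduced here. As a stand-in, the sketch below uses assumed cash flows and an assumed diminishing-returns relationship between mitigation spend and the residual risk premium to illustrate the optimum of Figure 2: gross NPV rises as mitigation lowers the discount rate, but NPV net of mitigation cost peaks at an interior point.

```python
# Illustrative optimization of mitigation spend versus NPV. The functional
# form linking spend to the residual risk premium is an assumption.

def npv(cash_flows, rate, outlay):
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1)) - outlay

RISK_FREE, MAX_PREMIUM = 0.06, 0.08   # assumed build-up components
CASH_FLOWS, OUTLAY = [400.0] * 5, 1_000.0

best = None
for spend in range(0, 201, 20):       # candidate mitigation budgets
    premium = MAX_PREMIUM / (1 + spend / 50.0)   # diminishing returns
    net_value = npv(CASH_FLOWS, RISK_FREE + premium, OUTLAY) - spend
    if best is None or net_value > best[1]:
        best = (spend, net_value)
    print(f"spend {spend:3d} -> rate {RISK_FREE + premium:.3f}, net NPV {net_value:7.1f}")

print(f"optimum mitigation spend = {best[0]} (net NPV {best[1]:.1f})")
```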



peer-reviewed by Carl Densem

author Teryukhov Viktor Evgenievich

Head of department at the scientific and educational Institute for the Preservation of Joint Stock Property in St. Petersburg, Russia.



Synopsis

In this study* we investigate alternative interpretable machine learning (IML) models in the context of probability of default (PD) models for the large corporate asset class. IML models have become increasingly prominent in highly regulated industries where there are concerns over the unintended consequences of deploying black box models that may be deemed conceptually unsound. In the context of banking and in wholesale portfolios, there are challenges around using models whose outcomes may not be explainable, both in terms of the business use case as well as meeting model validation standards. We compare various IML models using a long and robust history of corporate borrowers. We find that there are material differences between the approaches in terms of dimensions such as model predictive performance and the importance or robustness of risk factors in driving outcomes, including conflicting conclusions depending upon the IML model and the benchmarking measures considered. These findings call into question the value of the modest pickup in performance with the IML models relative to a more traditional technique, especially if these models are to be applied in contexts that must meet supervisory and model validation standards.

alternative interpretable machine learning models: an application to corporate probability of default and a benchmarking analysis

by Michael Jacobs, Jr.

a brief survey of ml modeling in corporate credit risk modeling

ML models and algorithms have become predominant in several industries, notably including credit risk management. Following a long period of resistance from supervisors and model risk managers, there is at present a transition from academia to credit risk practice, ranging from model development to various other applications in this domain. Since this movement brings a new set of uncertainties and other difficulties (e.g., transparency around what drives model outcomes), the current focus of research is the design of ML models that meet business requirements and supervisory expectations. A view gaining acceptance is that there is no bright line between ML and traditional statistical models, considering that even standard regression models may be extremely complex and not readily interpretable (Breeden, 2021). Some examples include factor variables, spline approximations, interaction terms and numerous descriptive input variables.

* Please see here for the complete paper.



Therefore, it can be argued that what distinguishes ML from traditional statistical algorithms are optimization methodologies developed long ago in fields apart from those where we have applied standard econometric models. These include techniques such as bagging, boosting and random forests that are related to so-called ensemble methods (Clemen, 1989; Maclin and Opitz, 1999). We can gain further insight into these differences by considering the taxonomy proposed by Harrell (2018):

• Uncertainty: Statistical models specify a probability law governing the data generation process that induces the model uncertainty.

• Structure: A parametric specification is typically imposed in statistical models, for example the linearity of the target variable or the parameter estimate with respect to the explanatory variables.

• Empiricism: ML is more empirical, allowing for high-order interactions that are not pre-specified, whereas statistical models have identified parameters of particular interest.

In the case of credit risk modeling, particularly for non-retail asset classes such as commercial and industrial, the datasets at hand are usually limited in depth. This is illustrated by the survey of credit scorecard models performed by Baesens et al. (2015), who find that about 90% of the studies reviewed used fewer than 10,000 observations. This stands in contrast to other "big data" domains where there is more emphasis on extreme non-linearities, such as image processing (Hinton et al. 2012) and natural language processing (Collobert and Weston, 2008). That said, ML has gained some traction in small dataset settings, for example through emphasizing concepts such as model robustness or the use of simplified interaction effects (Breeden, 2021).

While ML techniques are gaining traction in the wider domain of credit risk, the gains have been relatively more limited in PD modeling, and even less so in wholesale as contrasted to retail asset classes. Non-wholesale PD applications have been in areas such as alternative banking channels (Abdulrahman et al. 2014), social media (Allen et al. 2020) or mobile phone use (Bjorkegren and Grissen, 2020). Indirect applications in PD modeling include ML algorithms that preprocess deposit histories (American Banking Association, 2018) to create input factors used in traditional methods such as logistic regression. Looking at applications of ML to risk beyond credit, we find methodologically kindred areas such as fraud detection (Zhou et al. 2018) and anti-money laundering (Cheng et al. 2020).

The U.S. banking supervisors issued a request for information and comment on the use of ML (U.S. Banking Regulatory Agencies, 2021) in which one of the top questions relates to the lack of explainability (also termed interpretability) in some ML approaches and applications (e.g., fair lending). Furthermore, a less transparent and explainable approach might result in difficulties in evaluating the conceptual soundness of a model, a critical expectation for model risk management as prescribed in SR11-7/OCC11-12 (U.S. Banking Regulatory Agencies, 2011). The August 2021 Office of the Comptroller of the Currency (OCC) handbook on model risk states explicitly that if a bank is using ML models, examiners should assess whether model ratings take transparency and explainability into account as key considerations in effective risk management regarding the use of complex models. While black box ML models are by definition neither transparent nor explainable, there are model-agnostic methods like locally interpretable model-agnostic explanations (Guestrin et al. 2016) and Shapley additive explanations (Lee and Lundberg, 2017) that provide approximate explanations, but these have many potential pitfalls (Bischl et al. 2020).



Analogous to the "no free lunch" theorem, there does not exist such a model-agnostic, one-size-fits-all concept of explainability. Extensive academic literature criticizes the uncritical application of these methods (Friedler et al. 2020, Rudin 2019, Hilgard et al. 2020). However, there is a class of inherently interpretable ML models with model-based explainability, meaning that the model is transparent and self-explanatory (Sudjianto and Zhang, 2021). It is argued by Sudjianto et al. (2021a) that the inherent interpretability of a complex model ought to be induced from practical constraints, and they extend this to a framework for qualitatively assessing the interpretability of inherently interpretable ML models.

Discussions of credit scoring usually carry an implication of consumer loans and large volumes of training data, whereas modeling corporate defaults and bankruptcies is a similar problem, but with fewer events in the training data and less standardized inputs. While standardizing diverse and heterogeneous inputs may be one of the best uses of ML in corporate lending applications, some large datasets on corporate defaults do exist, and a variety of papers have applied ML to the problem (Ahmadi and Vahid 2016, Anagnostou et al. 2020). There is a distinction between bankruptcy and default, and since the former is public, the focus of most research is on modeling that event (Odom and Sharda 1990, Coats and Fant 1993, Mckee 2000, Lee and Min 2005, Vassiliou 2013), with methods tested covering all ML techniques. Models for lending to small and medium-sized enterprises (SMEs) fall in between consumer and commercial approaches, because performance is more closely tied to a small group of owners. Although less data is available for SMEs, there has been some research applying ML in this domain (Kolehmainen et al. 2016, Wang et al. 2017), with findings commercialized by the novel fintech industry in this market, leveraging ML methods and alternative data sources.

In this study we investigate alternative interpretable machine learning (IML) models in the context of probability of default (PD) models using a dataset of corporate borrowers. IML models have become increasingly prominent in highly regulated industries where there are concerns over the unintended consequences of black box models that may be deemed conceptually unsound. In the context of banking and in wholesale portfolios, there are challenges around deploying models where the outcomes may not be explainable, both in terms of the business use case as well as meeting model validation standards. We compare various IML models, including standard approaches such as logistic regression, using a history of corporate borrowers sourced from Moody's and studied in Jacobs (2022a, 2022b). This comprises around 200,000 quarterly observations from a large population of rated larger corporate borrowers (at least USD $1 billion in sales and domiciled in the U.S. or Canada), spanning the period from 1990 to 2015. The dataset includes an extensive set of financial ratios and macroeconomic variables as candidate explanatory variables.

high points of the model benchmarking analysis – estimation and model performance

We now present the benchmarking analysis of alternative IML models:

• rectified linear unit deep neural network (ReLU-DNN)



• generalized additive model with structured interaction (GAMI-Net)

• explainable boosting machine (EBM)

• logistic regression model (LRM) as a baseline

The following are the categories and names of the explanatory variables appearing in the final candidate model:

• Size: Change in Total Assets (CTA)

• Leverage: Total Liabilities to Total Assets Ratio (TLTAR)

• Coverage: Cash Use Ratio (CUR)

• Efficiency: Net Accounts Receivables Days Ratio (NARDR)

• Liquidity: Net Quick Ratio (NQR)

• Profitability: Before Tax Profit Margin (BTPM)

• Macroeconomic: S&P 500 Equity Price Index Quarterly Average Annual Change (SP500EP), Consumer Confidence Index (CCI)

Table 1: ML Model Benchmarking PD Modeling Analysis – Comparison of AUC and Accuracy Performance Measures (Moody’s Large Corporate Financial and Macroeconomic Explanatory Variables 1-Year Default Horizon Model)

This analysis is developed using the Python PiML Toolbox (Sudjianto et al. 2023). In Table 1 we show the comparison of the AUC and Accuracy (defined as the number of true positives and negatives over the number of observations) discriminatory power measures. We can see that the IML models all demonstrate some pickup in AUC performance, albeit the degree of improvement is modest, especially on an out-of-sample basis, ranging in the 2-5% (1-2%) range in testing (training) samples. ReLU-DNN (EBM) shows the greatest increase (decrease) out-of-sample of 1.4% (0.4%), while EBM (ReLU-DNN) shows the greatest (least) increase in-sample of 5.6% (1.9%). This suggests that, on this basis, EBM (ReLU-DNN) is most (least) prone to overfitting. On the other hand, according to the Accuracy measure, EBM performs best on both an in- and out-of-sample basis, whereas in the training (testing) sample LRM (GAMI-Net) performs worst, which leads to a rather different conclusion than AUC. That said, AUC is a preferred measure in PD classification applications, so we have more confidence in the conclusions based upon AUC.



high points of the model benchmarking analysis – factor importance across risk factors and measures

We now consider the concept of factor importance (FI) in more detail. The FI numeric value is interpreted as a score where a higher value indicates more importance of the factor. The concept is similar to that of the magnitude of a regression coefficient in OLS, or a sensitivity measure in other kinds of models, but note that there are various concepts of FI that provide slightly different information. FI is commonly used as a tool for ML model interpretability, as from the FI scores it is in principle possible to explain why an ML model makes particular predictions and how we can manipulate features to change its predictions. We consider the following types of FIs (a small computational sketch follows the list):

• Range FI (RANGE-FI) is commonly used in the industry in LRM applications and serves as our baseline. The RANGE-FI is a post hoc explanation defined for each risk factor as the coefficient estimate multiplied by the range of the risk factor, scaled by the sum of the same quantity across all risk factors:

RANGE-FI_j = β_j (max x_j − min x_j) / Σ_k β_k (max x_k − min x_k)

where β_j is the coefficient estimate corresponding to risk factor x_j. While this gives us a sense of how influential a factor is as part of fitting the model, this measure is sample dependent as it only gives a particular view of importance, and furthermore does not measure sensitivity to the factor or how the factor contributes to some measure of model fit.


• Permutation FI (PERM-FI) post hoc explanations measure the influence of individual risk factors on the model prediction by calculating the deterioration in a performance measure (in this context of PD modeling, the AUC) when they are permuted. When a factor value is randomly shuffled, the relationship between the feature and the target is broken, and the resulting drop in model performance indicates the feature's significance. However, as different models can have very different FI rankings, this measure only reveals the importance of each feature to that specific model.

• Shapley FI (SHAP-FI) additive post hoc local explanations are an ML tool that can explain the output of any model by computing the contribution of each feature to the final prediction, based upon coalition game theory. SHAP-FI values tell us how to fairly distribute the "payout" (the model prediction) among the coalition of features, as the average marginal contribution of a feature value across all possible combinations of features. SHAP-FI possesses several attractive properties, such as local accuracy, missingness and consistency.

• Local Interpretable Model Agnostic FI (LIME-FI) post hoc local explanations are a model-agnostic explanation tool. The procedure underlying this measure involves creating a surrogate interpretable model, such as a Lasso or decision tree, to explain how the original model makes predictions for a given input sample. The algorithm creates a simulated dataset by randomly perturbing the input sample with noise, evaluates the output of the original model on these perturbed samples and then fits an interpretable model (Lasso in our case) on this simulated data, with weights assigned to each perturbed sample based on its proximity to the original input sample.

Global FI (GLOB-FI) is an inherent explanation that measures the global relative importance of each feature calculated by measuring the variance of the marginal effect on the training dataset. In the case of categorical features, we aggregate the marginal effects of all of its dummy variables and then calculate the variance. Therefore, the GLOB-FI provides a measure of how much the feature contributes to the overall variability in the model’s predictions. In order to interpret the relative importance of each feature as a proportion of the total importance across all features, we normalize the feature importance so that their sum equals 1:

We show a comparison of FI measures across the three IML models and the LRM below in Table 2. While the overarching observation is the complete lack of consistency across both models and FI measures, we note that overall the SHAP-FI and LIME-FI measures are least inconsistent across models. Table 2: ML Model Benchmarking PD Modeling Analysis – Comparison of Factor Importance Measures (Moody’s Large Corporate Financial and Macroeconomic Explanatory Variables 1-Year Default Horizon Model)

Intelligent Risk - November 2023

53


We only show RANGE-FI for the LRM as that is our baseline and not computed for the IML models in the Pi-ML package that we use. In the table we color code the rank ordering in the font such that the top factor is dark red, the 2nd is in red, 3rd is in orange, 4th in black and the lowest four are in shades of gray that are fainter in decreasing importance. As we noted previously in the LRM estimation results the top three factors under RANGE-FI are NARDR, BTPM and CUR, which we deem to be a conceptually sound outcome for this type of point-in-time (PIT) PD model where the factors that are expected to have more importance should span the dimension reflecting more short term default risk (borrower profitability, liquidity or cash management), in contrast to through-the-cycle (TTC) PD more suitable for credit underwriting, where the latter tend to place importance on longer term dimensions of default risk (i.e., capital structure, size or debt service coverage.) In the case of the other IML models or other FI measures, such factors do not consistently show up as having the most importance, and the rank ordering are rather different depending on the combination of model and FI measure.

conclusion In this study, we have investigated alternative IML models in the context of PD models applied to the large corporate asset class. This is motivated by the fact that IML models have become increasingly prominent in highly regulated industries where there are concerns over the unintended consequences of black box models that may be deemed conceptually unsound. In the context of banking and wholesale portfolios, we have noted the challenges around deploying models where the outcomes may not be explainable, both in terms of meeting business use cases as well as in satisfying model validation standards. We compared various IML models, including standard approaches such as logistic re-gression, using a history of corporate borrowers sourced from Moody’s, a dataset used in Jacobs (2022a, 2022b). While there have been significant advances in IML that have opened the possibility for ML models to be more used in domains of heightened supervisory scrutiny, such as in credit risk modeling, these findings suggest the industry may have a long way to go before this aspiration is fully realized. Furthermore, as can be seen in looking at the deep and rapidly growing literature on IML models, there are still many theoretical and technical questions that are yet unanswered, such as the interpretability measures and IML model variants that best achieve the promised objective of IML. Given these considerations, depending upon the application of the PD model, we counsel practitioners to proceed cautiously in putting IML models into production in a champion capacity until these issues are resolved. That said, in spite of these limitations, we see scope for applying IML models in development or validation capacities as challengers or benchmarks as part of model testing and evaluation.

references Abdulrahman, U. F. I., Panford, J. K., and J. B. Hayfron-Acquah. 2014. Fuzzy Logic Approach to Credit Scoring for Micro Finances in Ghana: A Case Study of KWIQPLUS Money Lending. Interna-tional Journal of Computer Applications 94 (8): 11–18. Ahmadi, A., and P.R. Vahid. 2016. Modeling Corporate Customers’ Credit Risk Considering the Ensemble Approaches in Multiclass Classification: Evidence from Iranian Corporate Credits. The Journal of Credit Risk 12(3): 71–95.

54

Intelligent Risk - November 2023


Allen, Linda, Peng, Lin, and Yu Shan. 2020. Social Networks and Credit Allocation on Fintech Lending Platforms.
American Banking Association. 2018. New Credit Score Unveiled Drawing on Bank Account Data. ABA Banking Journal, Blog Post, October 22.
Anagnostou, I., Kandhai, D., Sanchez Rivero, J., and S. Sourabh. 2020. Contagious Defaults in a Credit Portfolio: A Bayesian Network Approach. The Journal of Credit Risk 16(1): 1–26.
Baesens, B., Lessmann, S., Seow, H.V., and L.C. Thomas. 2015. Benchmarking State-of-the-Art Classification Algorithms for Credit Scoring: An Update of Research. European Journal of Operational Research 247(1): 124–136.
Bischl, Bernd, Casalicchio, Giuseppe, Dandl, Susanne, Freiesleben, Timo, Grosse-Wentrup, Moritz, König, Gunnar, Herbinger, Julia, Molnar, Christoph Moritz, and Christian A. Scholbeck. 2020. Pitfalls to Avoid When Interpreting Machine Learning Models. Working Paper, University of Vienna.
Bjorkegren, D., and D. Grissen. 2020. Behavior Revealed in Mobile Phone Usage Predicts Credit Repayment. World Bank Economic Review 34(3): 618–634.
Breeden, Joseph L. 2021. A Survey of Machine Learning in Credit Risk. Journal of Risk Model Validation 17(3): 1–62.
Cheng, X., Hooi, B., Huang, H., Han, X., Li, X., Li, Z., Liu, S., and C. Shi. 2020. Flowscope: Spotting Money Laundering Based on Graphs. In Proceedings of the AAAI Conference on Artificial Intelligence 34(4): 4731–4738.
Clemen, R. 1989. Combining Forecasts: A Review and Annotated Bibliography. International Journal of Forecasting 5(4): 559–583.
Collobert, R., and J. Weston. 2008. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. In Proceedings of the 25th International Conference on Neural Information Processing Systems 1: 160–167. Association for Computing Machinery, New York.
Friedler, Sorelle, Kumar, Elizabeth I., Scheidegger, Carlos, and Suresh Venkatasubramanian. 2020. Problems with Shapley-Value-Based Explanations as Feature Importance Measures. In International Conference on Machine Learning Research: 5491–5500.
Guestrin, Carlos, Ribeiro, Marco Tulio, and Sameer Singh. 2016. “Why Should I Trust You?” Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: 1135–1144.
Harrell, F. E., Jr. 2018. Road Map for Choosing Between Statistical Modeling and Machine Learning. Statistical Thinking, Blog Post.
Hilgard, Sophie, Lakkaraju, Himabindu, Jia, Emily, and Sameer Singh. 2020. Fooling LIME and SHAP: Adversarial Attacks on Post Hoc Explanation Methods. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society: 180–186.
Hinton, G. E., Krizhevsky, A., and I. Sutskever. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems 1: 1097–1105. Curran Associates, Red Hook, NY.
Kolehmainen, M., Li, K., and J. Niskanen. 2016. Financial Innovation: Credit Default Hybrid Model for SME Lending. Expert Systems with Applications 61: 343–355.
Lee, Y. C., and J.H. Min. 2005. Bankruptcy Prediction Using Support Vector Machine with Optimal Choice of Kernel Function Parameters. Expert Systems with Applications 28(4): 603–614.
Lee, Su-In, and Scott M. Lundberg. 2017. A Unified Approach to Interpreting Model Predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems: 4768–4777.
Maclin, R., and D. Opitz. 1999. Popular Ensemble Methods: An Empirical Study. Journal of Artificial Intelligence Research 11(3): 169–198.
McKee, T. E. 2000. Developing a Bankruptcy Prediction Model via Rough Sets Theory. Intelligent Systems in Accounting, Finance and Management 9(3): 159–173.
Odom, M. D., and R. Sharda. 1990. A Neural Network Model for Bankruptcy Prediction. In International Joint Conference on Neural Networks: 163–168. IEEE Press, Piscataway, NJ. Available online: https://ieeexplore.ieee.org/abstract/document/5726669 (accessed on May 13th, 2023).
Rudin, Cynthia. 2019. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence 1(5): 206–215.



Sudjianto, Agus, and Aijun Zhang. 2021. Designing Inherently Interpretable Machine Learning Models. In Proceedings of ACM ICAIF 2021 Workshop on Explainable AI in Finance. ACM, New York, NY.
Sudjianto, Agus, Yang, Zebin, and Aijun Zhang. 2021. Enhancing Explainability of Neural Networks Through Architecture Constraints. IEEE Transactions on Neural Networks and Learning Systems 32(6): 2610–2621.
The U.S. Office of the Comptroller of the Currency and the Board of Governors of the Federal Reserve System. 2011. SR 11-7/OCC 2011-12: Supervisory Guidance on Model Risk Management. Washington, D.C.
The U.S. Office of the Comptroller of the Currency, the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, the Consumer Financial Protection Bureau, and the National Credit Union Administration. 2021. Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, Including Machine Learning. Washington, D.C.
The U.S. Office of the Comptroller of the Currency. 2021. Comptroller’s Handbook on Model Risk Management. Washington, D.C.
Vassiliou, P. C. 2013. Fuzzy Semi-Markov Migration Process in Credit Risk. Fuzzy Sets and Systems 223: 39–58.
Wang, G. J., Yan, X. G., Zhu, Y., and C. Xie. 2017. Comparison of Individual, Ensemble and Integrated Ensemble Machine Learning Methods to Predict China’s SME Credit Risk in Supply Chain Finance. Neural Computing and Applications 28(1): 41–50.
Jacobs Jr., Michael. 2022a. Quantification of Model Risk with an Application to Probability of Default Estimation and Stress Testing for a Large Corporate Portfolio. Journal of Risk Model Validation 15(3): 1–39. Available online: https://www.michaeljacobsjr.com/wp-content/uploads/2023/04/Jacobs_Corp_PD_Loan_Lvl_Str_Tst_Mdl_Risk_Quant_JRMV_2022_vol15_no3_pp1-39.pdf (accessed on April 2nd, 2022).
Jacobs Jr., Michael. 2022b. Validation of Corporate Probability of Default Models Considering Alternative Use Cases and the Quantification of Model Risk. Data Science in Finance and Economics 2(1): 17–53. Available online: https://www.michaeljacobsjr.com/wp-content/uploads/2022/05/Jacobs_Corp_PD_Val_Alt_Use_Cases_Mdl_Risk_Qnt_DSEF_2022_vol2no1_pp17-53.pdf (accessed on April 2nd, 2022).
Jacobs Jr., Michael. 2023. Benchmarking Alternative Interpretable Machine Learning Models for Corporate Probability of Default. Working Paper. Available online: https://www.michaeljacobsjr.com/wp-content/uploads/2023/07/Jacobs_Corp_PD_IML_Benchmarking_20230701_V51.pdf

Note: The views expressed herein are those of the author and do not necessarily represent a position taken by PNC Financial Services Group or any affiliated firms.

author
Michael Jacobs, Jr.
Michael Jacobs, Jr., Ph.D., CFA, is Senior Vice-President and Lead Modeling Expert, PNC Financial Services Group, Model Development Group.

peer-reviewed by Carl Densem, Gary Van Vuuren



Synopsis
The volume of data available to corporations is continuously expanding and poses key challenges for risk managers. This article outlines the problem, discusses potential solutions and proposes a potentially promising one: a mix of data repository and “Data From Source.”

agile approaches to big data for risk managers

by Kelly Allin & Doug Downing

introduction

The volume of data available to businesses today is staggering. Each minute, users on various platforms generate an enormous amount of data, from posting messages and tweets to uploading videos. Furthermore, projections suggest that the total amount of data created worldwide will reach an astounding 181 zettabytes by 2025, with the number of connected IoT devices exceeding 30 billion by 2030 (Duarte, 2023). The big data industry itself is expected to witness exponential growth, with a projected size of over USD $745 billion by 2030 (Fortune Business Insights, 2023). With such vast quantities of data being produced, businesses must address the management of big data effectively.

For risk managers, the availability of huge quantities of data may bring to life the idiom “too much of a good thing.” While more data is generally considered desirable, it comes with associated costs, such as storage and the time and expense required to process and analyze it. Traditionally, data repositories have been seen as critical in managing big data. They offer centralized storage, data integration, analytic capabilities, historical data preservation, and more. However, implementing and maintaining these repositories can be expensive and time-consuming, requiring costly setup, programming and development, hardware, and ongoing maintenance. Concerns about data repositories include:

• a significant percentage of data repository projects fail to meet expectations, with estimates from 70% (Reese) to 85% (Casteleijn, 2021)

• the speed of data ingestion cannot keep up with the ever-increasing inflow of new data



• once data is moved to a repository, it quickly becomes outdated, requiring constant reconciliation

The cost of a data warehouse depends on the size of the application and company. To give some context, one estimate puts starting prices between USD $200,000 and as high as USD $2 million (Data Warehouse Cost). With the advent of cloud-based services, a starting price of around USD $25,000 per month should be expected, and because a data warehouse is by design a collection point for ever-growing data, this price rises rapidly. Another estimate puts 30TB of storage at USD $1 million per year (Perez & Mansourian, 2022).

embracing agile approaches to big data

Given these challenges, it is time to re-evaluate the reliance on data repositories as the solution for managing big data. Continuing to pursue an approach that does not yield the desired results may indeed be an exercise in futility. An alternative approach that is gaining traction is accessing data directly from its original sources, known as “Data from Source.” In this context, data integration is defined as combining data from disparate sources; it can also mean the programming required to pass data from source to software. Data from Source means something different: raw data accessed from its original source without any intermediary steps such as ETL. This method involves accessing data from a variety of sources, selecting relevant data, transforming it using an in-memory platform, applying master data management (MDM) and data governance, and consuming the transformed data directly or through integrated tools or algorithms; a minimal sketch of this pattern follows below.

By directly accessing data from source, businesses can benefit from real-time data availability, enhanced data integrity and consistency, simplified architecture, cost efficiency, granular control and flexibility, improved data security, and reduced latency. These benefits are realized because connectivity is to source systems rather than copied data (ensuring real-time accuracy), which saves the costs of duplication, of preparing all data rather than only the data that is needed, and of storage and maintenance.

While concerns exist regarding the Data from Source approach, such as performance, data quality and consistency, security and privacy, and dependencies on source systems, these concerns are being actively addressed through new technologies, integrated toolsets, data source connections that recognize and update changes, and innovative approaches to deriving value from big data. For example, new technologies now provide real-time connections to data sources that understand the structure of the data (how it is stored and what it is called) rather than just its content. As a result, a user can access data files using readable, understandable names instead of technology acronyms. A useful analogy is learning a language by focusing on its grammatical structure rather than on individual words.
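To make the contrast with a staged repository concrete, here is a minimal Python sketch of the Data from Source pattern under simplifying assumptions: two in-memory SQLite databases stand in for independent source systems, and pandas serves as the in-memory transformation platform. All table and field names are hypothetical, not a specific vendor's API.

```python
import sqlite3
import pandas as pd

# Two stand-in "source systems" (hypothetical; in practice these would be
# live connections to each business unit's operational database).
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE clients (client_id TEXT, name TEXT, country TEXT)")
crm.executemany("INSERT INTO clients VALUES (?, ?, ?)",
                [("C1", "Acme Corp", "US"), ("C2", "Beta Ltd", "UK")])

trading = sqlite3.connect(":memory:")
trading.execute("CREATE TABLE exposures (client_id TEXT, exposure REAL)")
trading.executemany("INSERT INTO exposures VALUES (?, ?)",
                    [("C1", 1_200_000.0), ("C2", 450_000.0)])

# Select only the fields needed, directly from each source -- no staging
# repository to build, load, and later reconcile.
clients = pd.read_sql("SELECT client_id, name, country FROM clients", crm)
exposures = pd.read_sql("SELECT client_id, exposure FROM exposures", trading)

# Transform in memory and consume immediately.
view = clients.merge(exposures, on="client_id")
print(view.groupby("country")["exposure"].sum())
```

The key design point of the sketch is that nothing is copied into a persistent intermediate store: the selection, join, and aggregation all happen in memory at query time, so the result is only as stale as the source systems themselves.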



agile benefits for risk managers

For risk managers, the ability to connect to any source, and to multiple disparate sources, and to work with the streams of data immediately creates new opportunities to be flexible in making connections, analyzing relationships, identifying and responding to risks, and performing this work ad hoc, without relying on the IT team for a large-scale data project. While on the surface this may cause some stress for CTOs, the approach creates a much more positive opportunity for both the business user and the IT professional: they spend their time on value-added activities and less time on the difficult, time-consuming work of specifying requirements, developing technical solutions, and rework. This speed to solution can create a more agile, rapid response to risk in an organization.

We recently spoke with a risk leader from a global professional services firm grappling with a significant challenge: obtaining a unified view of client risk across their diverse international operations. Each country’s business unit had its own operating system, complicating efforts to consolidate client exposure data. Resistance to building a centralized data lake or ocean, due to issues around where data was to be stored, further complicated matters. Differing legal entity names across countries added another layer of complexity, ranging from closely resembling to entirely unrelated names (a simple illustration follows below). The goal was to connect these disparate systems, analyze global business activities, and understand intricate ownership structures. This involved extracting specific data fields from various systems and applying custom algorithms to assess risk. The project’s focus on specific data sets and adaptable risk scenario assessments ensured it remained manageable and avoided becoming an overly complex and risky endeavor.
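The article does not describe the firm’s actual matching algorithm. As one hedged illustration of the entity-name problem, the sketch below uses Python’s standard-library difflib to link closely resembling legal entity names across two hypothetical country systems before aggregating exposure; entirely unrelated names for the same client would still require reference data or manual mapping.

```python
from difflib import SequenceMatcher

# Hypothetical client exposures from two country systems.
us_book = {"Acme Holdings Inc.": 1_200_000, "Beta Industries": 300_000}
uk_book = {"ACME Holdings Limited": 800_000, "Gamma Partners LLP": 150_000}

def similar(a: str, b: str, threshold: float = 0.7) -> bool:
    """Crude name similarity; a real project would add normalization of
    legal-form suffixes, punctuation, and casing, plus reference data."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Aggregate exposure for names that appear to be the same legal group.
combined = dict(us_book)
for uk_name, amount in uk_book.items():
    match = next((n for n in combined if similar(n, uk_name)), None)
    if match:
        combined[match] += amount   # treat as one global client
    else:
        combined[uk_name] = amount  # genuinely new client
print(combined)
```

Run as written, “Acme Holdings Inc.” and “ACME Holdings Limited” score above the threshold and are consolidated into a single global exposure, while “Gamma Partners LLP” remains a separate client.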

the future of big data management

“Data From Source” has shown promising results when compared to traditional data repositories. It offers faster implementation, lower costs, greater efficiency, and higher-quality results through real-time data usage. Despite the observed success and benefits of this approach, it is surprising that more companies have not embraced and experimented with this technology. It is clear that big data management cannot be solved solely by data repositories or by Data from Source; instead, a combination of approaches, along with the continuous introduction of new technologies, will shape the future of managing big data. Searching for the term “Data from Source” does not yet yield substantial information; searches surface data visualization instead, but visualization is an outcome of Data from Source technology rather than the technology itself. The evidence lies in the use cases of the vendors providing this technology (implementations reported up to 80% faster) and in the removal of a number of time-consuming and expensive IT-based activities.



As risk managers grapple with the expanding volumes of data, it is crucial to consider alternative approaches beyond data repositories. Embracing the concept of “Data From Source” allows companies to access and utilize data in real time from its original sources, avoiding the shortcomings of traditional methods. By adopting a combination of approaches and embracing new technologies, organizations will be better equipped to navigate the challenges and maximize the benefits of managing big data effectively. The future of big data management lies not in a single solution, but in the synergy of innovative approaches. Given the growth predicted for big data, this matters for everyone, including individuals (consider the data generated by multiple devices and contact lists). Data seems simple. It is not; it is very complex and needs constantly updated technologies and solutions.

references

1. Duarte, F. (2023, April 3). Amount of Data Created Daily (2023). Exploding Topics. https://explodingtopics.com/blog/data-generated-per-day
2. Fortune Business Insights. (2023, April 28). Big Data Analytics Market Size to Surpass USD 745.15 billion by 2030, at a CAGR of 13.5%. GlobeNewswire. https://www.globenewswire.com/en/news-release/2023/04/28/2657130/0/en/Big-Data-Analytics-Market-Size-to-Surpass-USD-745-15-billion-by-2030-at-a-CAGR-of-13-5.html
3. Reese, E. (no date). The Death of Data Warehousing – Why Projects Fail and What Not to do? AllCloud. https://allcloud.io/blog/the-death-of-data-warehousing/
4. Casteleijn, G. (2021, December 20). Most data warehouse projects fail. Here’s how not to. Data Science Central. https://www.datasciencecentral.com/most-data-warehouse-projects-fail-here-s-how-not-to-1/
5. Data Warehouse Cost in Brief. ScienceSoft. https://www.scnsoft.com/analytics/data-warehouse/pricing
6. Perez, O., Mansourian, A. (2022). The True Cost of Big Data. NMS Consulting. https://nmsconsulting.com/4076/the-true-cost-of-big-data/



authors
Kelly and Doug are Directors at r4apps International.

Kelly Allin
Kelly Allin has a background in financial reporting and audit and is a former Big Four audit partner. Kelly was involved with many large, complex multi-national businesses, both public and private. He understands the challenges these businesses face in effectively accessing information in their organization and putting that data to use efficiently.

Doug Downing
Doug is a former partner in Big Four consulting firms with expertise in large systems implementations. Doug led multiple large-scale transformation systems projects. He has seen and experienced the challenges of large data projects.

peer-reviewed by Andrea Calef



Synopsis
It is said that necessity is the mother of invention, and nowhere is this truer than for small oil nations in a world moving away from fossil fuels towards clean energy. The United Arab Emirates is one example of a country coming up with innovative ways of diversifying its economy by embedding ESG principles through unique approaches. Although it faces challenges, the UAE is nimble and shows what climate risk management can achieve when a country faces the worst outcomes of shifts in the global climate.

how a small oil nation is leading on ESG

by Dr. Aakash Ramchand Dil

introduction

While the global expectation often centers on the United States and the European Union as leaders in the battle against climate change and champions of Environmental, Social, and Governance (ESG) compliance, it is crucial to shift our focus to oil-producing nations, which find themselves disproportionately affected by the consequences of climate change. The United Arab Emirates (UAE) serves as a noteworthy example of an oil-producing nation that is rapidly acknowledging the imperative for transformative action and is actively embarking on a journey to embed ESG principles into its overarching strategies. In this discourse, we will delve into the UAE’s endeavors to confront climate change and adhere to ESG standards, shedding light on their initiatives and commitments in this vital arena.

how the UAE is addressing climate change and ESG compliance

The UAE has embarked on a transformative journey aimed at diversifying its economy away from heavy reliance on oil and gas, instead channeling its focus toward sectors such as renewable energy, technology, and sustainable infrastructure. This strategic economic shift not only aligns with global sustainability objectives but also presents valuable opportunities for the seamless integration of ESG principles into its business landscape. Here, we explore some of the pivotal initiatives and actions undertaken by the UAE.

Government Commitment: The UAE government, in close collaboration with regulatory bodies, has played a central role in fostering ESG compliance. The UAE Securities and Commodities Authority (SCA) stands as one of the leading entities in this endeavor, having introduced robust regulations and guidelines. These measures are designed to incentivize companies listed on UAE stock exchanges to make comprehensive ESG-related disclosures. By doing so, this initiative seeks to elevate transparency, accountability, and investor confidence within the UAE’s vibrant market.



This commitment carries global significance due to the UAE’s pivotal role as a trade and financial hub, making it a key player in the international sustainability arena. Unlike older industrial nations, the UAE’s rapid development allows it to integrate ESG principles from an early stage, leapfrogging others in adopting sustainable practices.

Sustainability-Focused Initiatives: The UAE has introduced a range of dedicated initiatives designed to infuse sustainability into its national fabric. Notably, the UAE Vision 2021 and the UAE’s Green Agenda 2015-2030 have emerged as instrumental programs. These initiatives set forth clear and measurable targets for environmental sustainability. Goals encompass emissions reduction, heightened adoption of renewable energy sources, and the enhancement of resource efficiency, all aligned with global sustainability benchmarks.

Sustainable Urban Planning: Demonstrating a commitment to sustainable urban development, the UAE has undertaken ambitious projects exemplified by the Masdar City development in Abu Dhabi. This groundbreaking venture showcases innovative clean technology integration and serves as a living testament to the country’s aspirations for sustainable urban planning.

Private Sector Engagement: The private sector within the UAE has proactively embraced sustainability practices, recognizing the importance of ESG factors in shaping corporate strategies. A notable manifestation of this commitment is the formation of sustainability-focused organizations and industry-specific initiatives. The Dubai Sustainable Finance Working Group and the Emirates Green Building Council are two notable examples. These entities underline the private sector’s dedication to ESG compliance and its readiness to contribute to the nation’s sustainability goals.

Investor Demand: Acknowledging the substantial influence of ESG factors on long-term financial viability, investors in the UAE are increasingly expressing interest in ESG-compliant investments. Asset management firms and financial institutions have responded by introducing a range of ESG-focused investment products and services, including instruments like green bonds and sustainability-linked financing, which provide businesses with avenues to raise capital for sustainable projects.

What sets the UAE apart in its approach to ESG adoption is a unique convergence of factors that distinguish it from other nations. The UAE’s commitment to ESG is intricately linked to its ambitious diversification efforts, as it seeks to transition to a sustainable, post-oil economy.

challenges to UAE’s ESG strategy

Despite the significant strides made by the UAE in its commitment to ESG compliance and sustainability, it faces several notable challenges. Firstly, the UAE’s relatively smaller size and resource base, compared to more established and economically robust developed nations, can potentially limit its capacity to undertake large-scale sustainability initiatives that require substantial financial and political resources. Nonetheless, this compactness can also be viewed as an advantage, allowing the UAE to respond nimbly to emerging challenges without being bogged down by bureaucratic complexities.



Secondly, as an oil-producing nation, the UAE grapples with issues of credibility in the realm of environmental sustainability. Skepticism may arise regarding the nation’s true commitment to reducing its environmental impact and transitioning away from fossil fuels in favor of renewable energy sources. To address this credibility challenge, the UAE must embark on a transparent and demonstrable path toward environmental sustainability. This involves setting ambitious targets and actively achieving them, curbing carbon emissions, promoting the widespread adoption of renewable energy, and showcasing a genuine dedication to reducing its reliance on oil as a primary energy source. Dubai’s hosting of the upcoming COP28 (the UN Climate Change Conference) is one demonstration of this.

conclusion

The UAE’s dedication to ESG compliance and sustainability is commendable, yet it encounters notable hurdles. These challenges, encompassing size and capital constraints as well as concerns about credibility as an oil producer, can be surmounted through innovative solutions, robust policies, and transparent actions. By overcoming these obstacles, the UAE can further establish itself as a forward-thinking nation committed to a sustainable and environmentally responsible future, setting an inspiring example for the global community.

author
Dr. Aakash Ramchand Dil
Dr. Dil currently holds the position of Executive/Head of Market Risk and Capital Management at the National Bank of Fujairah PJSC in Dubai, UAE. Prior to his role at NBF, he gained experience in various quantitative risk management and Basel implementation positions at institutions such as SAMBA Bank Ltd, Union National Bank, Compono Strategia JLT, and Commercial Bank International. His expertise lies in risk analytics, model risk management, IFRS 9 ECL, bottom-up stress testing techniques, quantitative finance, and financial risk modeling (credit and market). His research focuses on econometric stress testing, risk-based pricing, and ARMA modeling. Dr. Dil holds a Ph.D. in Mathematics, an Associate Diploma in Actuarial Science, dual bachelor’s degrees in Commerce and IT, and a Master’s degree in Business Administration from York Business School in the UK.

peer-reviewed by Nadia AlQassab



Synopsis
Globally, financial institutions (FIs) and regulators are at various stages of maturity insofar as climate risk is concerned. Given the emerging dimensions of climate risk, the case for it becoming a fourth pillar under Basel for risk capital calculation is growing stronger. This article looks at the broad measures that banks are following, and should be following, to assess and mitigate the climate risk inherent in their portfolios.

climate risk – fourth pillar in the making

by Venkat Srinivasan

introduction

What is climate risk, and why is it important? Climate risk refers to risk assessment that draws on formal analysis of the consequences and impacts of climate change, responses and adaptation to those changes, and the impact of societal and economic factors.

Figure 1: Global Risks Report 2023 - Global risks ranked by severity (World Economic Forum)



If one looks at global risks ranked by severity (see figure 1), ten years from now more than 70% of the top-ranked risks will be climate-related in nature; those that are not marked as climate risks are primarily tremors that will be felt because of climate change. There are several recent instances of the implications of these risks. For example, in July 2021, two days of extreme rainfall over western Germany resulted in damages of €1.3 billion to a railroad company (Associated Press, 2021). The Fukushima nuclear disaster is another recent example, where the loss was estimated at around US $187 billion. Given this backdrop, what should a regulator’s strategy and approach be to ensure that FIs fulfil their climate neutrality commitments? Setting this out in a concrete structure is essential so that supervision and assessment of this critical requirement receive an objective, risk-based view.

priorities for action

Given the potential risks and the magnitude of the task at hand, it is important for regulators to have a well-rounded approach to governing climate risk for regulated entities. It is important for both the regulator and the regulated to understand the interdependence between environmental disturbances and lines of business. Climate risk has an overarching impact on credit, market, liquidity, and operational risk (BIS, 2020). For example, credit risk could arise from impairment in the value of collateral, liquidity risk from a burst in demand following natural climate disasters, and operational risk from threats to business continuity in the event of damage to assets such as data centres.

On the governance front, regulated entities must assign responsibility for the management of climate financial risks starting right at the risk committee level and cascading down to individual functions within the FI. At all levels there should be a good understanding of climate-related risks and how to deal with them. Several courses are available for self-certification on the financials of climate risk and reporting; respective regulators can propose that key staff undergo certification as part of capacity building and upskilling. Short-term and long-term strategic plans should also include climate risk as a key determinant in decision making.

Financial institutions should include climate risk indicators in their risk appetite assessment, and regulators must fix guidelines on how to measure and assess these risks. For example, the carbon intensity of the assets in the portfolio, or portfolio emission levels, could be among the measures included (a hedged sketch of one such metric appears at the end of this section). Having done that, the FI should assess, at the sector, region, and company level, the climate risk it is carrying and is likely to carry in the future. A logical and more objective assessment would be the assignment of climate risk ratings to individual portfolios or accounts; rolling these ratings up to the FI level will also help to conduct accurate climate stress and scenario tests. There should be robust methods of risk identification and assessment at a customer, sector, and portfolio level covering both physical and transition risk, especially for institutions operating in active seismic zones and their respective collaterals.
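As one hedged illustration of such a portfolio emission measure, the Python sketch below computes weighted average carbon intensity (WACI), a metric recommended by the TCFD: each counterparty’s emissions per unit of revenue, weighted by its share of portfolio exposure. The exposure, emissions, and revenue figures are hypothetical.

```python
# Hypothetical portfolio: exposure (USD m), borrower Scope 1+2 emissions
# (tCO2e), and borrower revenue (USD m).
portfolio = [
    {"name": "Utility A",  "exposure": 120.0, "emissions": 900_000, "revenue": 3_000.0},
    {"name": "Airline B",  "exposure":  80.0, "emissions": 650_000, "revenue": 2_500.0},
    {"name": "Software C", "exposure": 200.0, "emissions":   4_000, "revenue": 1_200.0},
]

total_exposure = sum(p["exposure"] for p in portfolio)

# WACI = sum_i (exposure_i / total_exposure) * (emissions_i / revenue_i),
# expressed in tCO2e per USD m of revenue.
waci = sum(
    (p["exposure"] / total_exposure) * (p["emissions"] / p["revenue"])
    for p in portfolio
)
print(f"Portfolio WACI: {waci:,.0f} tCO2e / USD m revenue")
```

Because the metric is a simple exposure-weighted sum, it rolls up naturally from account to portfolio to FI level, which is what makes it usable for the rating roll-ups and stress tests described above.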



Following risk assessment, a natural follow-through in the process is risk monitoring. Primarily, this should analyse the potential financial impact on the FI’s books of clients more exposed to future climate regulations and technological advances. This function, from both an FI and a regulator perspective, is extremely data-driven. Therefore, it is important to assess what information an institution’s clients have, should have, and must build to ensure that the FI fulfils its monitoring and reporting functions.

With adequate data on the table from the earlier function, it is important for FIs to determine whether the risks they are carrying on their books are congruent with the climate risk appetite laid out by the board in forward-looking strategies. This gets done through accurate risk management: valuing the exposures, concentrations, collateral value, and any other climate Key Risk Indicator (KRI) that can drive the FI’s risk profile. As an outcome, the FI may diversify by geography, pull out of concentrations, move its own data centres from sensitive zones, and so on.

Risk management and analysis, and what an FI should be doing, must be clearly forward-looking from the FI’s point of view. Assessing the future becomes important, and scenario analysis is one of the current tools for this activity. Climate scenarios should be identified so an FI can visualize short-, medium- and long-term risks over various horizons, such as ten years. These scenarios should span both physical and transition risk, such as a reduction in carbon intensity of 30% over the next ten years or, conversely, a 30% smaller reduction in fossil power generation for an electric utility. Scenarios translate into risk, and mitigation measures into cost, for both the FI and the companies it finances. It is extremely important for a regulator or an internal auditor to ensure that all scenarios are relevant, clearly laid out, and reflect the data collected, analysed, and reported under statutory requirements where necessary. A good start for FIs may be the Network for Greening the Financial System (NGFS) Climate Scenarios, which explore a range of plausible climate scenarios for forward-looking climate risk assessment.
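The article does not prescribe a calculation; as one hedged, highly simplified sketch of how a transition scenario can translate into credit risk, the code below applies a hypothetical carbon price to each borrower’s emissions, deducts the cost from earnings, and flags borrowers whose interest coverage falls below a threshold. All figures, and the full pass-through assumption, are illustrative only.

```python
# Hypothetical borrowers: EBITDA and interest expense (USD m), emissions (tCO2e).
borrowers = [
    {"name": "Cement Co",   "ebitda": 400.0, "interest": 110.0, "emissions": 2_000_000},
    {"name": "Retailer Co", "ebitda": 250.0, "interest":  60.0, "emissions": 50_000},
]

CARBON_PRICE = 100.0   # USD per tCO2e, an assumed transition-scenario level
MIN_COVERAGE = 2.0     # illustrative interest-coverage threshold

for b in borrowers:
    carbon_cost = CARBON_PRICE * b["emissions"] / 1_000_000  # convert to USD m
    stressed_ebitda = b["ebitda"] - carbon_cost              # full cost pass-through assumed
    coverage = stressed_ebitda / b["interest"]
    flag = "WATCHLIST" if coverage < MIN_COVERAGE else "ok"
    print(f"{b['name']}: stressed coverage {coverage:.1f}x -> {flag}")
```

Under these assumptions the carbon-intensive borrower drops below the coverage threshold while the low-emission one is barely affected, which is the kind of differentiation a climate scenario test is meant to surface.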

conclusion

There is a lot of background work to be done by regulators in the financial sector across the areas we have covered, which are by no means exhaustive. Climate risk is here to stay and is bound to be a major contributor to the risk landscape of FIs. Creating a separate pillar for climate risk will provide the attention and focus it deserves from financial markets and institutions. An independent pillar under Basel is most certainly the need of the hour. Whether this happens, time will tell, but the writing is clearly on the wall. Climate events can no longer be regarded as black swans; they now sit well within the body of the distribution, not its tail.



references

“German Railway: Floods Caused $1.5 billion Damage to Network.” Associated Press, July 23, 2021. https://apnews.com/article/europe-floods-0637d4aaaa9469595b9df8c98b488a2d

“BCBS - Climate related financial risks: a survey on current initiatives.” BIS, April 2020. www.bis.org/bcbs/publ/d502.pdf

“Master Circular – Basel III Capital Regulations.” Reserve Bank of India, July 1, 2015. https://www.rbi.org.in/Scripts/BS_ViewMasCirculardetails.aspx?id=9859

Network for Greening the Financial System (NGFS). “NGFS Climate Scenarios.” Available at https://www.ngfs.net/ngfs-scenarios-portal

peer-reviewed by Elisabeth Wilson

author
Venkat Srinivasan
Venkat Srinivasan works as a consultant and advisor to start-up companies and banks in India. He has worked in this capacity since 2015, helping companies design their business and operational processes with an appropriate system backbone. As part of these assignments he has also worked closely with one of the largest e-commerce players in India, helping it launch key product initiatives in the payments space and design resilient business processes and back-office systems. Prior to embarking as an independent consultant, Venkat worked with GE Capital and Citibank in India for 24 years in the operations and technology space. He has held senior-level appointments in O&T and managed large-scale transformation programs with cross-functional teams in the consumer/digital banking domain. Venkat holds a Master’s degree from the International University of Japan with a major in International Management and is a certified PRM.



Synopsis
2022 was a volatile year for energy markets which, in turn, brought into sharp focus challenges with the industry’s development of models (including XVA), data handling, and business models. The continued growth of energy markets and their volatile nature mean it remains important to advance practices and address these challenges. This article presents a summary of practical conversations among panelists at a Numerix webinar on these major themes.

three themes that characterize trading in the energy markets today

by James Jockle

introduction

In November 2022, Numerix sponsored a webinar where expert panelists addressed XVAs and other risk-related topics within the energy markets. XVA is a catch-all term for the various risk-related and cost-related valuation adjustments applied in running a financial derivatives operation. In the energy markets, XVA is a very complicated and challenging concept, and it is not widely practiced. However, as energy usage has increased in recent years, so has the trading activity in this market. As a result, the application of XVAs in these markets is gaining momentum. This paper looks at three of the core themes that formed the panelists’ discussion and sums up their views of how energy markets are changing before our eyes. Webinar panelists included quant and risk practitioners representing Numerix, global investment banks, and energy companies.

1. Model calibration and hedging for XVA teams have been challenging due to the turmoil in the energy markets.

It is well understood that XVA calculations require stochastic models for the underlying risk factors under consideration, and these models require calibration to market data, ideally to volatility surfaces. In energy markets, particularly when dealing with less liquid underlyings such as gas basis spreads or regional power prices, it is common to find that the required volatility surfaces are unavailable or the data are very sparse. The markets for non-linear products over these underlyings are simply not as developed as they are for the major indices or benchmarks.



In lieu of volatility surfaces, models are often fit to historical data, but this approach can lack responsiveness to recent upticks in volatility, as witnessed throughout 2022. This, of course, feeds into XVA scenarios and the resultant XVA figures, as well as associated risk figures such as PFE (potential future exposure). In short, when we look at the market and observe clearly that there are elevated levels of volatility, we want that reflected in our scenarios and XVA figures, and fitting models to long periods of historical data can impede this.

The panel discussed some strategies to overcome these difficulties. One such strategy is to identify which model parameters require stability and consistency across the entire historical sample period and which can safely vary over the sample period (i.e., those which behave more like processes themselves). The parameters governing reversion and seasonality are examples of the former, while volatility is an obvious example of the latter. Volatility could be allowed to vary according to, say, an EWMA (exponentially-weighted moving average) process during calibration, and the current volatility could be extracted from the recent history and projected forward over the risk horizon under consideration (a minimal sketch of such an EWMA estimate appears below). Naturally, a dedicated stochastic volatility model would avoid these problems altogether, but it was the view of the panel that such models are not widely used in the XVA setting within energy markets.

Hedging requirements have also been impacted by the recent instability in markets as participants seek to rebalance their hedges more frequently. Speed of XVA calculation is the critical factor here, and the group discussed a number of techniques and technologies being employed. Some market participants are making use of high-performance GPU computing and adjoint algorithmic differentiation to achieve a level of performance where all the required sensitivities can be calculated rapidly and accurately.

2. Today, multiple sets of data are observed and modeling is becoming more sophisticated.

From a modeling perspective, energy market participants today look at many different types of data. Previously, traders would look at perhaps two or three different data sets and create and run their data models through Excel. What we are now seeing is people looking at multiple different sets of real-time data and their correlations to different parts of the energy markets. For example, if you are trading LNG (liquefied natural gas), then you are not just looking at gas prices and gas data, but also power data and shipping transportation data. You are looking at whether there will be logistical bottlenecks, whether power plants will switch to gas, and any other interconnected data. Before, we rarely saw traders look at all these different sets of data. There is then the challenge, after acquiring all this data, of putting it into a centralized location and running sophisticated modeling off of it. So now we are seeing different modeling requirements and much more sophisticated analytical models being built. We are seeing AI models being built; we are seeing the use of machine learning. As a result, traders are becoming sophisticated technologists who run their own models and gather their own data.
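Picking up the EWMA point from the calibration discussion in theme 1: the minimal Python sketch below shows how an exponentially weighted estimate reacts to a recent volatility spike that a long-sample estimate smooths away. The decay factor (0.94, the classic RiskMetrics daily setting) and the simulated return series are assumptions for illustration, not the panelists’ calibration.

```python
import numpy as np

LAMBDA = 0.94  # decay factor; 0.94 is the classic RiskMetrics daily setting

def ewma_vol(returns: np.ndarray, lam: float = LAMBDA) -> float:
    """Exponentially weighted volatility: recent observations receive
    geometrically more weight than older ones."""
    var = returns[0] ** 2               # seed with the first squared return
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return float(np.sqrt(var))

# Hypothetical daily log-returns for an illiquid regional power price,
# with a volatility spike at the end of the sample.
rng = np.random.default_rng(7)
calm = rng.normal(0.0, 0.02, size=200)   # ~2% daily vol regime
spike = rng.normal(0.0, 0.08, size=20)   # ~8% daily vol regime
returns = np.concatenate([calm, spike])

print(f"Full-sample vol: {returns.std():.3f}")      # dominated by the calm regime
print(f"EWMA vol:        {ewma_vol(returns):.3f}")  # reacts to the recent spike
```

Projecting the EWMA estimate forward over the risk horizon gives XVA scenarios that reflect today’s elevated volatility, which is exactly the responsiveness the long-sample fit lacks.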



3. Over the last year, we’ve seen banks beginning to trade with small, lower-quality energy-producing firms.

Banks are looking at the future profit potential of trading with newer, smaller energy firms which do not have much history in the market. As a result, we are seeing risk-adjusted rate-of-return measures being increasingly applied to these counterparties, which tend to be wind and solar producers as well as battery manufacturers, areas that banks think could grow over time. Nonetheless, during this market uncertainty there is going to be a premium involved. Commodities are volatile and thus exposures are volatile, so if a bank wants to deal with the smaller names, it is probably going to charge a decent spread.
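The webinar summary does not specify a formula; as one hedged sketch of how a risk-adjusted return measure might translate into the spread charged, the code below prices a minimum spread as expected loss plus a hurdle return on allocated capital, using entirely hypothetical inputs.

```python
# Hypothetical counterparty inputs for a small renewables producer.
pd_1y = 0.04          # one-year probability of default (assumed)
lgd = 0.60            # loss given default (assumed)
exposure = 10.0       # expected exposure, USD m
capital_ratio = 0.12  # capital allocated per unit of exposure (assumed)
hurdle = 0.15         # required return on allocated capital (assumed)

expected_loss = pd_1y * lgd * exposure              # USD m per year
capital_charge = hurdle * capital_ratio * exposure  # USD m per year

# Minimum annual spread (as a rate on exposure) to clear the hurdle.
min_spread = (expected_loss + capital_charge) / exposure
print(f"Minimum spread: {min_spread:.2%}")  # 0.04*0.60 + 0.15*0.12 = 4.20%
```

The sketch makes the panel’s point quantitative: a counterparty with little history gets a higher assumed default probability and a larger capital allocation, and both terms feed directly into a wider spread.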

a growing market

The Numerix webinar took place at a timely moment, as the energy markets are constantly changing and have recently experienced extreme fluctuations in price and other dynamics, such as altered supply and demand, disrupted supply chains, cancelled pipelines, and higher volatility. However, the energy market is growing at a tremendous rate, and that means growth in trading opportunities as well.

author
James Jockle
As Executive Vice President and Chief Marketing Officer of Global Marketing & Corporate Communications, Mr. Jockle leads Numerix’s global marketing and corporate communications efforts, spanning a diverse set of solutions and audiences. He oversees integrated marketing communications to clients in the largest global financial markets and to the Numerix partner network through the company’s branding, electronic marketing, research, events, public relations, advertising, and relationship marketing. Since joining Numerix in 2008, Mr. Jockle has launched the organization’s award-winning thought leadership program, bringing to light challenges and insights from Numerix market experts.

peer-reviewed by Carl Densem



Synopsis
Regardless of the amount that organizations are willing to spend on cybersecurity, most retain some vulnerability to malicious attacks. This speaks to the need to tailor mitigation measures to the specific organization in question rather than applying them uniformly. The author outlines the tradeoffs organizations face at this juncture and potential ways for risk managers to add firm value in the cybersecurity risk mitigation process. These value contributions include providing insights into a firm’s cybersecurity insurance purchase decision, as well as imparting specific, sensible measures for minimizing firm risk through several known pathways of attack.

organizational cybersecurity: do the basic things correctly

by Ted Belanoff

introduction

Organizational cybersecurity is an intricate and resource-intensive endeavor. Virtually all sizeable organizations allocate significant resources to their cybersecurity effort, a cost center, in order to prevent potential data losses and compromises. Paradoxically, despite these investments, global cybersecurity endeavors have proven inadequate at every level. Repeated compromises of hundreds of governmental and corporate systems, at all levels of purported internal cybersecurity maturity, have occurred in the past several years. From the US’s National Nuclear Security Administration to ISC2 (the International Information System Security Certification Consortium), the most prominent global cybersecurity credentialing body, no organization has been immune to data and security compromise.

My goal in this article is not to paint a bleak picture of the current state of cybersecurity for the sake of pessimism. Instead, I advocate for a realistic perspective when approaching firm-level organizational cybersecurity. While minimizing the likelihood of compromise is important, it is very unlikely, if not impossible, to completely eliminate threats. As opposed to prioritizing threat elimination at any cost, mitigation goals should remain grounded and be evaluated through a P&L-based framework. The appropriate mitigation measures should be tailored to the organization’s size, the type of data secured, and its specific risks and technical structure. As per basic risk management principles, the firm’s operational cybersecurity strategy should aim to determine which cybersecurity risks can be accepted, which should be mitigated (by employee training/policies and enterprise security solutions), and which should be transferred (through available cybersecurity insurance coverages).



cybersecurity insurance

With regard to the transfer of cybersecurity risk, a functional split occurs between two organizational approaches: purchasing cybersecurity insurance or not purchasing it. Cybersecurity insurance, while costly, does provide organizational value. Notably, it provides indemnity payments in the case of compromise, effectively transferring risk to a third party at the price of ongoing premiums and smoothing periodic cybersecurity P&L. Furthermore, it aligns the incentives of the insurance company with those of the organization, potentially reducing the organization’s risk through the enterprise guidance and expertise an insurer may preventatively provide. Whether the insurance route is worthwhile depends on premium pricing compared to assessed risk exposure, as well as on the operational simplicity and peace of mind cybersecurity insurance ideally creates.

A firm’s risk manager has an excellent opportunity to create organizational value by gauging firm risk tolerances and providing insights into cybersecurity insurance purchase decisions. Ultimately, these decisions should be viewed in the same manner as any other firm purchase of non-mandated insurance. Actuarially speaking, all correctly priced cybersecurity insurance policies should provide a negative expected value (on average) while shielding against catastrophic loss (a simple worked example follows at the end of this section). The risk manager adds value by engaging stakeholders and senior management, aligning the firm’s attitude toward this high-level risk management consideration. Moreover, the risk manager should provide insights into potential breach costs versus insurance expenses within the context of available insurance policies and current cybersecurity challenges. One current P&L consideration is the evolving regulatory landscape (such as the SEC’s new 10-K cybersecurity disclosure requirements adopted in July 2023, which outline specific compromise and disclosure penalties). Another current purchase focus should be insurance policy stipulations, particularly those relating to state-sponsored attacks, which can vary in likelihood and P&L impact based on the organization’s industry positioning and the geopolitical environment.
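To make the negative-expected-value point concrete, here is a minimal sketch with entirely hypothetical numbers: the premium exceeds the actuarial expected loss (the insurer’s margin), yet insuring caps the catastrophic outcome.

```python
# Hypothetical annual figures for one firm.
p_breach = 0.03            # probability of a major breach in a year (assumed)
breach_loss = 8_000_000.0  # uninsured cost of that breach, USD (assumed)
premium = 300_000.0        # annual cyber insurance premium, USD (assumed)
retention = 500_000.0      # deductible retained by the firm if insured (assumed)

expected_loss_uninsured = p_breach * breach_loss
expected_cost_insured = premium + p_breach * retention

print(f"Uninsured expected annual loss: ${expected_loss_uninsured:,.0f}")  # $240,000
print(f"Insured expected annual cost:   ${expected_cost_insured:,.0f}")    # $315,000
print(f"Worst case uninsured: ${breach_loss:,.0f}; "
      f"insured: ${premium + retention:,.0f}")
```

Under these assumptions the policy costs roughly $75,000 per year in expectation, the insurer’s margin, but it converts a possible $8 million loss into a bounded $800,000 outlay. Whether that trade is worthwhile is precisely the risk-tolerance question the risk manager should frame for management.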

cybersecurity trade-offs

With the previous points outlined, and with a focus on cybersecurity operational P&L, the most valuable contributions to an organization’s internal cybersecurity policy lie in the measures that provide the highest marginal preventative benefit at the lowest marginal cost. Despite the prominence and promotion of expensive cybersecurity solutions by the cybersecurity industry, I strongly believe that, from a P&L perspective, the priority should be basic employee training and management of the simplest errors of the technically unsophisticated on the user side. Recent history (including the recent MGM attack) indicates that many of the largest-scale and most disruptive cybersecurity compromises are not due to a firm’s lack of technical sophistication. For average organizations, technically sophisticated zero-day attacks are not the most threatening. Instead, the basic threats of social engineering (phishing/vishing), password reuse, and incorrect file/system permissioning are an order of magnitude more likely to be damaging. Thankfully, these are risks that can be substantially mitigated by strong internal cybersecurity control practices that are not particularly resource-intensive.



doing the basics right

Perform System Backups Frequently
Ransomware is one of the most common and disruptive cybersecurity compromises. Ransomware encrypts an organization’s files and extorts payment from the victim for decryption and retrieval. Frequent system backups sidestep ransomware by allowing restoration to a recent save point. Reductively, if an attacker ransoms a primary system on Wednesday and a complete system backup was performed on Tuesday, the organization can restore to Tuesday’s backup, decline to pay the ransom, and substantially mitigate organizational data loss. The cost of regular system backups is generally far lower than potential ransom payments.

Implement Safe Device Policies
USB-A connected devices tend to be trusted by default, yet they pose a potential compromise vector. Many devices contain executables to assist with their initial installation. These executables are generally benign if the hardware is provided by a legitimate vendor, but the default trust creates an opening for attack. Therefore, all employee devices should be approved prior to introduction to any enterprise system. Adding a training clause that mandates IT approval for all employee devices is a low-cost measure that helps prevent the introduction of problematic devices, including malware carriers and malicious drives (for example, USB kill drives).

Additionally, the US FBI has recently drawn attention to the potential for data compromise through public-use USB charge ports. This is not a new issue; it stems from USB ports being used as a charging utility when their intended use is data transfer. Distributing USB data blockers at the enterprise level is an easy, low-cost mitigation that also promotes cybersecurity awareness throughout the organization.

conclusion

The cybersecurity landscape is dynamic, with emerging risks and mitigation measures constantly evolving. The ideal organizational cybersecurity posture avoids both alarmism and negligence, and aims to minimize irrational actions at either extreme. The pragmatic and rational risk manager aims to calmly inform management about the cybersecurity landscape, align organizational cybersecurity risk tolerances with management appetites, and identify cost-effective methods to substantially reduce cybersecurity risk. In many cases, the low-cost, high-benefit mitigation measures involve employee training and minimizing user behavioral risk.



references

Turton, W., Riley, M. and Jacobs, J. (2020, December 17). “U.S. Nuclear Weapons Agency Hacked as Part of Massive Cyber-Attack.” Time. https://time.com/5922897/us-nuclear-weapons-energy-hacked/

Mazor, C., Herrygers, S. and Danola, C. (2023, July 30). “SEC Issues New Requirements for Cybersecurity Disclosures.” Deloitte. https://dart.deloitte.com/USDART/home/publications/deloitte/heads-up/2023/sec-rule-cyber-disclosures

Zhang, D. (2023, May 22). “Cyber Insurance Market in Turmoil Over State-Backed Attacks.” Bloomberg Law News. https://news.bloomberglaw.com/insurance/cyber-insurance-market-in-turmoil-over-state-backed-attacks

Morrison, S. (2023, Sep 23). “The Chaotic and Cinematic MGM Casino Hack, Explained.” Vox. https://www.vox.com/technology/2023/9/15/23875113/mgm-hack-casino-vishing-cybersecurity-ransomware

McLaughlin, J. (2023, June 25). “Cyberattacks on hospitals ‘should be considered a regional disaster,’ researchers find.” NPR. https://www.npr.org/2023/06/25/1184025963/cyberattacks-hospitals-ransomware

Higgins, E. (2023, April 13). “FBI warning against using public charging ports generates buzz.” IT Brew. https://www.itbrew.com/stories/2023/04/13/fbi-warning-against-using-public-charging-ports-generates-buzz

author
Ted Belanoff, ISSEP, PRM, CAIA, EA, CIA
Ted is an entrepreneur and student. Over the past ten years, he has worked a number of diverse roles in the insurance, fintech, and hedge fund industries. He has received technical certifications corresponding to every specified role classification in the U.S. Department of Defense’s DoD 8140 cybersecurity framework. His hobbies include biking and playing chess. Depending on the time of year, he lives in New York City, Northern California, or Northern Maine.

peer-reviewed by Peter Ding



INTELLIGENT RISK knowledge for the PRMIA community ©2023 - All Rights Reserved Professional Risk Managers’ International Association

