Mark Norman Head of Content CeFPro mark.norman@cefpro.com
Sales & Advertising
Chris Simou Head of Sales CeFPro chris.simou@cefpro.com
Design
Natasha Marino Head of Design CeFPro natasha@cefpro.com
A warm welcome to the July edition of Connect Magazine.
You have only to read CeFPro’s 2025 Fintech Leaders report to know that AI has swiftly gone from a futuristic concept to a transformative force that poses tangible challenges for all of us.
The benefits are undeniably substantial – real-time analysis of vast datasets, improved predictive analytics and operational efficiency.
But AI also presents significant challenges, most notably around data privacy, algorithmic bias, and the explainability and audit of AI decisions.
The European Union’s AI Act, set to be implemented in stages by 2026, aims to address these issues by establishing a regulatory framework that ensures AI systems are safe and respect fundamental rights.
This edition of Connect Magazine explores the dual nature of AI in financial services – its capacity to drive innovation and efficiency, and the imperative to responsibly manage its associated risks around ethics, privacy and governance – and offers a deep dive into some of the issues that dominate the most talked about emerging tech in a generation.
How might AI reshape our industry? What steps are necessary to harness its potential? And how might we build resources to safeguard against its pitfalls?
We don’t profess to have the answers to those questions, but the insight you’ll read here will certainly inform the debate that will help us find them.
We hope you enjoy this edition of Connect Magazine. If you would like to guest edit a future issue, or you think your organization might benefit from the advertising opportunities CeFPro offers, please get in touch (contact details are overleaf).
The next edition of Connect Magazine will be out on August 25.
WEAPONISED BY CODE:
Why AI-Driven Fraud Is Outpacing Law Enforcement
Ellie is Growth Marketing Manager at CeFPro
Artificial intelligence has introduced a seismic shift in the world of financial crime – not only in how it is committed, but in how institutions and law enforcement respond.
What was once the domain of identity theft and rudimentary scams has evolved into a landscape of deepfakes, voice cloning, synthetic identities, and AI-generated fraud schemes capable of disrupting entire communities.
The acceleration of this threat has left law enforcement agencies, in many instances, outpaced and under-resourced.
From an enforcement perspective, AI presents both an unprecedented opportunity and a pressing challenge. It has become clear that to disrupt the sophisticated methods used by today’s fraudsters, investigators must think like them.
At CeFPro’s recent flagship Risk Americas conference, delegates heard from a range of senior risk professionals about the challenges that law enforcement faces in
keeping pace with ever more sophisticated fraudsters.
One board-level member of a leading US-based bank told the conference that criminal actors are leveraging AI to simulate authority, replicate voices, manipulate data, and create fictitious companies with convincing precision.
In turn, investigators must adopt a cybercriminal’s mindset to anticipate how these tools might be exploited.
The difficulty, the risk manager argued, lies not only in tracking the
technology, but in educating victims – and even colleagues – on its risks.
A common misconception persists that AI is infallible or inherently secure. In reality, misplaced trust in AI-generated communications or transactions is fueling a rise in reputational and financial harm.
The speaker argued that one of the more significant issues risk managers face is that the broader public is still unfamiliar with the mechanics of AI-driven scams, making it harder to contain the damage or trace its source.
Each case becomes a race against time, further complicated by fragmented jurisdictional authority and varying levels of technological readiness among enforcement bodies.
The manager outlined one particular case – a 16-year-old using voice cloning technology to impersonate a banking executive and trigger a citywide emergency – that illustrated the urgency of this issue.
It wasn’t just a prank – it was a demonstration of how easily AI tools can be weaponized to sow chaos.
In this instance, a multi-agency task force had to collaborate with financial institutions to understand the method of attack and identify the perpetrator.
The incident made one thing painfully clear: law enforcement cannot tackle this alone. The future of financial crime prevention hinges on stronger public-private partnerships, where banks, regulators, technology providers, and law enforcement agencies operate in lockstep.
While agencies are beginning to receive training on AI-related threats, the pace is too slow.
Much of the knowledge transfer is reactive rather than proactive,
leaving investigators learning on the job while cases grow in complexity.
The private sector often has the resources to hire top data scientists and develop cutting-edge models. Government agencies, by contrast, must rely on limited budgets and outdated frameworks that are ill-suited for this digital arms race.
To build resilience, what’s needed is more than just investment in technology. It is a coordinated effort to foster real-time information sharing, joint task forces, and sustained training that keeps pace with evolving threats.
Regulators are starting to dig deeper, examining not just the results of AI systems, but the training data that underpins them.
Biased or insufficient data can render even the most sophisticated models unreliable, undermining the very efforts they aim to support.
Institutions must be prepared to demonstrate that their models are fair, explainable, and effective – not just functional.
There remains a serious gap between the private sector’s rapid adoption of AI tools and the public sector’s ability to understand and regulate them.
In the coming years, as AI becomes more embedded in financial systems, the stakes will only rise. The tools used to commit financial crime will become more convincing and harder to trace.
Law enforcement’s ability to respond must evolve in parallel. What’s at stake is not just the integrity of financial systems, but public trust in the institutions charged with protecting them.
The race is on – not only to adopt AI, but to do so responsibly, ethically, and collaboratively.
The Pressure’s On. The Stakes Are High. Let’s Get Smarter.
Sept 23-24, 2025 | London
Sanctions are evolving. Criminal networks are innovating. Regulators aren’t waiting. Join the sharpest minds in AML and sanctions this September as we cut through the noise and get real about risk.
THE DEEPFAKE DECEPTION:
Why Financial Fraud’s New Frontier Demands an Old-School Defense
Dalit Stern is the Managing Director, Enterprise Senior Fraud Risk Officer, Risk & Compliance with TIAA in New York. Previously, she ran her own consulting business and was a partner at PricewaterhouseCoopers. The views expressed in the following article are her own and do not necessarily reflect TIAA’s positions.
The room may have been drowsy after lunch, but the conversation at the keynote afternoon session at CeFPro’s flagship Risk Americas conference was anything but.
As the panel shifted to the rising threat of deepfakes, it became clear we are standing at the edge of a new frontier in fraud. This isn’t science fiction anymore.
Deepfakes have arrived – and they’re walking through the front door of financial institutions.
We’re not just talking about doctored videos of celebrities, talking babies or other social media stunts. In our industry, deepfakes take the form of synthetic voices, fake video interviews, and AI-crafted identity documents.
Fraudsters can convincingly impersonate job applicants, executives issuing instructions, or customer support staff on a phone call. The targets are the same – your identity, your data, your money, your access – but the means are far more sophisticated.
This evolution isn’t driven by lone hackers. Behind many of these attacks are organized crime groups and, increasingly, state actors.
One case discussed on the panel involved North Korean IT workers infiltrating U.S. companies using deepfaked interviews and remote
access tools. IT employees were hired, onboarded, and entrusted with internal systems without ever physically entering a building. The perimeter didn’t fail. Trust did.
These aren’t theoretical threats – they’re active attack vectors. Still, the increasing use of deepfakes isn’t rewriting the purpose of fraud.
It’s just offering new ways to exploit the same old vulnerabilities. That’s the paradox: what we’re facing is both cutting-edge and timeless.
In short, the tactics have changed, but the goals haven’t. That means our response doesn’t require starting from scratch – but it does demand evolution.
Our first task is to reassess where and how these deepfakes can compromise existing controls and processes. That includes customer onboarding, employee hiring, vendor interactions, and executive-level communications.
Then, we continue to fight deepfake fraud while acknowledging that even the most advanced detection systems are more effective when paired with human oversight and interdepartmental coordination. Fraud prevention has long relied on a layered system of controls, including device ID, behavioural analytics, and document verification. Similarly, addressing fraud and AI-generated deepfakes cannot rely on a single tool or feature.
A powerful concept that came up during the session was the “fusion model”.
Companies should continue to take a holistic approach: instead of handing the risk off to the cyber team or the fraud team alone, we need cross-functional ownership. Fraud, Risk, HR, investigations, compliance, and frontline staff must work together.
We also explored the promise – and limits – of adding ‘liveness detection’ to our existing risk signals.
This technology detects whether a face in a video or an image, or a voice, is real, using subtle signals like blood flow or environmental interaction.
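To make that layered, fused thinking concrete, here is a minimal sketch in Python of how several independent risk signals – liveness among them – might be combined into one composite score. Every signal name, weight, and threshold below is invented for illustration; a real model would be calibrated against historical fraud outcomes.

```python
# Illustrative only: signal names, weights, and the escalation threshold
# are hypothetical, not a production rule set.
from dataclasses import dataclass

@dataclass
class RiskSignals:
    device_id_match: float    # 0.0 (unknown device) .. 1.0 (trusted device)
    behavioural_score: float  # 0.0 (anomalous) .. 1.0 (typical for this user)
    document_check: float     # 0.0 (failed) .. 1.0 (verified)
    liveness_score: float     # 0.0 (likely synthetic) .. 1.0 (likely live)

WEIGHTS = {
    "device_id_match": 0.2,
    "behavioural_score": 0.3,
    "document_check": 0.2,
    "liveness_score": 0.3,
}

def fused_risk(signals: RiskSignals) -> float:
    """Combine independent signals into one risk score in [0, 1]; higher is riskier."""
    trust = sum(w * getattr(signals, name) for name, w in WEIGHTS.items())
    return 1.0 - trust  # low combined trust -> high risk

# Example: clean documents but a poor liveness result still escalates.
case = RiskSignals(device_id_match=0.9, behavioural_score=0.8,
                   document_check=1.0, liveness_score=0.1)
if fused_risk(case) > 0.25:  # hypothetical escalation threshold
    print(f"escalate to human review (risk={fused_risk(case):.2f})")
```

The point is the design, not the numbers: no single signal, liveness included, clears or blocks an interaction on its own.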
Our controls are improving fast, but are not bulletproof.
There’s also a broader cultural challenge. We trust what we see and hear on our screens. In the post-pandemic era, we became comfortable onboarding employees and transacting with customers remotely and digitally.
Deepfakes have an interesting impact on biometrics as a fraud control. According to Gartner, nearly one-third of enterprises will abandon standalone biometric authentication by 2026.
And let’s be honest, the human element remains our most vulnerable point.
Like other fraud schemes, deepfake-driven fraud thrives on urgency. The voice claiming to be your boss needs you to act fast. The ‘customer’ demands their funds now and plays on your empathy. Social engineering has always played on emotion –
deepfakes just add realism.
That’s why training isn’t optional anymore. Everyone from your employees to your clients must learn to pause, ask, and verify.
We must help our customers, especially the elderly and those who are less tech-savvy, understand that not all digital interactions are authentic. It’s a difficult balance: maintain confidence in our services while encouraging a healthy level of doubt.
Technology can help, but only when governed wisely. Cloud-based fraud tools and real-time analytics offer much-needed agility.
But if our internal controls, governance frameworks, and regulatory awareness don’t keep pace, the tools won’t be enough. Fraud prevention has to be continuously pre-emptive, collaborative, and relentless.
Deepfakes are serious. But so are we. If we remain agile, aware, and aligned, we can keep fraudsters in check – even those hiding behind a digital mask.
Deepfakes explained
Deepfakes are artificially created images, videos, and audio designed to imitate human characteristics. Deepfakes use a form of artificial intelligence (AI) called deep learning and may take different forms, including a face-swap video, which transposes one person's facial movements onto someone else's features, or voice cloning, which copies a person's vocal patterns in order to digitally recreate and alter their speech.
This September, the heart of London becomes the nexus for global financial crime experts as Financial Crime Europe 2025: AML & Sanctions Edition returns to ETC Venues, Eastcheap. With a lineup of visionary speakers, 20+ high-impact sessions, and a sharp focus on innovation, regulation, and cross-functional collaboration, this event promises insight and transformation.
From redefining governance frameworks to unlocking the full potential of AML and sanctions team integration, our standout speakers will take the stage to share battle-tested strategies and forward-looking solutions. Whether you’re on the frontlines of compliance or leading strategic risk decisions, this is where the conversation evolves, and where tomorrow’s playbook takes shape.
Financial Crime Europe: AML
& Sanctions Edition
Stand Out Speakers
Shereen George Head of Sanctions
BNP Paribas
Shereen George began her career as a trader before pursuing her passion for geopolitical issues, transitioning into the field of compliance. Over the past fifteen years, she has specialized in sanctions compliance, working across several leading investment banks. Shereen’s experience spans the full spectrum of financial crime topics, with a particular focus on managing comprehensive sanctions compliance frameworks. This includes developing policies, overseeing systems and controls, providing complex advisory support, conducting risk assessments, managing live trade transactions, and delivering training programs.
Saurav Banerjee
Head of Risk Intelligence UBS
Saurav Banerjee is a seasoned professional specializing in risk intelligence, strategy transformation and geopolitical risk management. Saurav currently holds the role of Global Head of Risk Intelligence at UBS. He leads a global team of data analysts and investigators to uncover and manage risks driven by global megatrends.
In his previous roles at UBS, Saurav has successfully built and led teams focused on digital assets compliance and geopolitical risk. He set up UBS’s first comprehensive risk framework for digital assets, led strategic initiatives at board level, and drove global transformation efforts across financial crime and emerging risk intelligence functions.
Gareth Dothie
Head of Domestic
Corruption Investigations
City of London Police
Gareth Dothie is Head of Investigations and Intelligence for the Domestic Corruption Unit, a new Home Office-funded department he helped found within the City of London Police. He is building national capabilities and inter-agency coordination to tackle bribery and corruption and wider economic crime across the UK.
Gareth is an experienced fraud and financial crime professional who has held a number of national lead roles, including Head of Fraud Operations. Fraud Operations focusses on nationally significant and high-harm fraud and money laundering investigations and works with partners at home and overseas to protect the UK from economic harm.
Svetlana Zarubina-Thomas
Head of Sanctions Compliance Nordea
Svetlana Zarubina-Thomas is Head of Sanctions Compliance at Nordea. Prior to joining Nordea, Svetlana headed the EMEA Sanctions Compliance function at UBS and worked at Deutsche Bank in their UK&I Sanctions Team. Previously, Svetlana held a number of financial crime compliance roles within global financial institutions.
Neil Giles
President
STOP THE TRAFFIK
Neil Giles began his journey with STOP THE TRAFFIK in 2007 after a chance meeting that sparked his dedication to countering human trafficking. Since then, he has become a leading figure in the use of intelligence to disrupt traffickers’ business models. The intelligence products he has helped to develop are now actively driving prevention efforts and law enforcement investigations worldwide. In 2017, he co-founded the Traffik Analysis Hub to further these efforts.
With an extensive background in law enforcement, Neil has served with New Scotland Yard, Regional and National Crime Squads, the National Criminal Intelligence Service, and the Serious Organised Crime Agency (SOCA). He has led major international operations against organised crime, including in his role as the UK Law Enforcement Attaché to North America, based at the British Embassy in Washington, D.C.
AI IS LEARNING TO DISCRIMINATE
… and Financial Institutions Are Letting It Happen
Shawn Tumanov has been a Data, Privacy, Model, AI and Governance Executive at GEICO since May 2024. Prior to this he spent nine years as Director of Data & Analytics (AI) Governance at BMO Financial Group in Toronto.
In the rush to integrate artificial intelligence into core financial processes, a troubling truth is emerging around the ethics of technology and the ability of risk teams to effectively manage it and keep it honest in a compliance context.
The problem is that AI systems are not inherently ethical. That, in itself, is old news – something we’ve known for a while now. Where it gets tricky is in the compliance, regulatory and – by extension – reputational implications of that truth.
And ultimately that comes down to how well-equipped financial services are to implement, and then, crucially, monitor and audit the technology to ensure it’s operating within key regulatory and ethical parameters.
The industry is often too slow, too fragmented, or too commercially driven to stop biased decision-making before it causes real harm.
At the heart of this issue is not just technology, but governance. According to Shawn Tumanov, Data, Privacy, Model, AI and Governance Executive at GEICO, the most pressing AI risks facing financial institutions are not technical – they’re ethical, and the clock is ticking.
“Models are not really programmed to be biased,” said Tumanov, speaking at CeFPro’s recent Advanced Model Risk conference in New York. “But models learn from data. If we get data that’s biased, it’s going to perpetuate those stereotypes or that bias.”
He points to credit reporting history as a prime example of this problem, where structural inequalities have created historical data sets that favour certain ethnicities over others. When these data sets are used to build AI systems without critical review, the outcome isn’t innovation, it’s discrimination – and discrimination at scale.
For Tumanov, the ethical failings begin far earlier than many realise.
“From the beginning, we need to think through what is the information and data that we’re using, and how are we updating that data?” he said. “Do we have permission to use the data? Was it collected for the purpose we’re now applying it to?”
It’s a foundational question that most financial institutions still haven’t answered consistently or transparently. Using data collected for one purpose – say, mortgage underwriting – to fuel another, like marketing, risks not just customer trust but also regulatory action.
Regulatory expectations are already catching up. Tumanov points to developments like the California Consumer Privacy Act and recent efforts by some U.S. states to require explainability for insurance decisions.
“Pretty soon, there may be occasions or specific use cases where we have to provide a very detailed explanation as to the reason for declines or adverse actions,” he said.
This will radically reshape how AI is developed and deployed in financial services – from data management to model design to frontline application by customer-facing staff.
The stakes are especially high for models that affect people’s lives – loan approvals, job applications, insurance pricing.
“If you’re developing a model that scores applicants for jobs, you probably need to have more explainability and transparency compared to a model that’s predicting when a server is going to go down,” said Tumanov. The former requires defensibility and clarity; the latter, less so.
But the real dilemma, he says, emerges once bias is detected. “If you identify a model as biased, what do you do with that? Do you go back and recreate the entire model? Do you remove a certain feature?”
These are not just technical decisions; they’re financial ones. A biased model discovered late in the development
cycle could cost millions to fix or delay. That’s why, he emphasised, governance must be embedded from the outset.
Third-party tools and open-source packages exist to test for fairness, Tumanov noted, but adoption is patchy, and understanding statistical bias versus practical bias is a nuanced challenge many organisations have yet to grasp. What’s needed is a more consistent, collaborative approach to model governance across the industry.
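As a rough sketch of the kind of check those packages automate, the snippet below computes a simple disparate-impact ratio on hypothetical approval counts. The figures are invented, and the 0.8 cut-off is the familiar ‘four-fifths’ heuristic, not a legal standard or anything Tumanov prescribes.

```python
# Synthetic example: compare approval rates across two applicant groups.
approvals = {
    # group -> (approved, total applicants) - hypothetical figures
    "group_a": (80, 100),
    "group_b": (55, 100),
}

rates = {g: approved / total for g, (approved, total) in approvals.items()}
ratio = min(rates.values()) / max(rates.values())

print(f"approval rates: {rates}")
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # "four-fifths rule" heuristic
    print("potential adverse impact: review features and training data")
```

A gap that is statistically detectable is not automatically practical bias – exactly the nuance Tumanov says many organisations have yet to grasp.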
“The better we’re all aligned as an industry, the better conversation we’re able to have with our stakeholders,” he said. “No one competes on model risk management practices.”
AI may be the future of financial services, but if left unchecked, it risks replicating and scaling the very inequalities the industry should be working to solve. Tumanov’s warning is simple: ethics, not efficiency, must come first.
NEWS WHAT'S BEEN HAPPENING...
Round up of news stories in July
Risk & Finance in Focus: Latest Headlines
FCA Slams Monzo with £21m Fine Over ‘Fake Addresses, Real Failures’ Scandal
Monzo has been fined £21 million by the Financial Conduct Authority after thousands of fraudulent accounts were opened using fake or high-profile UK addresses, including Buckingham Palace. The FCA said Monzo’s financial crime controls failed to keep up with its explosive growth, exposing serious compliance flaws. While the digital bank claims to have since reformed, regulators warn the wider fintech sector remains vulnerable. View here >
Why Institutional Investors Are Rethinking Data and Liquidity Strategy
Institutional investors are facing mounting challenges as private and public assets converge in increasingly complex portfolios. According to a new report from J.P. Morgan, modernizing data management and liquidity tools may be the key to sustaining performance, managing risk, and ensuring compliance in volatile markets.
View here >
Santander Cements Top 4 Place Among UK’s Mortgage Lenders with £2.65bn TSB Buy-Out
Santander has agreed to acquire TSB for £2.65 billion, a move that will position it as the UK’s fourth-largest mortgage lender. The deal expands Santander’s customer base and deposit pool, while allowing Banco Sabadell to shore up funds amid pressure from rival BBVA. The acquisition, subject to shareholder approval, is expected to complete in early 2026.
View here >
Texas Bank Collapse Sparks Second U.S. Failure of 2025
Santa Anna National Bank in Texas has become the second U.S. bank to fail in 2025 amid suspected fraud and asset shortfalls. The FDIC stepped in after regulators found the bank in unsafe condition. Its insured deposits have been assumed by Coleman County State Bank. The failure is expected to cost the Deposit Insurance Fund $23.7 million.
View here >
Operational Resilience the New Battleground for Trust in Finance
In an era where even minutes of downtime can cost millions, operational resilience has become a strategic imperative, not just a compliance exercise. Writing in Mortgage Finance Gazette, Warren Higgins, CIO at Phoebus Software, argues that institutions must go beyond disaster recovery and embed resilience by design to earn and keep trust in a high-stakes, always-on financial world.
View here >
What’s Driving Tech Investment? (Hint: It’s Not the Customers)
Discover what’s leading the charge >
BREAKING THE MOULD:
Why Global MRM Needs a Culture Shift
Rita Gnutti is currently the Head of Internal Validation and Controls in the Head Office of Italian banking group Intesa Sanpaolo, where she has spent the last 18 years. Prior to that, she was a lecturer on the Certificate of Bank Treasury Risk Management (BTRM).
Model Risk Management (MRM) may have once been the quiet engine room of a bank’s risk framework, but with artificial intelligence now embedded across financial services, it’s rapidly becoming the focal point of strategic, technological and regulatory friction.
The reality of these dynamics was brought into sharp relief at CeFPro’s recent Model Risk Management conference in the U.S., exposing not only the growing influence of AI but also the widening cultural and regulatory divide that exists between Europe and the U.S.
From accelerating model validation to reshaping risk inventories, AI is forcing financial institutions to rethink the role and design of MRM. But for European banks, the path forward is complicated by regulatory intensity, legacy mindsets, and structural silos that some argue risk turning MRM into a constraint rather than a catalyst.
Following the event, Rita Gnutti, Head of Internal Validation and Controls at Intesa Sanpaolo, summed up this cultural contrast, saying: “In the U.S., flexibility is very important. MRM is expected to be integrated with the business. It’s seen as an opportunity, not a constraint. In Europe, the function was born out of regulatory compliance. While that’s evolving, the shift is not complete.”
That evolution is being spurred by necessity. As banks expand their use of AI tools – many of which fall outside the traditional definition of a model – MRM inventories are being stretched to accommodate a broader range of algorithmic systems.
“We’re already seeing the inventory used as a repository for AI solutions, even when they’re not technically ‘models’,” Gnutti said. “It’s becoming more difficult to draw a clear line, and this is pushing institutions to rethink their governance systems.”
In Europe, that rethink must occur within a far more rigid regulatory landscape. Gnutti points to the Fundamental Review of the Trading Book (FRTB) as just one example where EU regulations diverge significantly from U.S. norms.
“There are different treatments for the same types of risk, like sovereign exposure under the internal model approach,” she said. “This creates an unbalanced playing field and adds a layer of complexity for European banks that’s not as pronounced elsewhere.”
And in recent months, that complexity has only deepened further. In February 2025, the European Commission issued new guidelines clarifying the definition of an AI system under the AI Act (Regulation EU 2024/1689), bringing financial institutions a step closer to binding obligations around transparency, explainability and operational resilience.
For MRM leaders, this presents both an opportunity and a burden. “It is essential now to pursue a multifunctional approach,” said Gnutti. “MRM, the Data Office, Compliance, and model users all
have to work together. The risks from AI aren’t siloed, so our responses can’t be either.”
That said, the AI revolution also brings potential rewards. Gnutti sees GenAI as a tool that can “accelerate efficiency” in model validation – but only if validation methodologies are upgraded to match.
“The industry is moving in that direction, but we need clear frameworks that reflect the complexity of AI systems,” she said. “Traditional techniques are not always fit for purpose.”
One of the most valuable takeaways from the conference, she noted, was the emphasis on right-sizing governance to organizational risk profiles.
“A risk-based approach makes sense,” she said. “It means tailoring controls to the impact and complexity
of each AI solution, rather than applying a one-size-fits-all policy.”
That kind of nuance may only come through global cooperation. Gnutti believes the MRM community must deepen its exchange of practical experience.
“Theory is important, but what really matters is what works. As the use of models increases, MRM should be a resource – not a gatekeeper – so we can amplify the benefits of these technologies while managing their risks.”
AI may be rewriting the rules, but it seems that model risk managers have the chance to write the next chapter. The challenge in doing that successfully lies in breaking free from compliance-driven legacies and building governance that is agile, inclusive, and above all, human.
5 AI Challenges CROs Must Solve in the Next 6 Months
When it comes to the debate around AI implementation in financial services risk management, much of the talk is about the resource benefits that emerging technology can bring to the table. But that’s just one side of the coin.
On the other is the slightly thornier issue of how AI can be configured to meet escalating compliance and governance demands.
Here are the critical risks that, based on current data, financial services leaders must address right now.
CRO Takeaway:
Compliance is not just a box to check. It’s a moving target shaped by data, ethics, and emerging threats. Solving these five challenges is your playbook for responsible, future-proof AI deployment.
Regulatory Compliance and Legal Uncertainty
Why it matters: The rules are changing fast. Keeping pace with the AI Act, GDPR, and emerging U.S. frameworks is essential for avoiding fines and delays.
Key Stat: Financial firms cite regulatory ambiguity as a top barrier to deploying new AI models. — Debevoise & Plimpton, 2025
Data Quality and Governance
Why it matters: AI is only as good as the data it learns from. Inaccurate, inconsistent, or poorly governed data leads to flawed outputs, audit issues, and reputational risk.
Key Stat: Poor data governance is a leading cause of model failure in financial services AI deployments. — Crowe LLP, 2024
Model Transparency and Explainability
Why it matters: Black-box AI won’t cut it with regulators. CROs must ensure that AI decisions can be clearly explained and audited.
Key Stat: 61% of risk professionals say a lack of explainability limits their ability to govern AI systems. — ISACA, 2024
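By way of illustration only – the article prescribes no tooling – the sketch below uses scikit-learn’s permutation importance, one common model-agnostic explainability check, on synthetic data.

```python
# Minimal explainability check on a synthetic dataset: permutation importance
# measures how much shuffling each feature degrades model performance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance={importance:.3f}")
```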
Cybersecurity in the Age of AI
Why it matters: AI systems are vulnerable to sophisticated cyber threats, from data poisoning to adversarial attacks. Traditional defences aren’t enough.
Key Stat: U.S. regulators are urging firms to apply AI-specific cyber controls amid rising threat concerns. — Debevoise & Plimpton, 2025
Ethics and Bias Mitigation
Why it matters: Unchecked AI can entrench discrimination. CROs must ensure fairness, transparency, and alignment with societal values.
Key Stat: AI tools in finance show measurable bias across race, gender, and socioeconomic status. — Nature HSS Communications, 2025
Operational Risk & Technology Europe
October 14-15, 2025 | London
The Return of OPRISK: Back for 2025
Reuniting the risk and control community with sharper focus and future-ready tools.
Explore the agenda >
TRENDWATCH:
THE SILENT WAR:
Why Operational Risk and Cybersecurity Will Decide the Future of Financial Services
Now, more than ever before, the financial services industry is facing an increasingly complex operational risk and cybersecurity landscape.
The convergence of advanced technologies, evolving regulatory frameworks, and sophisticated threat actors means a proactive and integrated approach to risk management isn’t just a nice-to-have, but a can’t-do-without.
In this feature, we examine the five critical headline trends shaping operational risk and cybersecurity in the financial sector, highlighting both the challenges and opportunities they present.
01 AI-DRIVEN CYBER THREATS
While AI enhances threat detection and response capabilities, it also empowers cybercriminals to develop more sophisticated attacks. According to a report by Business Insider, 80% of bank cybersecurity executives feel they cannot keep up with AI-powered cybercriminals.
These adversaries utilize AI to create realistic phishing scams and automate attacks, increasing their effectiveness and scale. Financial institutions must invest in advanced AI-driven security tools and continuous staff training to counter these evolving threats.
In addition, regulatory constraints often limit the speed at which banks can implement new AI defenses, necessitating a balance between innovation and compliance.
Takeaway: As AI continues to develop, staying ahead of malicious actors will require proactive strategies and industry collaboration.
02 RANSOMWARE RESILIENCE
Ransomware attacks remain a significant concern for financial institutions, with incidents becoming more frequent and costly.
A Picus Blue Report last year highlighted BlackByte ransomware as being among the most challenging to tackle, with only a 17% prevention rate.
These attacks not only disrupt operations but also pose severe reputational and financial risks. To enhance resilience, organizations must implement comprehensive backup strategies, regular data encryption, and robust endpoint protection.
Developing and testing incident response plans are crucial for minimizing downtime and financial losses. Additionally, employee awareness programs can help prevent ransomware infections by educating staff on recognizing and avoiding phishing attempts.
Takeaway: As ransomware tactics evolve, continuous assessment and adaptation of security measures are essential to protect assets and maintain customer trust.
03 SUPPLY CHAIN VULNERABILITIES
The increasing reliance on third-party vendors and complex supply chains introduces significant cybersecurity risks for financial institutions. Supply chain attacks, where cybercriminals target less secure elements within the supply network, have become more prevalent.
A notable example is the 2024 PowerSchool hack, which exploited vulnerabilities in third-party software.
To mitigate these risks, organizations must conduct thorough due diligence on vendors, enforce stringent security requirements, and maintain visibility into the security practices of all supply chain partners.
Implementing robust third-party risk management programs and continuous monitoring can help identify and address potential vulnerabilities.
Takeaway: As supply chains become more interconnected, proactive approaches to safeguarding them will be critical in ensuring overall cybersecurity resilience.
04 REGULATORY COMPLIANCE AND DIGITAL RESILIENCE
Regulatory bodies are intensifying their focus on cybersecurity and operational resilience within the financial sector.
In the European Union, the Digital Operational Resilience Act (DORA) mandates that financial entities ensure they can withstand and recover from all types of ICT-related disruptions and threats.
This regulation, effective from January 2025, aims to harmonize national regulations and strengthen the financial market against cyber risks. Similarly, the UK’s proposed Cyber Security and Resilience Bill seeks to update existing regulations to enhance cyber defenses and resilience.
Financial institutions must adapt to these evolving regulatory landscapes by investing in robust cybersecurity frameworks, conducting regular risk assessments, and ensuring compliance with new standards.
Takeaway: Dynamic engagement with regulators and industry peers will facilitate smoother transitions and bolster overall digital resilience.
05 QUANTUM COMPUTING THREATS
Quantum computing poses a looming threat to current cryptographic standards used in financial services.
As quantum computers advance, they could potentially break widely used encryption methods, compromising the confidentiality and integrity of financial data.
A study by the ISB Institute of Data Science indicates that the banking, financial services, and insurance (BFSI) sector is inadequately prepared for these emerging threats, with a low average readiness score in post-quantum cryptography.
To address this, institutions must begin transitioning to quantumresistant cryptographic algorithms, as recommended by the National Institute of Standards and Technology (NIST). Implementing post-quantum cryptography (PQC) standards and developing migration strategies are essential steps toward safeguarding data against future quantum-enabled cyberattacks.
Takeaway: Early adoption and investment in quantum-safe technologies will be critical in maintaining trust and security in the financial sector.
By embracing innovation, enhancing operational resilience, and fostering a culture of continuous improvement, institutions can effectively manage operational and cybersecurity risks, ensuring the stability and integrity of the financial system.
WHY YOUR THIRD-PARTY RISK STRATEGY
SHOULDN’T BE KEPT UNDER WRAPS
Kelly Lake is a Third Party Risk Manager at Legal & General. She has previously held senior risk and technical management roles at Benchmark Capital and Fusion Wealth
In a cost-conscious environment, demonstrating the value of a Third Party Risk Management (TPRM) program can feel like nailing smoke to the wall. But articulating the ROI of your program is fundamental to protecting the function through leadership and budget changes, and to securing strategic influence in decision-making.
This article – the first in a three-part series – explores why demonstrating value is a strategic imperative, how to frame it in terms that resonate with stakeholders, and practical ways to make its impact measurable and visible.
Shine a light on what’s not happening
When success is marked by the absence of incidents, an effective TPRM program can be its own worst enemy. At its most effective it can look like nothing is happening.
While the benefits – like operational resilience, regulatory compliance, and reduced exposure to cyber threats and geopolitical events – are undeniable, they’re often hard to quantify, and even the most critical functions can be overlooked if their impact isn’t visible.
This makes TPRM vulnerable to being misunderstood, under-resourced, or deprioritised.
To counter this, TPRM leaders must be creative.
Track the frequency of vendor-related incidents, breaches, and compliance issues over time, and capture emerging trends as the program matures, to show the impact your program is having on the frequency and severity of risk events.
Showcase the effectiveness of your interventions – like remediation plans, improved risk assessment at onboarding, or continuous monitoring – by developing risk scoring models to establish a baseline and show improvement over time.
This visual storytelling makes the impact of your program tangible, helps you link improvements to actions to support decision-making, and starts to turn risk reduction into cost avoidance.
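A minimal sketch of what that baselining might look like in practice – the periods and scores below are invented placeholders for your own metrics:

```python
# Hypothetical quarterly average vendor risk scores (0 = no risk, 100 = max).
quarterly_scores = {
    "2024-Q3": 62.0,  # baseline quarter
    "2024-Q4": 58.5,
    "2025-Q1": 51.0,
    "2025-Q2": 47.5,
}

baseline = quarterly_scores["2024-Q3"]
for period, score in quarterly_scores.items():
    print(f"{period}: score={score:5.1f} ({score - baseline:+.1f} vs baseline)")
```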
Put a price tag on risk avoidance
Once you have established your metrics, start turning that data into a tangible illustration of cost avoidance gains.
Use industry benchmarks like the average cost of a data breach to estimate savings from avoided incidents based on the incident volume metrics you already have, or the potential cost of regulatory fines or penalties you might incur without proper controls in place.
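As a hedged back-of-envelope example, here is how those two inputs – your own incident counts and a published benchmark cost – combine into a headline figure. Every number below is a placeholder:

```python
# Hypothetical inputs: replace with your incident metrics and a benchmark
# such as a published average cost of a data breach.
incidents_before = 12        # vendor-related incidents per year, pre-program
incidents_after = 5          # incidents per year with TPRM controls in place
avg_incident_cost = 250_000  # assumed benchmark cost per incident (USD)

avoided = incidents_before - incidents_after
print(f"estimated annual cost avoidance: ${avoided * avg_incident_cost:,}")
```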
Value not shown is value not seen. Don’t let your work be the best kept secret in the business.
Go further by analyzing ‘near miss’ events. These powerful case studies show what could have gone wrong without TPRM. By estimating the cost of these avoided incidents, you can demonstrate real-world savings and reinforce the value of your program.
It’s easy to focus on escalations and issues in your reporting to risk committees and senior managers but prioritise sharing the wins too. Share success stories and use scenario modelling to show how TPRM prevented costly outcomes. Real examples resonate more than abstract metrics.
Understanding what value looks like to your stakeholders
Risk professionals are often ahead of the curve, scanning the horizon for threats others don’t yet see. We build frameworks, assess supplier risks, and mitigate exposures that could derail operations or damage reputations. The value of TPRM is not always obvious to those holding the purse strings, but even the best strikers can’t win if the defence is leaking goals, so risk professionals must become advocates for their own impact.
To bridge this gap, invest in and develop dashboards and management information (MI) that clearly communicate your program’s
impact. Helping stakeholders understand the value you deliver will pay dividends in securing resource for your program and building the case for additional headcount or implementing new tooling and systems.
The next article in this series will explore how to overcome legacy challenges that hinder effective TPRM, from outdated systems to cultural resistance and consider what it takes to bring people on the journey and create a culture where risk management is everyone’s business.
After a powerhouse gathering in NYC, the industry’s most talked-about third-party risk event heads to Dallas. Join leading risk, TPM, and compliance professionals as we tackle vendor risk at scale – Texas style.
FROM SMALL BEGINNINGS TO BIG DREAMS:
My Gloriously Chaotic, Occasionally Glamorous Journey
Hi there! I’m Seema Sabu, senior manager at PwC, a Big 4 firm, in the U.S., living with my wonderful family, paying my bills, managing my toddler’s meltdowns, and steadily climbing the ladder to partner. To an outsider, my life might look like one of those #Blessed Instagram reels – shiny, happy, filter-perfect.
But let me stop you right there. I don’t believe in luck. Not because I’m a realist. But because I have no time for that kind of mythological characterization. Luck is the thing people say when they don’t want to acknowledge how much sweat someone put into building a life. “You’re so lucky!” they say. Really? Where was this ‘luck’ when I was attending interviews in a language I barely spoke? Or when I was chasing toddlers while writing code?
Let me take you back to where it all began. No red carpets. No silver spoons. Just a determined girl from a small city in India, where the idea of a ‘working woman’ was considered as shocking as pineapple on pizza.
From Girls’ School to Engineering School: The Plot Twist
I got into a top-tier government engineering college in India but in another city (yay!) and promptly stayed home (oops!). My dad didn’t want his daughter to move out, so I enrolled in a local engineering college instead. Of 104 students in my branch, only five were girls. I came from a Hindi-medium girls’ school, so walking into that classroom felt like landing on Mars without Google Translate.
But you know what?
Knowledge doesn’t speak only English. It speaks commitment. It speaks late-night cramming. And it especially speaks loudly when you’re trying to explain electronics engineering to someone who’s asleep in the front row.
The Chartered Accountant Institute (CAI) and the Accidental Startup Chapter
During my third year, my cousin launched a startup coaching CA students and asked me to teach math and IT. Eight students turned into 100 the next year. I taught, advertised, counseled, and basically became the face of the franchise while still finishing my degree. Move over, Tony Stark.
I graduated with honors, feeling unstoppable. But then came the 2008 recession, and let me tell you, it was like applying for jobs with an invisibility cloak. I tried every placement agency in Bangalore (shout out to their patience), sent a million resumes, and even offered to work just for coffee. Nothing.
So, I returned to my hometown and rejoined the institute. It was booming, I was earning well, and people were calling me ‘Ma’am’. But… I wasn’t happy. I had a persistent itch like when you know your jeans don’t fit but you wear them anyway. I knew I wasn’t living my dream.
Delhi Diaries and Love Across Zip Codes
Two years later, I got a job with one of India’s largest telecom companies in Delhi. It felt like I’d finally cracked the code. I loved it. Three years in, I got married to a software engineer (predictable but solid choice). He was in Pune, I was in Delhi, and we thought, “Let’s find jobs in the same city and start our life together!” Spoiler: He got an offer from the U.S. Life had other plans.
So I moved to San Francisco with him. Sounds glamorous?
Yeah, no. Visa restrictions meant I couldn’t work. From working 12-hour days to suddenly folding laundry in a 1-bedroom apartment while my husband learned to drive on the right side of the road, I felt like someone had muted me in my own movie.
GRE, Baby Bottles, and
Big Dilemmas
Eventually, I had an idea to go back to school. I started preparing for the GRE after a six-year study break, and let me tell you, solving math problems while suffering jet lag is a great way to test your patience and caffeine tolerance.
In the middle of this, I got pregnant. And then came a beautiful baby boy, lots of diapers, and one massive decision. I got accepted into a fantastic master’s program at the University of California, Berkeley, but with a newborn and no family nearby, how was I going to manage?
My husband, bless him, suggested sending our baby to India to stay with grandparents while I pursued my degree. I cried. Not just because I missed my baby already, but also because I realized I wasn’t willing to trade one dream for another. So, I turned down the master’s program. Not with regret. With resolve.
but…” emails than I can count, I finally got hired. Junior position. Modest title. But I walked in like I owned the building.
still showing up on time, in style, and with snacks.
What I Learned (Besides How to Fix Diaper Blowouts)
• Luck isn’t real. Persistence is.
• People will doubt you, especially when you’re quiet about your hustle.
• It’s okay to cry in the bathroom – just don’t live there.
• Your dreams don’t expire just because your life took a detour.
• And most importantly, never let anyone tell you it’s too late.
Dear Dreamers, Wherever You Are
To anyone reading this in a small town, a new country, or a career slump: your journey isn’t over. You’re not “behind.”
You’re in progress. Maybe your staircase looks like a jungle gym right now, but hey, the view is still worth the climb.
Get out of your comfort zone. Take your shot. And if all else fails, laugh, learn, and try again tomorrow. You’re not lucky or unlucky. You’re capable. Keep going.
DROWNING IN DATA OR SURFING THE WAVE
The New Generation’s Battle to Stay Afloat
Chandrakant is First Vice President, Lead Model Validator at Flagstar Bank, New York. He has more than 15 years’ experience in Financial Risk Management (Market and Credit risk) and has previously worked with business consulting firm Genpact.
In the early 19th century, Augustin-Louis Cauchy was one of the last mathematicians to contribute meaningfully across all major domains of mathematics known at the time. Not because smarter people stopped being born, but because after Cauchy the field became too vast, too deep, and too rapidly evolving for any single person to master it all.
Today, we’re facing a similar moment, but this time across all domains of knowledge and human endeavor.
In every era of human advancement, the older generation has looked upon the next with a mix of admiration and concern: “How will they ever manage all this complexity?” But today, that concern feels sharper.
With the exponential growth of knowledge, rapid advances in artificial intelligence, and an ever-increasing demand for multi-disciplinary fluency,
the question isn’t just whether the next generation will keep up, it’s whether they’ll be able to get on the train at all.
The Escalating Demand Curve
In the past, being well-versed in mathematics was enough to enter many technical professions. Then came the need for statistics. Then programming. Now, a functional understanding of AI, cloud systems, ethics, and regulatory awareness is becoming part of the expected baseline. The learning curve isn’t just steeper, it’s evolving faster.
This isn’t just about content volume. It’s about the changing nature of learning. From memorization to simulation. From specialization to system thinking. From solitary mastery to human-AI collaboration. No one person will be able to master everything. The goalpost has shifted from mastery to navigation.
Will They Be Left Behind?
If someone stops learning today, they risk being left behind in 3-5 years. Not because the content is too difficult, but because the very vocabulary of relevance will change. Terms like ‘vector databases’, ‘chain-of-thought prompting’, or ‘multi-agent LLM orchestration’ will be common workplace language. Without a minimum level of fluency, re-entry will feel like jumping into a river mid-current.
But what about the young and curious? Those who are entering this world with an open mind and willingness to learn? Will they be able to cope?
Yes, But the Game Has Changed
The future won’t require knowing everything. It will require knowing how to learn anything, how to think systemically, and how to work with tools that extend cognitive capacity.
AI isn’t a threat to their learning; it’s their amplifier.
The next generation will:
• Use AI tutors to grasp complex ideas in real-time.
• Simulate systems and visualize ideas instead of memorizing them.
• Collaborate with human and machine agents alike.
• Rely less on recall and more on judgment, ethics, and creativity.
What Needs to Change
To empower them:
• Education must shift from lectures to labs, from theory to practice.
• Mentorship must move from giving answers to building frameworks.
• Systems must support exploration, failure, and iteration – not just compliance.
We must teach them how to think, not what to think. How to adapt, not just how to excel in a fixed system.
The Role of Our Generation
Our role isn’t to pass down a finished playbook. It’s to:
• Model continuous learning.
• Embrace humility in the face of change.
• Create environments where asking better questions is valued more than having the perfect answer.
We won’t be able to shield them from the avalanche of knowledge coming at them. But we can help them build the tools – mental, emotional, and technical – to navigate it.
So, what does all of that mean for those of us preparing to hand over the risk management reins?
In truth, there are so many possible outcomes, and so many potential twists and turns in an increasingly rapid technology and data evolution that the future of data and how we deal with it in a risk context isn’t clear or even predictable.
But in my view, the real question isn’t whether the next generation will be overwhelmed. It’s whether we’ll equip them to ride the wave or leave them to find their own way through it.
Why Attend?
• Learn from Europe’s leading treasury minds at ING, Société Générale, ABN AMRO, Commerzbank, Raiffeisen Bank, and more.
• Tackle the real challenges — from volatile interest rates and liquidity stress to AI-driven forecasting, FTP frameworks, and climate-aligned treasury strategies.
• Future-proof your function with insights into regulatory shifts, digital assets, and how to position treasury as a true strategic partner to the business.