
5-minute read
Are Banks Ready for AI Accountability?
As regulators begin holding banks accountable for their AI systems, the question arises: how deeply should banks understand and manage these technologies? What are the challenges in ensuring AI is implemented ethically and securely? Underlying it all, bankers face the fear of mismanagement and reputational risk.
Artificial intelligence. AI. We hear these terms daily on the news. If you run a quick search on AI, you will find articles ranging from “Massive AI Growth is Outpacing U.S. Power Capacity” to “How AI is Boosting Drug Discovery” to “How AI is Wreaking Havoc on Music Fans”. So how is AI being used in banking? I may not know the totality of that answer, but what I do know is that regulators are going to hold banks responsible for it.
When I first read the FDIC’s March 2024 Consumer Compliance Supervisory Highlights, I stopped on pages 15 and 16, which state: “On June 9, 2023, the FDIC, FRB, and OCC (collectively, the agencies) issued final guidance on managing risks associated with third-party relationships (FIL-29-2023). The guidance provides sound principles that support a risk-based approach to third-party risk management that banking organizations may consider when developing and implementing risk management practices for all stages in the life cycle of third-party relationships. This guidance replaces the agencies’ existing guidance on this topic, providing a consistent approach to managing risks associated with all third-party relationships. Banks can use this guidance as a resource in overseeing its third-party relationships.”
What does that mean? What is the life cycle? Does this include AI?
Prudential regulators have consistently avoided addressing AI specifically, because the “broad-based scope of the guidance captures the full range of third-party relationships.” That made me wonder what is happening with banks and AI during examination. I started asking bankers what they had experienced on the issue, and their response was clear: examiners are asking about it during the review cycle and expecting us to know how it works and whether it is discriminatory. My concern increased.
When I began asking prudential regulators about the expectations regarding banks and AI, I realized that many of the aforementioned questions did not matter; banks are responsible for all their vendors and should know everything about AI.
I asked one regulator directly: “So banks bear the entire burden of third parties and their use of AI whether they know it or not.” The response was: “They should know it and they are responsible.”
Wait. Isn’t there a four-letter agency that’s supposed to protect consumers that should be overseeing this? Couldn’t they publish a list of problem vendors? Why are banks always bearing the burden of compliance for everyone else?
The answer to those questions is that the bureau that’s supposed to protect consumers is focused on other things. The agencies are not publishing a list of problem vendors, “but a Google search will help,” and yes, banks always bear the burden. Well, isn’t that just great?! One more thing to worry about.
The reality is that prudential regulators are going to examine all banks on AI. It could be for safety and soundness, fair lending, UDAAP (unfair, deceptive, or abusive acts or practices, which I have yet to find anyone who can clearly define), or any other regulation examiners see fit. So what should bankers do to stay out of trouble?
The answer is that banks have to change their vendor management systems. Banks must now ask third-party vendors whether they are using AI, not just at the beginning of the contract but throughout the life cycle of the relationship. If the vendor is using AI, the bank should ask how the vendor is using it and what data fields the system relies on. There should be sample testing to analyze the vendor’s impact on different demographic groups (a sketch of one such test follows), along with questions on how the vendor is ensuring that discrimination is not taking place. All of this comes in addition to traditional third-party vendor questions such as data security and safety.
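To make “sample testing” concrete, here is a minimal sketch of one common fairness check, the four-fifths (adverse impact ratio) test, run against a hypothetical sample of a vendor model’s approval decisions. The group labels, field names, and data below are illustrative assumptions, not a regulatory standard or any particular vendor’s output.

```python
# Minimal sketch: four-fifths (adverse impact ratio) check on a sample of
# vendor-model decisions. Groups, field names, and data are hypothetical.

def adverse_impact_ratios(decisions, reference_group):
    """decisions: list of (group, approved) pairs from a sample test.
    Returns each group's approval rate divided by the reference group's rate."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)

    rates = {g: approvals[g] / totals[g] for g in totals}
    ref_rate = rates[reference_group]
    return {g: rates[g] / ref_rate for g in rates}

# Hypothetical sample pulled from a vendor's AI-driven decisions:
# group A approved 80 of 100 applicants; group B approved 55 of 100.
sample = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 55 + [("B", False)] * 45

for group, ratio in adverse_impact_ratios(sample, reference_group="A").items():
    # A ratio below 0.80 is a common (not definitive) red flag for
    # disparate impact and would warrant escalation to the vendor.
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} -> {flag}")
```

A check like this is cheap to rerun, which matters because the guidance covers the whole life cycle of the relationship: the same sample test that cleared a vendor at onboarding should be repeated as the vendor updates its models.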
I heard one banker ask a regulator what his institution should do if it discovers that a third-party vendor’s use of AI is causing problems. The response was direct and simple: terminate the contract and report it. The regulator added that the length and cost of the contract did not matter. Get out of it. That is easier said than done, and potentially crippling to a bank if the vendor is a core provider. Nonetheless, regulators move on.
While I disagree with the prudential regulators that bankers should be held responsible for third-party vendors’ use of AI, in reality, it seems unlikely that the prudential regulators are going to change their minds. What that means for us is that we have to be even more diligent, even though I did not think that was possible.
Oftentimes I feel like the bearer of bad news when it comes to regulation, and this column is no exception. However, as others in the industry study AI and its effects on banking, and as regulators hopefully decide to directly oversee third parties, I will remain optimistic that our bankers will continue doing their work as the best bankers in the country, keeping bad actors who misuse AI out of our industry just as they have kept out other bad actors for decades.