CCG Catalyst Commentary

We Need To Talk About Third-Party AI Risk

By: Kate Drew

March 11, 2025

Banks and credit unions are exploring the impact of AI at different paces. But as we go through this process as an industry, it is important to remember that involvement with AI is not necessarily limited to a financial institution's (FI's) own initiatives. FIs do not exist in a vacuum; they are part of an ecosystem of solutions and technology providers. So, even if an FI doesn't currently have an AI strategy, that doesn't mean it has no exposure. Third-party AI risk refers to the risk a bank or credit union faces through its vendors' use of AI.

According to bankers we've talked with, many vendors have implemented AI capabilities without always notifying their clients or allowing them to turn those capabilities on and off. This exposes FIs to risks, such as algorithmic bias and data security issues, that they may not be aware of. The problem is especially pertinent for FIs that control their own infrastructure, but there are implications for mainstream institutions as well. As the Financial Stability Oversight Council's 2024 Annual Report states:

“Financial institutions’ relationships with service providers may introduce new risks or amplify existing ones, in part because reliance on a third party may reduce an institution’s access to, and its direct control and oversight of its data or systems.”

The report goes on to say:

“Smaller firms, such as community banks and credit unions, may have lesser negotiating power to obtain certain contractual rights, fewer resources and ability to conduct due diligence on and monitor a service provider’s practices (e.g., information security, internal controls, assurance testing), and lesser ability to terminate and substitute services in case of operational challenges. And yet, due to their relatively small size, these institutions are increasingly relying on third parties for essential lending, compliance, technology, and operational-related matters.”

As time goes on, this issue will only grow more apparent for FIs of all sizes. While many FIs may depend on third-party reviews, some will conduct their own audits of their vendor ecosystems and put processes in place for testing and approval. Regardless of approach, here are a few considerations, based on our research:

  1. Understand your ecosystem. It is important to understand vendors' use of AI and how that might impact the institution. This includes understanding how models are built, which large language models (LLMs) are used (if any), and whether critical systems are affected. If a particular vendor's use of AI is determined to be material, that should kick off another set of workflows that flow into your AI governance policy (see the sketch after this list).

  2. Ensure uniform standards. Any vendor that is using AI needs to be reviewed and approved against a set of standards to make sure its use of the technology complies with bank policies and data security standards. The more complex the organization, the more important it is to have a dedicated process to drive efficiency. One bank we interviewed, for example, said it had 50-60 existing vendors implementing generative AI that it was in the process of approving.

  3. Include AI in ongoing monitoring. AI is a rapidly evolving field, especially generative AI powered by LLMs. As such, it is key that banks and credit unions include AI-specific attributes in their ongoing monitoring of vendors and continue to underwrite these activities to ensure that any changes remain aligned with the organization's requirements.
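
To make these considerations concrete, below is a minimal sketch of what a vendor AI risk register might look like. All of the names here (VendorAIProfile, needs_governance_review, and so on) are hypothetical illustrations for this article, not a reference to any particular vendor-management product:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Materiality(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class VendorAIProfile:
    """One entry in a hypothetical vendor AI risk register (consideration 1)."""
    vendor_name: str
    uses_ai: bool
    llms_used: list[str] = field(default_factory=list)  # e.g., which LLMs, if any
    touches_critical_systems: bool = False
    materiality: Materiality = Materiality.LOW
    approved: bool = False
    last_reviewed: date | None = None


def needs_governance_review(profile: VendorAIProfile) -> bool:
    """Route material AI use into the AI governance workflow (consideration 2)."""
    if not profile.uses_ai:
        return False
    return profile.touches_critical_systems or profile.materiality is not Materiality.LOW


def overdue_for_review(profile: VendorAIProfile, today: date, max_age_days: int = 365) -> bool:
    """Ongoing-monitoring check (consideration 3): re-underwrite at least annually."""
    if profile.last_reviewed is None:
        return True
    return (today - profile.last_reviewed).days > max_age_days


# Example: screen a small register (vendor names are made up).
register = [
    VendorAIProfile("CoreProviderCo", uses_ai=True, llms_used=["gpt-4o"],
                    touches_critical_systems=True, materiality=Materiality.HIGH),
    VendorAIProfile("StatementPrintInc", uses_ai=False),
]
for p in register:
    if needs_governance_review(p):
        print(f"{p.vendor_name}: route to AI governance review")
    if p.uses_ai and overdue_for_review(p, today=date.today()):
        print(f"{p.vendor_name}: AI monitoring review overdue")
```

In practice, this logic would live in a vendor-management system rather than a standalone script, but the fields and checks mirror the three considerations above: inventory each vendor's AI use, route material cases into governance review, and re-underwrite on a regular cadence.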

While you may be able to argue that not every bank or credit union needs an AI strategy (yet), executives should not ignore the implications of the technology permeating their ecosystems. For some institutions, the risk will be lower because they have fewer vendors and therefore fewer exposure points, but it is unlikely to be zero. A good first step is to create an AI governance policy that, at a minimum, defines how employees use tools like ChatGPT and Microsoft's Copilot, if available. Meanwhile, for more progressive institutions, taking stock of your environment, sooner rather than later, is a worthwhile exercise.
