AI Model Diligence: 3 Steps for Financial Institutions to Manage Model Risk


July 7, 2023

Artificial intelligence (AI) offers financial institutions the opportunity to enhance operational efficiency, improve customer experiences and strengthen financial and other risk management. However, the models underlying these tools must be subjected to appropriate diligence and validation to mitigate the risk that they will produce inaccurate, biased or otherwise inappropriate outputs.

Whether deploying chatbots to handle customer service inquiries, leveraging machine learning models to detect fraud or using complex AI-based credit scoring models to better assess creditworthiness, financial institutions should follow three key steps to manage their model risk:

  1. Diligence at implementation
  2. Ongoing validation
  3. Comprehensive model governance

What Are AI Models?

Every artificial intelligence tool is rooted in a model, which the Office of the Comptroller of the Currency (OCC) defines as “a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.” Models include:

  • An information input component, which provides data and baseline rules to the model.
  • A processing component, which synthesizes this information and generates estimates.
  • A reporting component, which relays these estimates in a form relevant to the business.

Model risks can arise when fundamental errors in the information input into the model result in inaccurate outputs or when an institution uses a sound model incorrectly or inappropriately.
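To make these components concrete, the following minimal Python sketch wires all three together for a toy credit-risk estimate. Every name, formula and threshold in it is invented for illustration; it is not a prescribed or real-world design:

    from dataclasses import dataclass

    @dataclass
    class ModelInput:
        """Information input component: data and baseline rules."""
        monthly_income: float
        monthly_debt: float

    def process(inp: ModelInput) -> float:
        """Processing component: synthesizes the inputs into a quantitative
        estimate (here, a toy probability of default)."""
        dti = inp.monthly_debt / inp.monthly_income
        return min(1.0, max(0.0, 1.5 * dti - 0.3))

    def report(estimate: float) -> str:
        """Reporting component: relays the estimate in a form relevant to
        the business line."""
        band = "elevated" if estimate > 0.5 else "acceptable"
        return f"Estimated default risk: {estimate:.0%} ({band})"

    print(report(process(ModelInput(monthly_income=6000, monthly_debt=2400))))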

Three Steps for Managing AI Model Risk

In assessing AI model risks, a financial institution’s guiding principle should be effective challenge: critical analysis of the model by objective, informed parties who can identify and evaluate its assumptions and limitations.

Federal regulators at the OCC, the Federal Reserve Board (FRB) and the Federal Deposit Insurance Corporation (FDIC) have issued model risk management guidance that directs institutions to implement controls at three key stages of model adoption and use:

  1. Diligence at Implementation: Institutions adopting AI models should know and document each model’s purpose and intended use, as well as the methodologies and processing concepts underlying the model. Assessing whether a model is appropriate for its intended use requires comparing the theory underlying the model with alternative theories and approaches. This often involves understanding the model’s training data and any potential biases or other known weaknesses associated with that data set.

    Additionally, institutions should test the model’s performance before implementation to confirm that it performs in line with its underlying theory and meets expectations (a minimal sketch of such a pre-implementation check appears after this list). This may require a qualified employee to conduct a subjective review, particularly where generative AI produces content that will be shown to customers or consumers.

    When using a model created or licensed by a vendor, institutions should assess whether the tool is appropriate for its products, exposures and risk level by asking the vendor to provide information about the product components, design and intended use. If a vendor claims this information is proprietary, institutions should consider whether they have sufficient information to assess the risks posed by using the model.

  2. Ongoing Validation: An AI model, like any model, should be validated to the extent possible, which may vary depending on the model’s complexity. Validation involves periodically reviewing the performance of each AI tool. Validation reviews may be needed more frequently for AI models than for other types of models if the models have machine learning capabilities that permit changes to the underlying algorithms.

    Institutions should test model performance and components to assess how the model performs compared to alternatives (e.g., the prior non-AI model), to the same model across time and to expectations about how the model should perform. Where possible, institutions should seek to identify and correct errors, assess reliability and detect deterioration of the model. Many models lend themselves to benchmarking against alternatives. For example, an institution could compare the accuracy, effectiveness and legal compliance of an AI-based credit-scoring tool to those same attributes for a model drawing on FICO scores to assess creditworthiness (see the benchmarking sketch after this list).

  3. Comprehensive Model Governance: Institutions should draft comprehensive policies defining risk management activities for AI model implementation and oversight, including policies requiring board and senior management oversight and approval. Policies should allocate risk management roles and responsibilities for model oversight, and companies should regularly conduct internal audits to confirm that all parties are carrying out their roles.

    Institutions should also maintain a model inventory that compiles information about past, present and future versions of each AI model, including the types and sources of informational inputs, the outputs and intended uses of each model and assessments of whether each model is functioning as expected (see the inventory sketch after this list).
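As referenced in step 1, a pre-implementation test can be as simple as confirming that a candidate model meets a documented performance expectation on holdout data and does not perform markedly worse for one segment than another. The sketch below assumes a toy rule-based candidate model and invented data and thresholds; it illustrates the shape of the check, not a regulatory standard:

    # Stand-in for the candidate model under review.
    def predict(row):
        return 1 if row["debt_to_income"] > 0.45 else 0

    # Invented holdout data: observed outcomes plus a segmentation field.
    holdout = [
        {"debt_to_income": 0.50, "segment": "A", "defaulted": 1},
        {"debt_to_income": 0.30, "segment": "A", "defaulted": 0},
        {"debt_to_income": 0.48, "segment": "A", "defaulted": 0},
        {"debt_to_income": 0.55, "segment": "B", "defaulted": 1},
        {"debt_to_income": 0.40, "segment": "B", "defaulted": 0},
        {"debt_to_income": 0.35, "segment": "B", "defaulted": 1},
    ]

    def accuracy(rows):
        return sum(predict(r) == r["defaulted"] for r in rows) / len(rows)

    overall = accuracy(holdout)
    by_segment = {s: accuracy([r for r in holdout if r["segment"] == s])
                  for s in {r["segment"] for r in holdout}}
    gap = max(by_segment.values()) - min(by_segment.values())

    # Compare against the expectations documented at model approval.
    assert overall >= 0.60, f"below documented expectation: {overall:.2f}"
    assert gap <= 0.20, f"segment performance gap too large: {gap:.2f}"
    print(f"overall={overall:.2f}, by_segment={by_segment}")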
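For step 2, the benchmarking described in the credit-scoring example could look like the following sketch: the AI model and a FICO-score-based baseline are scored on the same recent window, and the AI model is also compared against its own accuracy at the last review to detect deterioration. All figures and names are invented for illustration (and this covers only accuracy; effectiveness and legal compliance require separate review):

    # Observed outcomes and each model's predictions on the same window.
    outcomes   = [1, 0, 1, 1, 0, 0, 1, 0]
    ai_preds   = [1, 0, 1, 1, 0, 1, 1, 0]   # candidate AI model
    fico_preds = [1, 0, 0, 1, 0, 0, 1, 1]   # FICO-score-based baseline
    ai_accuracy_at_last_review = 0.85        # recorded at the prior validation

    def accuracy(preds):
        return sum(p == y for p, y in zip(preds, outcomes)) / len(outcomes)

    ai_now, baseline = accuracy(ai_preds), accuracy(fico_preds)
    print(f"AI model: {ai_now:.2f} | FICO baseline: {baseline:.2f}")

    # Flag underperformance against the benchmark or deterioration over time.
    if ai_now < baseline:
        print("AI model underperforms the baseline; investigate before relying on it.")
    if ai_accuracy_at_last_review - ai_now > 0.05:
        print("Accuracy has deteriorated since the last review; escalate per policy.")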
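And for step 3, a model-inventory entry might capture the fields described above. The record layout below is a hypothetical sketch; a production inventory would live in a governed system of record, not a script:

    from dataclasses import dataclass, field

    @dataclass
    class InventoryEntry:
        model_name: str
        version: str
        input_sources: list           # types and sources of informational inputs
        outputs: str                  # what the model produces
        intended_use: str             # the approved business use
        last_validated: str           # date of the most recent validation review
        functioning_as_expected: bool
        prior_versions: list = field(default_factory=list)

    inventory = [
        InventoryEntry(
            model_name="retail-credit-scorer",
            version="2.1",
            input_sources=["credit bureau feed (vendor)", "application data (internal)"],
            outputs="probability of default",
            intended_use="unsecured consumer lending decisions",
            last_validated="2023-06-15",
            functioning_as_expected=True,
            prior_versions=["1.0", "2.0"],
        ),
    ]

    # A simple governance check: surface each model's status for review.
    for entry in inventory:
        status = "OK" if entry.functioning_as_expected else "ESCALATE"
        print(f"{entry.model_name} v{entry.version} [{status}]: "
              f"last validated {entry.last_validated}")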

By following this three-step paradigm for model risk management, institutions can more effectively perform diligence on AI models to ensure their safe and reliable use in offering financial products and services.