4 minute read | July.07.2023
Artificial intelligence (AI) offers financial institutions the opportunity to enhance operational efficiency, improve customer experiences and strengthen financial and other risk management. However, these models must be subjected to appropriate diligence and validation to mitigate the risk that they will produce inaccurate, biased or inappropriate outputs.
Whether deploying chatbots to handle customer service inquiries, leveraging machine learning models to detect fraud or using complex AI-based credit scoring models to better assess creditworthiness, financial institutions should follow three key steps to manage their model risk: understand what qualifies as a model and where its risks arise, implement controls throughout the model's lifecycle, and scrutinize models supplied by vendors.
Every artificial intelligence tool is rooted in a model, which the Office of the Comptroller of the Currency (OCC) defines as “a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.” Under this definition, AI tools such as customer-service chatbots, fraud-detection algorithms and AI-based credit scoring systems are all models.
Model risk can arise when fundamental errors in a model's input data produce inaccurate outputs, or when an institution uses an otherwise sound model incorrectly or inappropriately.
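To make the first risk concrete, an institution might screen input data for errors before it ever reaches the model. The sketch below is purely illustrative: the field names, ranges and credit-scoring context are assumptions, not drawn from any regulatory guidance.

```python
# Hypothetical pre-scoring data-quality checks for an AI credit-scoring model.
# Field names and valid ranges are illustrative assumptions only.

REQUIRED_FIELDS = {"income", "debt", "credit_history_years"}
VALID_RANGES = {
    "income": (0, 10_000_000),
    "debt": (0, 10_000_000),
    "credit_history_years": (0, 80),
}

def validate_input(record: dict) -> list[str]:
    """Return a list of data-quality issues; an empty list means the record passes."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    for field, (lo, hi) in VALID_RANGES.items():
        value = record.get(field)
        if value is not None and not (lo <= value <= hi):
            issues.append(f"{field}={value} outside [{lo}, {hi}]")
    return issues

# A record with a negative income is flagged before it can distort the model's output.
print(validate_input({"income": -5, "debt": 1000, "credit_history_years": 7}))
```

Flagging bad records at the gate, rather than after scoring, keeps a sound model from being fed the "fundamental errors" that produce inaccurate outputs.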
In assessing AI model risks, a financial institution’s guiding principle should be effective challenge: critical analysis of the model by objective, informed parties who can identify and evaluate its assumptions and limitations.
Federal regulators at the OCC, the Federal Reserve Board (FRB) and the Federal Deposit Insurance Corporation (FDIC) have issued model risk management guidance that directs institutions to implement controls at three key stages of model adoption and use: development and implementation, validation, and ongoing governance and monitoring.
Additionally, institutions should test a model’s performance before implementation to confirm that it behaves consistently with its underlying theory and meets expectations. This may require a qualified employee to conduct a subjective review, particularly where generative AI produces content that will be shown to customers or consumers.
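One common way to test performance before implementation is to compare a candidate model's predictions against held-out, known outcomes and require that accuracy clear a pre-set bar. This is a minimal sketch under assumed inputs; the sample data and the 90% threshold are illustrative, and a real validation exercise would use far richer metrics.

```python
# Minimal sketch of pre-implementation performance testing: check whether a
# candidate model's predictions on held-out data meet an accuracy threshold.
# The predictions, outcomes and 0.9 threshold below are illustrative assumptions.

def passes_backtest(predictions: list[int], outcomes: list[int],
                    threshold: float = 0.9) -> bool:
    """True if the share of correct predictions meets or exceeds the threshold."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes) >= threshold

held_out_outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
model_predictions = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # one miss out of ten

print(passes_backtest(model_predictions, held_out_outcomes))  # → True (9/10 = 0.9)
```

A quantitative check like this complements, but does not replace, the subjective review described above, especially for generative outputs that cannot be scored as simply right or wrong.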
When using a model created or licensed by a vendor, institutions should assess whether the tool is appropriate for the institution’s products, exposures and risk level by asking the vendor to provide information about the product’s components, design and intended use. If a vendor claims this information is proprietary, institutions should consider whether they have sufficient information to assess the risks posed by using the model.
By following this three-step paradigm for model risk management, institutions can more effectively perform diligence on AI models to ensure their safe and reliable use in offering financial products and services.