Artificial Intelligence Chatbots May Pose Risks for Financial Institutions, CFPB Says


3-minute read | June 8, 2023

The Consumer Financial Protection Bureau (CFPB) warned financial institutions this week about the risks of using AI-powered chatbots to communicate with consumers. The Bureau advised financial institutions to “consider the limitations of the technology” given that chatbots can:

  • Provide incorrect responses.
  • Pose information privacy and security concerns.
  • Waste consumers’ time in “doom loops” as they try to get answers or assistance. 

The CFPB’s “issues spotlight” cited three key risks from chatbots, including those fueled by large language models and other complex technology:

  1. Chatbots raise the possibility of noncompliance with federal consumer financial laws: “Financial institutions run the risk that when chatbots ingest customer communications and provide responses, the information chatbots provide may not be accurate, the technology may fail to recognize that a consumer is invoking their federal rights, or it may fail to protect their privacy and data.”
  2. Chatbots may diminish customer service and trust: “When consumers need help from their financial institution, the circumstances could be dire and urgent. If they get stuck in loops of repetitive, unhelpful jargon, unable to trigger the right rules to get the response they need, and they don’t have access to a human customer service representative, their confidence and trust in their financial institution will diminish.”
  3. Chatbots could cause consumer harm: “The stakes for being wrong when a person’s financial stability is at risk are high. . . . Providing inaccurate information regarding a consumer financial product or service, for example, could be catastrophic. It could lead to the assessment of inappropriate fees, which in turn could lead to worse outcomes such as default, resulting in the customer selecting an inferior option or consumer financial product, or other harms.”

The post also flagged the negative impact chatbots can have on consumers with limited English proficiency. It noted that “technology trained on a limited number of dialects makes it difficult for consumers with diverse dialect needs to use chatbots to receive help from their financial institution.”

The CFPB based its post on publicly available studies, news articles, press releases and the Bureau’s own complaint database – which it leveraged to cite several complaints as examples of chatbots causing consumer confusion, frustration and harm. The CFPB named a number of large financial institutions that use chatbots, virtual assistants and similar generative technology.

The post encouraged people and companies to take these actions:

  • Financial institutions should use chatbots “in a manner consistent with the customer and legal obligations.”
  • Consumers should submit complaints to the CFPB if they are unable to get “answers to their questions due to a lack of human interaction.”
  • Employees of financial service providers should remember that they can blow the whistle to the CFPB on suspected violations of consumer financial laws.

The CFPB post is the latest salvo in its ongoing effort to oversee and limit the “shift away from relationship banking and toward algorithmic banking.” Notably, the CFPB did not try to balance its concerns about chatbots by acknowledging that human customer service representatives may make similar mistakes.

Financial institutions supervised by the CFPB should prepare for examiners to ask about how they use chatbots and other AI-based communication tools, as well as internal controls to prevent deceptive communications and other legal violations.

Please reach out to one of the authors if you have questions about the CFPB’s issue spotlight on chatbots. Our team advises clients on the compliant use of AI in financial services, and we would be happy to discuss how this recent post may affect you and your clients.