Managing AI Risk: 3 Laws Companies Using Consumer Data for AI Development Need to Know


2 minute read | May 16, 2023

Artificial Intelligence (AI) is transforming financial services, from underwriting and trading securities to customizing financial products and services. These innovative modeling techniques may improve the accuracy of models used to identify prospective customers and assess their risks, expand access to credit for underserved populations, and reduce losses and other associated costs.

However, as highlighted by the Federal Trade Commission’s (FTC) article, The Luring Test: AI and the Engineering of Consumer Trust, AI may also carry significant consumer and commercial risks. As companies contemplate novel uses of AI, here’s what you need to know about the FTC’s focus on three “laws important to developers and users of AI” that may affect your business:

  1. Fair Credit Reporting Act (FCRA), including its implementing Regulation V. The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.
  2. Equal Credit Opportunity Act (ECOA), including its implementing Regulation B. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, or age, or because a person receives public assistance.
  3. Section 5 of the FTC Act. The FTC Act prohibits unfair or deceptive acts or practices, which would include, for example, the sale or use of racially biased algorithms.

The FTC, among other regulators, will likely continue to use its authority and expertise under these statutory regimes to scrutinize companies’ use of AI. Given the growing concerns about new AI tools, the FTC offers several cautionary considerations for businesses operating in this space:

  • Firms building or deploying AI tools should dedicate personnel to AI ethics and responsible engineering.
  • A company’s risk assessment and mitigation programs should account for foreseeable downstream uses and should include training for staff and contractors, among others.
  • Firms should monitor, document, and address the actual use and impact of any AI tools they deploy, including having an exit strategy if deployment does not go as planned.
  • If a company wants to convince the FTC that it adequately assessed risks and mitigated harms associated with its use of AI, reducing the personnel devoted to AI ethics is unlikely to help its case.

It’s clear that FTC staff and other regulators are focused on how companies choose to use AI technology, including new generative AI tools, in ways that can have a real and substantial impact on consumers. If you use or are contemplating using AI tools as part of your product or service offerings, now is the time to examine the three laws the FTC deems important to developers and users of AI.