AI Tips: 10 Steps to Future-Proof Your Artificial Intelligence Regulatory Strategy


July 1, 2021

Artificial Intelligence (AI) regulation is coming, and companies considering implementing AI or already utilizing AI should be prepared. Below are 10 steps that companies can take now to harness the benefits of AI in a fair and equitable manner that should limit regulatory issues.

  1. Understand recent regulatory activity
  2. Know what is and isn’t considered AI
  3. Understand different regulatory requirements for providers and users
  4. Recognize how your AI will be classified
  5. Design your AI model with data quality in mind
  6. Build a robust documentation system for all AI operations
  7. Embrace transparency
  8. Champion accountability and human oversight
  9. Monitor for risk and take corrective action when you find problems
  10. Establish a voluntary code of conduct for AI

1. Understand recent regulatory activity

This spring, the European Commission (Commission) published its highly anticipated communication and “Proposal for a Regulation laying down harmonized rules on artificial intelligence” (EU Regulation). You can find Orrick’s guidance regarding the EU Regulation here. The EU Regulation was released days after the FTC published a blog post entitled “Aiming for truth, fairness, and equity in your company’s use of AI” (FTC Memo). Additionally, in March 2021, five regulatory agencies sent regulated financial institutions 17 questions about their use of AI tools. The release of AI-related guidance on both sides of the Atlantic makes clear that the global race to regulate AI has begun in earnest.

Compliance (and noncompliance!) with these regulations may carry a steep cost for businesses in the coming years. The EU Regulation is intended to have extraterritorial effect; it establishes a European Artificial Intelligence Board and provides for “dissuasive” fines for certain breaches of up to 6% of annual global turnover, as well as the power to order AI systems to be withdrawn from the market. The inclusion of these GDPR-like penalties shows the EU is serious about regulating the burgeoning AI industry. The EU Regulation applies across all sectors (public and private) to “ensure a level playing field.” The proposal now goes to the European Parliament and the Council of the European Union for further consideration and debate. Once adopted, the EU Regulation will come into force 20 days after its publication in the Official Journal and will apply 24 months after that date, though some provisions may apply sooner.

Meanwhile, the FTC has made it clear that it will use its power under Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act to help ensure AI is used truthfully, fairly, and equitably in the United States.

2. Know what is and isn’t considered AI

The EU Regulation casts a wide net in defining AI as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” This essentially includes any software developed using machine learning. While the FTC has not specifically defined AI, it has issued reports on big data analytics and machine learning; conducted a hearing on algorithms, AI, and predictive analytics; and issued business guidance on AI and algorithms. Together, these materials give a good sense of the scope of the FTC’s activity relating to AI, which is generally grounded in its broad enforcement power to prevent fraud, deception, and unfair business practices.

3. Understand different regulatory requirements for providers and users

Subject to some specific exceptions, the EU Regulation applies to each of:

  1. Providers placing on the market or putting into service AI systems in the EU (regardless of where the provider is located);
  2. Users of AI systems located within the EU; and
  3. Providers and users of AI systems located outside the EU, where the output produced by the system is used in the EU.

In the United States, the FTC has not differentiated between providers and users but has shown it is ready to protect consumers by policing AI in an effort to ensure that algorithms remain “truthful, non-deceptive and backed up by evidence.”

4. Recognize how your AI will be classified

The EU Regulation employs a risk-based approach with four levels of risk. Businesses are encouraged to identify which level or levels of risk their AI falls into based on the below:

  • Prohibited: This is a limited set of AI uses deemed to violate fundamental EU rights such as AI systems that: (i) deploy subliminal techniques beyond a person’s consciousness; (ii) exploit vulnerabilities of specific groups of people to materially distort behavior in a manner that causes or is likely to cause physical or psychological harm; (iii) are used by public authorities to score and classify people; and (iv) use real-time remote biometric identification in public spaces for law enforcement purposes (subject to limited exceptions).
  • Highly Regulated “High Risk”: An AI system will be “high risk” if it creates a high risk to the health and safety or fundamental rights of natural persons. For example, in line with existing product safety legislation, AI used as a safety component of a product (or which is, itself, such a product) will likely qualify as “high risk” under the EU Regulation. Other “high-risk” AI systems are set out in Annex III of the EU Regulation, which the Commission can update in order to keep pace with the evolution of AI use cases (i.e., future-proofing). This category includes (i) “real-time” and “post” remote biometric identification; (ii) evaluating an individual’s creditworthiness (except where used by small-scale providers for their own use); and (iii) the use of AI systems in recruitment and promotion (including changes to roles and responsibilities) in an employment context.
  • Limited Risk: This category calls for enhanced transparency around certain other AI systems. Obligations include:
    • Informing users when they interact with an AI system, such as a chatbot, or when their emotions or characteristics are recognized through automated means; and
    • Informing individuals that content has been generated through automated means if an AI system creates “deep fakes” by manipulating image, audio, or video content that resembles authentic content.
  • Minimal Risk: This category covers all other AI systems and carries no mandatory requirements. However, per Step 10 below, voluntary codes of conduct can safeguard these systems and create a culture of compliance around your entire suite of AI offerings.

5. Design your AI model with data quality in mind

Data sets must be “relevant, representative, free of errors and complete” under the EU Regulation. It is critical to build and iterate AI models on comprehensive data sets that include all necessary populations in order to achieve accuracy and fairness. The FTC Memo similarly implores businesses to “from the start, think about ways to improve your data set, design your model to account for data gaps, and—in light of any shortcomings—limit where or how you use the model.” Businesses should take these data quality issues to heart when evaluating training data sets and identifying future sources of data.
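As a simplified illustration of what such an evaluation might look like in practice, the sketch below audits a tabular training set for completeness, duplicate records, and underrepresented groups. It assumes a pandas DataFrame with a hypothetical protected_group column; the specific checks and threshold are illustrative only and are not drawn from the EU Regulation or the FTC Memo.

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame, group_col: str = "protected_group",
                        min_share: float = 0.05) -> dict:
    """Run basic data-quality checks on a tabular training set.

    The checks loosely track the themes of the EU Regulation's data-quality
    language (relevant, representative, free of errors and complete); the
    threshold is an illustrative placeholder, not a legal standard.
    """
    report = {}

    # Completeness: share of missing values in each column.
    report["missing_share"] = df.isna().mean().to_dict()

    # Basic error check: exact duplicate rows.
    report["duplicate_rows"] = int(df.duplicated().sum())

    # Representativeness: groups whose share of the data falls below the floor.
    shares = df[group_col].value_counts(normalize=True)
    report["underrepresented_groups"] = shares[shares < min_share].to_dict()

    return report

# Example usage with a hypothetical dataset:
# training_df = pd.read_csv("training_data.csv")
# print(audit_training_data(training_df))
```

The output of this kind of audit can also feed directly into the documentation system described in Step 6.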

6. Build a robust documentation system for all AI operations

Consider putting in place technical documentation, policies, and automatic logging, and ensure they are followed and maintained by the employees who build, test, and interact with AI systems. Create instruction manuals for your AI system that accurately describe its intended operation. These manuals and documentation will be a valuable resource to demonstrate your good intentions and careful work should your AI system be accused of bias or otherwise attract regulatory scrutiny. Meet your computer and data scientists on their turf by creating reasonable, user-friendly prompts for recordkeeping to spur compliance.
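By way of example, automatic logging can be as simple as wrapping each model decision in a helper that records the inputs, output, model version, and timestamp. The sketch below uses Python's standard logging module and assumes a hypothetical scikit-learn-style model object; it is a starting point for discussion with your data science team, not a compliance-grade audit trail.

```python
import json
import logging
from datetime import datetime, timezone

# Write structured, append-only records of each model decision to a file.
logging.basicConfig(filename="ai_decision_log.jsonl", level=logging.INFO,
                    format="%(message)s")

def logged_prediction(model, features: dict, model_version: str):
    """Run a prediction and record the inputs, output, and context."""
    # Hypothetical scikit-learn-style interface; adapt to your own model API.
    prediction = model.predict([list(features.values())])[0]
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": features,
        "output": str(prediction),
    }
    logging.info(json.dumps(record))
    return prediction
```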

7. Embrace transparency

Ensure your AI operations are transparent so as to enable users to interpret the output and use it appropriately. On April 8, 2020, the FTC’s Bureau of Consumer Protection issued guidance on the use of artificial intelligence and algorithms, which advises that “the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability.” Consider taking the following steps to show that your company champions both transparency and AI:

  • When designing AI, expressly inform users when they are interacting with an AI system, unless this is obvious, and fully disclose your data-collection practices to your customers, e.g., don’t use customer photos to train your algorithms without their consent.
  • Don’t exaggerate what your algorithm can do. The FTC Memo encourages businesses to monitor their statements to customers and consumers to ensure they are “truthful, non-deceptive, and backed up by evidence,” or risk enforcement under the FTC Act.
    • The FTC’s 2019 complaint against Facebook alleged that the social media giant misled consumers by telling them they could opt in to the company’s facial recognition algorithm, when in fact Facebook was using their photos by default. The FTC’s recent action against app developer Everalbum reinforces that point: Everalbum used photos uploaded by app users to train its facial recognition algorithm, and the FTC alleged that the company deceived users about their ability to control the app’s facial recognition feature and misrepresented users’ ability to delete their photos and videos upon account deactivation. To deter future violations, the FTC’s settlement order from May 6, 2021, requires the company to delete not only the ill-gotten data, but also the facial recognition models or algorithms developed with users’ photos or videos. Additionally, Everalbum must obtain consumers’ express consent before using facial recognition technology on their photos and videos.

8. Champion accountability and human oversight

Margrethe Vestager, the European Commission’s Executive Vice President, stated that, when it comes to “artificial intelligence, trust is a must, not a nice to have.” This is nowhere truer than in relation to accountability for AI systems. Under the EU Regulation, each AI system must be able to be effectively overseen by human operators to “minimize risks to health, safety or fundamental rights when the AI system is used in accordance with the intended purpose or reasonably foreseeable misuse.” The FTC Memo states that if businesses fail to hold themselves accountable, “the FTC may do it for [them].” Companies should consider appointing a lead AI risk officer to streamline accountability throughout the organization and adopt responsive policies.

9. Monitor for risk and take corrective action when you find problems

Companies should consider establishing a risk management system, including frequent testing, that covers the entire life cycle of their AI systems, and implementing a quality management system with written policies, procedures, and instructions in order to prevent discriminatory outcomes, security breaches, and other foreseeable harms.
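As one simplified example of what such frequent testing could include, the sketch below compares favorable-outcome rates across groups in a batch of logged decisions and flags large gaps for human review. The 0.8 threshold loosely echoes the “four-fifths” rule of thumb from U.S. employment practice and is used here purely as an illustrative trigger, not a legal standard; the column names are hypothetical.

```python
import pandas as pd

def outcome_rate_by_group(results: pd.DataFrame, group_col: str,
                          outcome_col: str, threshold: float = 0.8) -> dict:
    """Compare favorable-outcome rates across groups and flag large gaps.

    Groups whose rate falls below `threshold` times the best-performing
    group's rate are flagged for human review and possible corrective action.
    """
    rates = results.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()
    flagged = ratios[ratios < threshold]
    return {"rates": rates.to_dict(), "flagged_groups": flagged.to_dict()}

# Example usage with hypothetical monitoring data:
# monitoring_df = pd.DataFrame({"group": ["A", "A", "B", "B"],
#                               "approved": [1, 1, 0, 1]})
# print(outcome_rate_by_group(monitoring_df, "group", "approved"))
```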

Always take the necessary corrective action if you have reason to believe your AI is affected by bias or unfairness or is otherwise at risk of regulatory scrutiny.

10. Establish a voluntary code of conduct for AI

Establishing a voluntary code of conduct, even for low-risk AI systems, provides a roadmap for your AI team, as well as business and legal personnel, to build and manage compliance as you roll out AI offerings. Promoting an Ethics-by-Design program or instituting AI ethics training also spurs collaboration between the AI professionals who build the models and those tasked with oversight and compliance. By communicating a tone from the top that promotes transparency and ethical behavior around AI, you can build stronger, more equitable algorithms and reduce regulatory risk.