July.01.2021
Artificial Intelligence (AI) regulation is coming, and companies considering implementing AI or already using it should be prepared. Below are 10 steps that companies can take now to harness the benefits of AI in a fair and equitable manner and to limit their regulatory exposure.
1. Get to know the emerging regulations
This spring, the European Commission (Commission) published its highly anticipated communication and “Proposal for a Regulation laying down harmonized rules on artificial intelligence” (EU Regulation). You can find Orrick’s guidance regarding the EU Regulation here. The EU Regulation was released days after the FTC published a blog post entitled “Aiming for truth, fairness, and equity in your company’s use of AI” (FTC Memo). Additionally, in March 2021, five federal financial regulatory agencies sent 17 questions to regulated financial institutions about their use of AI tools. The release of AI-related guidance on both sides of the Atlantic makes clear that the global race to regulate AI has begun in earnest.
Both compliance and noncompliance with these regulations may carry steep costs for businesses in the coming years. The EU Regulation is intended to have extraterritorial effect. It establishes a European Artificial Intelligence Board and an enforcement regime with “dissuasive” fines of up to 6% of annual global turnover for certain breaches, as well as the power to order AI systems withdrawn from the market. The inclusion of these GDPR-like penalties shows the EU is serious about regulating the burgeoning AI industry. The EU Regulation applies across all sectors (public and private) to “ensure a level playing field.” The proposal now goes to the European Parliament and the Council of the European Union for further consideration and debate. Once adopted, the EU Regulation will enter into force 20 days after its publication in the Official Journal and will apply 24 months after that date, though some provisions may apply sooner.
Meanwhile, the FTC has made it clear that it will use its power under Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act to help ensure AI is used truthfully, fairly, and equitably in the United States.
2. Know what is and isn’t considered AI
The EU Regulation casts a wide net, defining AI as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” Annex I covers not only machine learning but also logic- and knowledge-based approaches and statistical approaches, so the definition sweeps in far more software than the label “AI” might suggest. While the FTC has not specifically defined AI, it has issued reports on big data analytics and machine learning; conducted a hearing on algorithms, AI, and predictive analytics; and issued business guidance on AI and algorithms. Together, these materials give a good sense of the FTC’s intended scope, which is grounded in its broad enforcement power to prevent fraud, deception, and unfair business practices.
3. Understand different regulatory requirements for providers and users
Subject to some specific exceptions, the EU Regulation applies to each of:
- providers placing AI systems on the market or putting them into service in the EU, irrespective of whether they are established in the EU or a third country;
- users of AI systems located in the EU; and
- providers and users of AI systems located in a third country, where the output produced by the system is used in the EU.
In the United States, the FTC has not differentiated between providers and users, but it has made clear that it stands ready to protect consumers by policing AI to ensure that algorithms remain “truthful, non-deceptive and backed up by evidence.”
4. Recognize how your AI will be classified
The EU Regulation employs a risk-based approach with four levels of risk. Businesses are encouraged to identify which level or levels their AI falls into (a simple triage sketch follows the list):
- Unacceptable risk: AI practices that are prohibited outright, such as social scoring by public authorities and systems that manipulate behavior in ways likely to cause harm.
- High risk: AI used in sensitive areas such as critical infrastructure, education, employment, credit, law enforcement, and border control, which is subject to strict obligations before it can be placed on the market.
- Limited risk: AI subject to specific transparency obligations, such as chatbots, which must disclose that users are interacting with a machine.
- Minimal risk: all other AI, such as spam filters and AI-enabled video games, which may be used freely.
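As a practical aid, some compliance teams keep an internal inventory that tags each AI use case with a provisional tier before counsel makes the formal determination. Below is a minimal sketch in Python; the category labels are invented for illustration and are not drawn from the EU Regulation itself:

    def provisional_risk_tier(use_case: str) -> str:
        """Map an internal use-case label to a provisional EU risk tier.
        The category sets below are illustrative, not exhaustive."""
        prohibited = {"social_scoring", "subliminal_manipulation"}
        high_risk = {"credit_scoring", "hiring", "critical_infrastructure"}
        transparency = {"chatbot", "deepfake_generation"}
        if use_case in prohibited:
            return "unacceptable: do not deploy"
        if use_case in high_risk:
            return "high: strict pre-market obligations apply"
        if use_case in transparency:
            return "limited: disclosure obligations apply"
        return "minimal: free use, but monitor for reclassification"

An inventory like this is only a triage tool; the final classification of any system should rest with legal counsel.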
5. Design your AI model with data quality in mind
Data sets must be “relevant, representative, free of errors and complete” under the EU Regulation. It is critical to build and iterate AI models on complete, representative datasets that include all necessary populations in order to achieve accuracy and fairness. The FTC Memo similarly implores businesses to “from the start, think about ways to improve your data set, design your model to account for data gaps, and—in light of any shortcomings—limit where or how you use the model.” Businesses should take these data quality issues to heart when evaluating training data sets and identifying future sources of data; a basic automated audit along the lines sketched below can serve as a first line of defense.
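By way of illustration, here is a minimal Python sketch of the kind of automated audit a data science team might run before training, assuming the training data sits in a pandas DataFrame; the protected attribute and the 5% representation floor are illustrative choices, not legal thresholds:

    import pandas as pd

    def audit_training_data(df: pd.DataFrame, protected_attr: str,
                            min_share: float = 0.05) -> list:
        """Flag completeness, error, and representativeness gaps."""
        findings = []
        # Completeness: columns with missing values.
        missing = df.isna().mean()
        for col, share in missing[missing > 0].items():
            findings.append(f"{col}: {share:.1%} of values missing")
        # Errors: exact duplicate records.
        dupes = int(df.duplicated().sum())
        if dupes:
            findings.append(f"{dupes} duplicate rows")
        # Representativeness: under-represented subgroups.
        shares = df[protected_attr].value_counts(normalize=True)
        for group, share in shares[shares < min_share].items():
            findings.append(f"{protected_attr}={group!r}: only {share:.1%} of the data")
        return findings

Running a check like this on every candidate training set, and logging the findings, creates the kind of record regulators will expect to see.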
6. Build a robust documentation system for all AI operations
Consider putting in place technical documentation, policies, and automatic logs that are followed by employees who build, test, and interact with AI systems. Create instruction manuals for your AI system that accurately describe its intended operation. These manuals and documentation will be a valuable resource for demonstrating your good intentions and careful work should your AI system be accused of bias or otherwise attract regulatory scrutiny. Meet your computer and data scientists on their turf by creating reasonable, user-friendly recordkeeping prompts that encourage compliance.
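As one illustration, automatic logs can be as simple as an append-only file of structured decision records. A minimal Python sketch, assuming a JSON-lines audit file; the field names are invented for illustration:

    import datetime
    import json
    import uuid

    def log_decision(model_version: str, inputs: dict, output,
                     path: str = "ai_audit_log.jsonl") -> None:
        """Append one structured record per model decision."""
        record = {
            "id": str(uuid.uuid4()),
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

Capturing the model version alongside each decision makes it possible to reconstruct, after the fact, exactly which system produced a challenged output.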
7. Champion transparency
Ensure your AI operations are transparent so that users can interpret a system’s output and use it appropriately. On April 8, 2020, the FTC’s Bureau of Consumer Protection issued guidance on the use of artificial intelligence and algorithms, which advises that “the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability.” Consider taking concrete steps to show that your company champions both transparency and AI.
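What transparency looks like in practice varies by system, but for simple scoring models a plain-language explanation can be generated directly from the model itself. A minimal sketch, assuming a linear scoring model with invented feature names and weights:

    def explain_decision(weights: dict, applicant: dict, threshold: float) -> str:
        """Produce a human-readable explanation for one decision."""
        contributions = {name: weights[name] * applicant[name] for name in weights}
        score = sum(contributions.values())
        decision = "approve" if score >= threshold else "decline"
        # Report the three factors that mattered most, by magnitude.
        top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:3]
        lines = [f"Decision: {decision} (score {score:.2f}, threshold {threshold:.2f})"]
        lines += [f"  {name}: {value:+.2f}" for name, value in top]
        return "\n".join(lines)

For more complex models the same principle applies: pair every consequential output with a record of the factors that drove it.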
8. Champion accountability and human oversight
Margrethe Vestager, the European Commission’s Executive Vice President, stated that, when it comes to “artificial intelligence, trust is a must, not a nice to have.” Nowhere is this truer than in relation to accountability for AI systems. Under the EU Regulation, each high-risk AI system must be capable of being effectively overseen by human operators in order to minimize “risks to health, safety or fundamental rights” when the system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. The FTC Memo states that if businesses fail to hold themselves accountable, “the FTC may do it for [them].” Companies should consider appointing a lead AI risk officer to streamline accountability throughout the organization and to adopt responsive policies.
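In engineering terms, human oversight often takes the form of a human-in-the-loop gate: outputs the model is unsure about are routed to a person rather than acted on automatically. A minimal sketch; the 0.90 confidence threshold and the in-memory queue are illustrative stand-ins for whatever review workflow your organization uses:

    REVIEW_THRESHOLD = 0.90
    human_review_queue = []

    def decide(case_id: str, prediction: str, confidence: float) -> str:
        """Route low-confidence model outputs to a human reviewer."""
        if confidence < REVIEW_THRESHOLD:
            human_review_queue.append((case_id, prediction, confidence))
            return "pending_human_review"
        return prediction

The threshold itself should be a documented, periodically revisited policy decision, not a number buried in code.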
9. Monitor for risk and take corrective action where you find problems
Companies should consider establishing a risk management system, including frequent testing across the life cycle of their AI systems, and implementing a quality management system with written policies, procedures, and instructions, in order to prevent discriminatory outcomes, security breaches, and other foreseeable negative outcomes.
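One concrete monitoring check is to compare favorable-outcome rates across demographic groups. The sketch below borrows the “four-fifths” ratio from U.S. employment-selection guidance purely as an illustrative alert threshold, not as a legal standard for any particular AI system:

    def parity_alerts(outcomes: dict, floor: float = 0.8) -> list:
        """outcomes maps group -> (favorable_count, total_count)."""
        rates = {g: fav / total for g, (fav, total) in outcomes.items() if total}
        best = max(rates.values())
        return [
            f"{group}: outcome ratio {rate / best:.2f} is below {floor}"
            for group, rate in rates.items()
            if best and rate / best < floor
        ]

Alerts from a check like this should feed directly into the corrective-action process described below.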
Take corrective action whenever you have reason to believe your AI is producing biased or unfair outcomes or is otherwise at risk of regulatory scrutiny.
10. Establish a voluntary code of conduct for AI
Establishing a voluntary code of conduct, even for low-risk AI systems, gives your AI team and your business and legal personnel a roadmap for building and managing compliance as you roll out AI offerings. Promoting an Ethics-by-Design program or instituting AI ethics training also spurs collaboration between the AI professionals who build the models and those tasked with oversight and compliance. By setting a tone from the top that promotes transparency and ethical behavior around AI, you can build stronger, more equitable algorithms and reduce regulatory risk.