Artificial Intelligence Regulation Takes Shape


November 18, 2021

Artificial Intelligence (AI) has the potential to create breakthrough advances in a range of industries. It also raises legal and ethical questions that will likely define the next era of technological advancement.

Companies with AI-based products and services should understand the changing regulatory landscape. Here’s a look at some recent changes in Europe and the United States.

Europe

The European Commission proposed a regulation (the EU AI Act) in 2021 to harmonize AI rules across member states. It takes a risk-based approach, with controls on an AI system keyed to the system’s intended purpose. The EU AI Act proposes a sliding scale of rules that would classify AI applications as posing unacceptable, high, limited, or minimal risk. (See our breakdown of the proposed European framework.)

The proposal will become law once the European Parliament and the Council of the European Union agree on a common version. Negotiations are expected to be complex, with thousands of amendments already proposed by political groups in the European Parliament. Once adopted, the regulation will apply across the EU, possibly as early as 2024.

If adopted, the regulation would have significant consequences for companies that develop, sell or use AI systems, including new legal obligations and a monitoring and enforcement regime with hefty penalties for non-compliance. Specifically, the regulation would require companies to register stand-alone, high-risk AI systems, such as remote biometric identification systems, in an EU database. Potential fines for non-compliance range from 2% to 6% of a company’s annual worldwide turnover, depending on the severity of the violation.
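To put those percentages in perspective, here is a rough, hypothetical illustration, assuming the tiered structure in the 2021 proposal under which the fine is the higher of a flat cap or a percentage of worldwide annual turnover (all figures below are illustrative):

```python
# Illustrative penalty tiers modeled on the 2021 proposal: the fine is
# the HIGHER of a flat cap or a percentage of worldwide annual turnover,
# with the tier depending on the type of violation.

TIERS = {
    "prohibited_ai_practice": (30_000_000, 0.06),  # e.g., banned AI uses
    "other_obligation": (20_000_000, 0.04),        # e.g., high-risk requirements
    "incorrect_information": (10_000_000, 0.02),   # e.g., misleading regulators
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the maximum potential fine for a given violation tier."""
    flat_cap, pct = TIERS[violation]
    return max(flat_cap, pct * annual_turnover_eur)

# A hypothetical company with EUR 2 billion in worldwide annual turnover:
print(max_fine("prohibited_ai_practice", 2_000_000_000))  # 120,000,000.0
print(max_fine("incorrect_information", 2_000_000_000))   #  40,000,000.0
```

For large companies the percentage-based ceiling, not the flat cap, typically drives the maximum exposure.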

The regulation has striking similarities to the General Data Protection Regulation, or GDPR, which already carries implications for AI: Article 22 prohibits decisions based solely on automated processing that produce legal or similarly significant effects for individuals unless the data subject has explicitly consented or another exception applies, such as the decision being necessary for a contract or authorized by law.

United States

Unlike Europe, with its proposed comprehensive framework, the United States has taken a piecemeal approach: regulatory guidelines have been proposed by several federal agencies as well as by several state and local governments.

Here are key U.S. developments in AI regulation as well as ways companies can avoid potential regulatory pitfalls.

Department of Commerce / National Institute of Standards and Technology

A flurry of AI-related activity has emanated from the Department of Commerce, including a move towards a risk-management framework.

Congress has directed the National Institute of Standards and Technology (NIST), part of the Commerce Department, to develop “a voluntary risk management framework for trustworthy AI systems.” That framework may greatly influence how companies approach AI-related risks, including avoiding bias and promoting accuracy, privacy and security. NIST said in its Principles on Explainable AI that AI algorithms should do the following (a brief illustrative sketch appears after the list):

  • Have accompanying evidence or reason(s) for all outputs.
  • Be understandable to individual users.
  • Correctly represent how the system generates the output.
  • Operate only under conditions for which they were designed, and only when they reach sufficient confidence in their output.
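To make these principles concrete, here is a minimal, hypothetical sketch (our own illustration, not drawn from NIST materials) of a scoring model that returns per-feature evidence with each output and declines to operate outside its designed conditions:

```python
# Hypothetical sketch of the explainability principles: each output
# carries evidence (per-feature contributions), the explanation reflects
# how the score is actually computed, and the model declines to operate
# outside the conditions it was designed for ("knowledge limits").

WEIGHTS = {"income": 0.5, "debt": -0.3, "years_employed": 0.2}
DESIGN_RANGES = {"income": (0, 500_000), "debt": (0, 1_000_000),
                 "years_employed": (0, 50)}

def score_with_explanation(applicant: dict) -> dict:
    # Knowledge limits: refuse inputs outside the designed operating range.
    for feature, (lo, hi) in DESIGN_RANGES.items():
        if not lo <= applicant[feature] <= hi:
            return {"score": None,
                    "reason": f"{feature} outside designed range [{lo}, {hi}]"}
    # Evidence: these contributions ARE the computation, so the
    # explanation correctly represents how the output is generated.
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    return {"score": sum(contributions.values()), "evidence": contributions}

print(score_with_explanation(
    {"income": 80_000, "debt": 20_000, "years_employed": 5}))
```

Real systems are far more complex, but the pattern of pairing every output with its supporting evidence, and refusing to answer outside design limits, is the behavior the principles describe.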

In September 2021, the Department of Commerce established the National Artificial Intelligence Advisory Committee to offer recommendations on the “state of U.S. AI competitiveness, the state of science around AI, issues related to the AI workforce” and how AI can enhance opportunities for underrepresented populations, among other topics.

Given its responsibilities and engagement with AI, the Department of Commerce appears poised to play a central role in the federal approach to AI regulation.

National AI Initiative Act

In January 2021, the National AI Initiative Act (U.S. AI Act) became law. It created the National AI Initiative, which provides “an overarching framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies.” The U.S. AI Act created offices and task forces aimed at implementing a national AI strategy, implicating a multitude of U.S. agencies, including the Federal Trade Commission (FTC), Department of Defense, Department of Agriculture, Department of Education, and the Department of Health and Human Services.

Algorithmic Accountability Act of 2022

If passed, the Algorithmic Accountability Act would require large technology companies to perform a bias impact assessment of automated decision-making systems in a variety of sectors, including employment, financial services, healthcare, housing, and legal services. Introduced in February 2022, the bill defines “automated decision system” to include “any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.” The bill is an updated version of the Algorithmic Accountability Act of 2019, which was never enacted.

Federal Trade Commission

The FTC made clear in 2021 that it will pursue enforcement over the use of biased algorithms. It provided a roadmap for its compliance expectations, warning companies to “keep in mind that if you don’t hold yourself accountable, the FTC may do it.” Among other things, companies should:

  • Rely on inclusive data sets: “Companies should think about ways to improve their data sets, design their model to account for data gaps, and—in light of any shortcomings—limit where or how they use the model.”
  • Test an algorithm before use and periodically afterwards “to make sure that it doesn’t discriminate based on race, gender, or other protected class” (a simple check is sketched after this list).
  • Be truthful about how they use customers’ data and not exaggerate an algorithm’s abilities.
  • Embrace transparency and independence.
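As a concrete illustration of that testing point, here is a minimal sketch, of our own devising rather than an FTC-prescribed method, that compares a model’s selection rates across groups using the familiar four-fifths rule of thumb:

```python
# Hypothetical sketch of one common fairness check: compare selection
# rates across protected groups. The four-fifths rule of thumb (borrowed
# from employment law) flags a ratio below 0.8. This is one test among
# many, not a complete or FTC-mandated audit.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive (e.g., approved) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = approved, 0 = denied, split by a protected attribute.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.50
if ratio < 0.8:
    print("Flag for review: selection rates differ substantially.")
```

A low ratio does not by itself establish unlawful discrimination, but it is the kind of signal that warrants investigation and documentation before deployment.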

In June 2022, the FTC said it plans to “ensure that algorithmic decision-making does not result in harmful discrimination.” An FTC report to Congress discussed how AI could combat online harms such as scams, deepfakes, and illegal opioid sales. The report also noted, however, that AI itself is susceptible to producing biased and discriminatory outcomes.

The White House

The U.S.-E.U. Trade and Technology Council has committed to developing “AI systems that are innovative and trustworthy and that respect universal human rights and shared democratic values.” The council also plans to discuss “measurement and evaluation tools … to assess the technical requirements for trustworthy AI” and to study the technology’s impact on the labor market.

In November 2021, the White House Office of Science and Technology Policy solicited engagement from stakeholders across industries in an effort to develop a “Bill of Rights for an Automated Society.” It could cover topics like AI’s role in the criminal justice system, equal opportunities, consumer rights, and the healthcare system.

Food and Drug Administration

The FDA oversees artificial intelligence/machine learning-based software as a medical device (AI/ML-based SaMD), meaning software intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions. The agency’s action plan outlines how it intends to oversee the development and use of such software.

National Security Commission on Artificial Intelligence and Government Accountability Office (GAO)

The National Security Commission on Artificial Intelligence recommended in 2021 that the government protect privacy, civil rights, and civil liberties in its AI deployments. It noted that a lack of public trust in AI on privacy or civil rights/civil liberties grounds would undermine the deployment of AI for U.S. intelligence, homeland security, and law enforcement purposes. The commission advocated public sector leadership in promoting trustworthy AI, which will likely affect how AI is deployed and regulated in the private sector.

Also in 2021, the GAO identified practices to help ensure accountability and responsible AI use by federal agencies. It identified four key focus areas:

  • Organization and algorithmic governance.
  • System performance.
  • Documenting and analyzing data to develop and operate an AI system.
  • Continuous monitoring and assessment to ensure reliability and relevance over time.

EEOC

In May 2022, the Equal Employment Opportunity Commission warned companies that using algorithmic decision-making tools to assess job applicants and employees could violate the Americans with Disabilities Act by intentionally or unintentionally screening out individuals with disabilities.

New York City’s Biometric Data Protection Law

The New York City Biometric Identifier Information Law (NY Biometric Act) applies to the collection and processing of “biometric identifier information,” which is defined as a “physiological or biological characteristic … to identify, or assist in identifying, an individual,” such as a retina or iris scan, fingerprint or voiceprint, or a scan of hand or face geometry. The NY Biometric Act applies only to a “commercial establishment,” defined as a place of entertainment, a retail store, or a food and drink establishment. The law has two primary legal requirements:

  • Commercial establishments that collect, retain or share a customer’s biometric identifier information must disclose these activities “in plain, simple language” on signs near customer entrances.
  • Commercial establishments cannot “sell, lease, trade, share in exchange for anything of value or otherwise profit from the transaction of biometric identifier information.”

Next Steps: What Should Companies Do?

Regulators have sent a clear message that federal AI regulation is on the horizon. Companies should:

  • Craft policies and procedures to create a compliance-by-design program promoting AI innovation while ensuring transparency and explainability.
  • Audit and review usage periodically.
  • Document these processes so the company can respond to regulators who may seek further information (a minimal logging sketch follows this list).
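As one possible starting point for the audit and documentation steps above, here is a minimal, hypothetical sketch of logging each automated decision with enough context to reconstruct it later; the file name and fields are illustrative, not a standard:

```python
# Hypothetical sketch of decision logging for auditability: record each
# automated decision with its inputs, output, model version, and timestamp
# so that periodic reviews and regulator inquiries can be answered from
# the log.

import json
import datetime

AUDIT_LOG = "ai_decision_log.jsonl"  # illustrative file name

def log_decision(model_version: str, inputs: dict, output) -> None:
    """Append one decision record, as a JSON line, to the audit log."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: wrap an existing scoring call.
applicant = {"income": 80_000, "debt": 20_000}
decision = "approved"  # stand-in for a real model's output
log_decision("credit-model-v1.3", applicant, decision)
```

Recording the model version alongside each decision makes it possible to tie any later complaint or inquiry back to the exact system that produced the outcome.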


This article was published in November 2021 and updated in September 2022.