In recent years, the use of artificial intelligence (AI) has proliferated across industries. With the widespread adoption of this technology, many business decisions are now made in consultation with, if not in reliance upon, AI through what has come to be known as profiling or automated decision-making. Concurrent with the rise of AI, states across the country have enacted a growing number of privacy laws.
AI depends on huge data sets that can include personal information, including sensitive personal information. Consequently, privacy laws have become a primary means of addressing the risks inherent in relying on AI to make decisions with legal and social consequences, such as loan approvals or employment decisions. These laws seek to ensure that AI uses personal information responsibly and to put consumers in control of their personal information when it is used for automated decision-making. But they also create new obligations that organizations must assess and comply with. Companies employing artificial intelligence should therefore be cognizant of the specific requirements of the laws to which they are subject. With a handful of new state privacy laws coming into effect in 2023, we have outlined key takeaways for companies that are currently using or plan to roll out AI in their business.
While the California Consumer Privacy Act (CCPA) is silent about automated decision-making, the California Privacy Rights Act (CPRA) (which amends the CCPA), the Colorado Privacy Act (CPA), the Virginia Consumer Data Protection Act (VCDPA), and the Connecticut Data Privacy Act (CTDPA) all grant consumers rights regarding opting out of the processing of their personal information for purposes of profiling and create requirements that impact automated decision-making.
Though the definitions of automated decision-making and profiling differ slightly across the state privacy laws, profiling generally refers to an organization attempting to evaluate personal aspects of a data subject via the processing of their personal information. Relatedly, automated decision-making refers to an organization either (i) acting upon profiling to make a decision by automated means without human intervention or with limited human intervention or (ii) establishing an automated system that renders a decision based directly on information provided by a data subject (such as an age gate that would prevent anyone under a certain age from being able to participate in a program or apply for a position).
While the CCPA is silent on automated decision‑making, the CPRA, which becomes effective on January 1, 2023, and amends and expands the concepts in the CCPA, directly addresses automated decision-making. The CPRA added a new definition of “profiling,” giving consumers opt-out rights with respect to businesses’ use of “automated decision-making technology,” which includes profiling consumers based on their “performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.”
The CPRA does not limit this right to profiling, and it leaves the scope of the right to be defined by the California Privacy Protection Agency (CPPA). The CPPA is charged with adopting regulations “governing access and opt-out rights with respect to businesses’ use of automated decision-making technology,” including requirements to provide meaningful information about the logic involved in a decision and its likely outcome with respect to the consumer. Importantly, the CPPA’s rulemaking mandate is broad and is not currently limited to “solely” automated decisions or those with legal effects. To date, the CPPA has not published rules addressing automated decision-making.
For more information on the CPRA, see Orrick’s CPRA On The Way Tool.
CPA, VCDPA, and CTDPA
The VCDPA, which also becomes effective on January 1, 2023, and the CPA, which becomes effective on July 1, 2023, will enable individuals to opt out of “profiling in furtherance of decisions that produce legal or similarly significant effects” concerning the consumer. Such decisions are generally defined as those that result in the provision or denial of financial and lending services, housing, insurance, education enrollment or opportunities, criminal justice, employment opportunities, healthcare services, or access to basic necessities. The CTDPA provides an opt-out right similar to Colorado’s and Virginia’s, but only for “solely automated decisions.”
By way of comparison, the VCDPA’s definition of “profiling” aligns with the CPRA’s, and the VCDPA’s opt-out right is identical to the CPA’s: both allow consumers to opt out of the processing of their personal information for the purpose of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.
The VCDPA, CPA, CPRA amendments, and the CTDPA all require data controllers to conduct a data protection impact assessment (DPIA) for processing activities that present a “heightened risk of harm to a consumer.” A heightened risk generally includes processing personal data for targeted advertising, selling personal data, processing sensitive data, and profiling that presents certain reasonably foreseeable risks of harm to consumers.
These DPIAs must identify and weigh the risks and benefits of the processing to consumers, the controller, other stakeholders and the public at large that may flow from the processing, as mitigated by safeguards employed to reduce such risks. They are not intended to be made public or provided to consumers. Instead, the DPIAs must be made available to the state attorney general upon request, pursuant to an investigative civil demand. If companies identify a heightened risk in relation to any processing of personal information carried out by AI, they will now need to conduct DPIAs.
Not All State Privacy Laws Target Automated Decision-Making or Profiling
The Nevada Privacy Law (NPL) is silent on the topic of automated decision-making and profiling. The Utah Consumer Privacy Act (UCPA), which will go into effect on December 31, 2023, does not provide consumers a right to opt out of profiling, nor does it require businesses to affirmatively assess data processing that presents a “heightened risk of harm,” such as the use of sensitive data or profiling.
If your business uses AI with underlying data that includes personal information, you should carefully assess how you collect and use personal information and sensitive information and ensure compliance with the various requirements of state privacy laws. For more information about the requirements under specific state privacy laws, see Orrick’s U.S. State Consumer Privacy Guide.
For more information on best practices for building your AI compliance program, see Orrick’s Insight on AI Tips: 10 Steps to Future-Proof Your Artificial Intelligence Regulatory Strategy.
Stay tuned for updates as state privacy laws and accompanying regulations around AI systems are rolled out over the coming months.