RegFi Episode 74: AI, Audits & Risk Assessments: Inside California’s Final CCPA Regulations
26 min listen
Orrick partner Shannon Yavorsky — head of the firm’s Global Cyber, Privacy & Data Innovation group and co-lead of the AI practice — joins RegFi co-hosts Jerry Buckley and Sherry Safchuk to unpack newly approved California Consumer Privacy Act regulations. The conversation offers insights into the definitions and requirements for automated decision-making technology (ADMT), risk assessments, and cybersecurity audits, as well as open questions surrounding GLBA data carve-outs for financial services, and the reasons behind California’s shift from defining “AI” to regulating ADMT use cases.
| Jerry Buckley: | Hello, this is Jerry Buckley, and I'm here with RegFi co-host Sherry Safchuk. We are joined today by our partner, Shannon Yavorsky, who is the head of Orrick's Global Cyber, Privacy and Data Innovation Group and also co-leads our Artificial Intelligence Group. On September 23rd, the California Office of Administrative Law approved significant new and amended regulations under the California Consumer Privacy Act as proposed by the California Privacy Protection Agency. These regulations introduce new requirements, update existing rules and focus on cybersecurity audits, risk assessments and, pretty importantly, automated decision-making technology, or ADMT. As our listeners will recall, we have dealt with AI regulation in the past in our discussions about the EU AI Act and otherwise. And at the end of our discussion here, we'll try to put the regulation of AI in context. But let's begin by asking Shannon if you could provide us with a high-level overview of the regulations and give your thoughts on what prompted the CPPA to introduce these regulations on the subjects we've just referenced. |
| Shannon Yavorsky: | Absolutely. Well, thank you for having me. Let me set the stage first. So, on September 23rd, the California Privacy Protection Agency, better known as the CPPA, announced that the new regulations under the California Consumer Privacy Act, the CCPA, had been officially approved. This was a long time coming. The CPPA had adopted them in July, and this actually marks the end of a rulemaking process that kicked off way back in 2024. So, what's in these new regulations? They cover three big-ticket items: cybersecurity audits, risk assessments and automated decision-making technology, or ADMT, plus some tweaks to existing CCPA rules. So why did the CPPA do this? Part of it is pretty straightforward. The CPRA, which amended the CCPA, required the adoption of regulations on a whole list of topics. I think there were 22 in total, including these three areas. So, in some ways, the CPPA's hands were tied. They had to produce these rules. But it's not just a box-ticking exercise. There are real goals articulated here: increasing transparency and consumer control over personal data, reducing unauthorized access and harm, promoting fairness by tackling discrimination risks in AI and ADMT, and most importantly, trying to sort of build trust with consumers. And they need to. One survey found that 81% of adults who had heard of AI believe companies using AI are using data in ways that people wouldn't be comfortable with. So, there's a big sort of trust gap there that this is potentially going to help to bridge. |
| Jerry: | Well, how do the regulations address the use of generative AI? And of course, the public is very aware of generative AI and potential hallucinations and what might affect their jobs. But how will these regulations affect generative AI and large language models? |
| Shannon: | Great question, because this is the one that everyone wants to know the answer to. How does this affect tools like ChatGPT? The short answer is: not directly, at least. The regulations are really aimed at automated decision-making for what they call significant decisions. And this language comes directly from the GDPR, so that was kind of the blueprint for some of the language around automated decision-making technologies, which is some interesting history there. But significant decisions, think of things like whether you get a loan, a job, a spot at a university, or healthcare services. That's very different from using GenAI as a kind of better Google or personal assistant, which is how a lot of us interact with large language models today. That said, there are some knock-on effects. For example, if a business provides ADMT to another business so the second business can make significant decisions, the first business has to share the facts needed for the recipient to do its own risk assessment. So, if you're training a model with consumer data for the purpose of making significant decisions, there needs to be some kind of risk assessment there. So, the bottom line is big limitations and disclosures come into play when AI is used for really impactful decision-making, not so much when it's being used the way I've used it, to draft sonnets about data processing agreements. |
| Jerry: | Yeah. Shannon, very importantly, you mentioned big decisions, and you cited, for instance, whether you're going to get a loan or a mortgage. I believe you cited that. Unlike some of the provisions of CCPA, where there's a Gramm-Leach-Bliley exemption, how might these ADMT provisions impact the financial services industry? |
| Shannon: | That's a really great question. So, there are lots of varying carve-outs for GLBA data and GLBA entities. And this is spread across the California Consumer Privacy Act, but also across all of the emerging panel of state privacy laws. And a huge topic right now is whether the state level carve-out is just for GLBA covered data or for the GLBA-covered entity. And I know, Sherry, this is your world, and you sort of talk about these carve-outs in a lot more detail. Do you have any thoughts on that one? |
| Sherry Safchuk: | Yeah, I mean, I think we're going to see the entity-level exemptions probably decrease in favor of data-level carve-outs, similar to the CCPA. There are a few other states following in the footsteps of California, and I would not be surprised if we saw other states including lending decisions or decisions about financial products and services in their AI or ADMT regulations. |
| Jerry: | And Sherry, if I could just pursue that with you for one more second. California obviously does not have an entity exemption. |
| Shannon: | Right. Just data. |
| Jerry: | Just data. So now when we come to ADMT, is that going to fall in the data exemption? Or are these rules going to be applicable to financial services firms that are dealing with consumers? |
| Sherry: | There was a lot of back and forth in the comments regarding ADMT. And I worked with a couple trade groups, financial services trade groups, to try and parse through what that GLBA data exemption means with respect to the ADMT regulations. In the initial version of the regs, the CPPA made it clear that this wouldn't apply to GLBA data. And then in subsequent proposed regulations, they took out the express exemption and then noted that, you know, the regulations can't revise CCPA. So, all GLBA data is exempt, but we don't need to include that in the regs. So, there was a lot of kind of confusion as to whether the ADMT requirements applied to financial products and services. And I think the comments make it clear that it doesn't, but they still left in that language about lending decisions. |
| Jerry: | More on this at another podcast. Now, but returning, Shannon, to the question of CCPA, how does it define artificial intelligence and why is that going to be important in compliance? |
| Shannon: | Oh, that's such a critical question. How do you define AI? This is something that held up the passage of the EU AI Act. And the central theme here is that it's challenging because AI is evolving so quickly. And no one wants the definition to be so broad that you end up regulating calculators, but no one wants it to be so narrow that you only regulate a narrow set of, you know, highly tuned AI systems. So, the CPPA actually pulled back here. Early drafts of the regulations included a pretty detailed definition of AI, and it went something like this. AI means a machine-based system that infers from the input it gets how to generate outputs, things like predictions, recommendations or decisions that can influence physical or virtual environments. And that definition explicitly included generative models like LLMs and facial or speech recognition. But in the final version of the regulations, that definition was gone. Instead, the rules zeroed in on ADMT as the operative term. And why does that matter? For compliance purposes, businesses don't need to worry about the broad sweeping definition of AI. The focus is narrower. So, it's on automated decision-making technology and whether your use case falls into that bucket. It gives businesses a little more clarity and avoids over-regulation of tools that don't actually make decisions. It's a really critical one. |
| Jerry: | Just digging a little deeper here, Shannon, automated decision-making technology, ADMT, how is that term defined and what are the implications for businesses using AI tools? |
| Shannon: | So, this is really the heart of it. ADMT is defined as any technology that processes personal information and uses computation to either replace or substantially replace human decision-making. So substantially replace is the key phrase here. If a human rubber stamps what the technology spits out without really analyzing it, it counts as ADMT. On the other hand, if a human is truly reviewing, analyzing, and has authority to change or override the technology's recommendation, then there's an argument you're not in that territory. What's covered? Profiling tools and AI that make or heavily influence decisions about people. What's not covered? Things like spellcheck or data storage, tools that don't replace human judgment. Now, if you're using ADMT for what the regs call a significant decision, things like employment, housing, health care, education, financial services, you've got obligations. You need to give consumers clear conspicuous notice before use, explain how the ADMT works and even offer them the ability to opt out. So, what's the practical takeaway for businesses? You can still use AI in a lot of different ways, HR screening, accounting, payroll, but don't let it make the final decision. And for me, this really goes to the through line in all of what we talk about in AI regulation, which is human in the loop. And that's where the line is drawn. |
| Jerry: | That's very helpful, and I thank you. I'd like to turn to Sherry for a second and say, what is the cybersecurity audit requirement? Since the requirements vary by business size, what companies are affected and what should they be doing to prepare themselves for this? |
| Sherry: | So that's a great question. And kind of at a higher level, what's interesting about these regulations is that they not only address privacy and automated decision-making but also add a cybersecurity component. And one that drew a lot of comments along the lines of, why is it needed? Because there are other, broader industry standards that could stand in place of the CCPA requirements. The CPPA disagreed with that and said basically this is its mandate and it's going to provide requirements related to cybersecurity audits. And so now the CCPA regulations require every business to perform an annual audit of the processing of consumers' personal information where it presents a significant risk to consumer security. And the idea is to audit all the specific cybersecurity controls used: any of those components, any gaps, any weaknesses. And while there's some leeway in what a qualified auditor is and what the appropriate audit process is, it's going to be some combination of probably an internal individual and maybe a third-party external auditor to assist companies with performing this cybersecurity audit. The requirements say that the auditor must be qualified, objective and independent. And I think what independence means is that the internal audit function needs to be able to report directly to your CEO, your board, your audit committee. And that could be challenging for many internal audit departments because they may not have the necessary qualifications to complete a cybersecurity audit. The other thing the regulations did is identify the components that audits must include. And while the rule doesn't say you must do A, B, C and D, it does provide a list of reasonable controls, which gives the impression that this is what the CPPA expects programs to have, and it sets the standards by which auditors would report potential gaps or weaknesses.
So, we're talking about things like authentication, encryption, network monitoring or defenses, cybersecurity education and training, and incident response. And then the CCPA regulations set forth what must be included in the audit report, such as: What are the systems being audited? What information is the auditor using to make these determinations and support their findings? And then these businesses must provide an annual certification. So, there are a lot of layers of complexity when it comes to the cybersecurity audit requirements. And I think we're just going to see that develop as technology develops. |
| Jerry: | Sherry, we're going to come later to the question of timing, but it does sound like there's going to have to be a lot of work between now and the time when people are able to comply. And business opportunities for people in the cyber risk space. What are the expectations for risk assessments? And how can businesses avoid treating them as check-the-box type exercises? |
| Sherry: | Thanks, Jerry. So, I want to kind of differentiate between what a risk assessment is versus what a cybersecurity audit is, because I've seen the terms used interchangeably. The cybersecurity audit is really focused on the cybersecurity aspects of protecting personal information. But the risk assessment is not a cybersecurity risk assessment. It's a privacy impact assessment. So, it's a different concept where the business must perform this risk assessment any time it is processing a consumer's personal information in a way that presents significant risk to the consumer's privacy. And what does significant risk mean? It means selling or sharing personal information. It means processing sensitive personal information. And then, looping this back into what Shannon was talking about, it also means the use of ADMT to make a significant decision concerning a consumer. So, if a business is engaging in any of these activities, then the CPPA will require a risk assessment. And the risk assessment will include information like the purpose for processing the personal information, the categories of personal information to be processed, the planned method for collecting the personal information, how many consumers will be impacted, and what disclosures are to be made. So, there's quite a list of requirements that businesses must consider when proceeding with these risk assessments. One other interesting thing is that you don't have to provide the risk assessment itself to the CPPA. You just have to provide them with a certification that you conducted the risk assessment. But the agency can ask for it upon request. So, companies really need to be ahead of the game with how they're proceeding with these risk assessments, especially when there's use of ADMT, because it's something that the CPPA may ask for and may start to examine more closely to determine compliance. |
| Jerry: | Well, we've mentioned this a little bit before, but let's talk about the timeline for compliance with these new regulations, with the whole spectrum of the regulation that became final on the 23rd of September. |
| Sherry: | Absolutely. So, the timeline for compliance is staggered. With respect to risk assessments, businesses must start conducting them beginning in 2026, next year. But they won't be required to submit them to the CPPA until 2028. And so that means companies should start preparing these risk assessments in 2026, even on a trial basis. For the ADMT requirements, compliance begins in January 2027. So, we have a little over a year to consider those regulations and determine how they may impact businesses. And then the cybersecurity audits and audit reports are on a rolling basis, staggered according to the business's annual gross revenue. So, if a business makes over $100 million, its compliance begins April 1st, 2028. If a business makes less than $50 million, its compliance begins April 1st, 2030. So, the CPPA appears to have taken into consideration the size of the business when determining the timelines for compliance. |
| Jerry: | Thank you, Sherry. That gives our listeners a sense of what lies ahead. But I want to return to a discussion with Shannon. You know, we've had prior discussions about the EU AI Act and risk assessments under that act. We are aware that the administration issued America's AI Action Plan, which called for not overcomplicating the regulatory process so that America could be the dominant player in AI. There was a proposal, actually, in the Big, Beautiful Bill for a restriction on state regulation of AI, and that was dropped before the final text of the bill went to the president for signature. And yet there is this admonition from America's AI Action Plan that states should be careful, and that other actions may be taken if things become too complicated. And then, you know, we do have California, the largest state, taking this action. There are other states as well. When you're advising clients, how do you describe the landscape here, and where do you think we're headed? |
| Shannon: | Yeah, it's a really good question and one that we're often discussing with our clients. So we talk about the different buckets of law that companies have to pay attention to and the fragmentation of the legislative landscape. You have the federal laws, like the FTC Act, for example, and other federal legislation that the regulators have been very clear applies to AI, and those regulators will regulate AI as it falls within their jurisdictional authority. So, federal laws. You have the privacy laws that talk about automated decision-making technology. There are now, I think, at last count, 20 state privacy laws. And then you have this emerging class of state AI-specific laws, of which I think there are now over 160. So those laws are being passed very, very quickly. Over 1,000 have been proposed since the beginning of the year. Then you have guidance documents from federal agencies. You have guidance from the EEOC, and the CFPB has issued guidance on AI. Then you have the federal executive orders on AI. And then you have enforcement actions coming out of the state AGs and the FTC. So that's a body of law that companies need to pay attention to as well. So where is all of this going? That's a great question. I feel like something's got to give in some ways. You know, I said with privacy laws that once there were more than five or six, someone would do something about it and there would be a federal privacy law. And here we are years later with 20 different laws. So, I think it's going to continue to be a subject of discussion, and there's going to, at least in the near term, continue to be a proliferation of state AI laws and guidance to help businesses sort of muddle through this whole landscape. |
| Sherry: | It sounds like the privacy regulations, the AI regulations, they're kind of following the model that financial services followed, where every financial institution that is doing business in a specific state has to have a license unless an exemption applies. And so now you have companies obtaining 50 plus licenses to do business in the U.S. And I just anticipate that that will occur in the privacy, in the AI, in the cybersecurity space. And I think that might be detrimental in the AI space because it might hold us back with respect to improving the technology, building onto that technology and so forth. So, I think the next couple of years are going to be really interesting. |
| Jerry: | Agreed. It was true in the case of privacy laws that while they didn't slavishly follow California, California did sort of set a standard and others followed. Do you think that will be true with respect to ADMT? |
| Shannon: | That's a really good question. I mean, I think where California goes, others certainly have followed. And the state privacy laws in and of themselves copy-paste some of the provisions from the CCPA. So, I think it's likely that the ADMT regulations will form a part of emerging state guidance in law. |
| Sherry: | With respect to privacy, we saw privacy laws either follow the CCPA model or the GDPR model. I wonder if that will be the same with the ADMT where these states are looking kind of outside the U.S. for guidance on how to proceed. |
| Jerry: | There's no question that the EU AI Act has not been without controversy within the EU. And the concern there, again, is the same one, I guess, expressed by America's AI Action Plan. We don't want to hold back the advance of AI in our jurisdiction. We want companies to be creative and forward-looking. And that's the tension we face. If we put too many regulations in place, we restrain the advance, but there's also the appropriate view that we want to make sure we protect individual people from adverse impact. And whether we're going to be able to do both remains to be seen. We certainly didn't see it happen in the privacy area. There were feints, and people thought we would see federal action. In fact, Shannon, we did an episode with you on the proposal when it looked like there was going to be movement on the federal privacy front, but it didn't happen. And so it'll be very interesting to see how this advances. And I wish we could give our listeners the answer, but at least the question has been explored. We are out of time, I'm afraid. I hope this episode has been helpful to our listeners in understanding what's happening in California and what lies ahead. And I want to thank you, Shannon, for joining us, and Sherry, for answering these tough questions. |
| Shannon: | Thanks for having me. |
| Sherry: | Thank you. |