Addressing Artificial Intelligence in Your Privacy Notice: 4 Recommendations for Companies to Consider


5 minute read | April 25, 2024

As companies increasingly leverage AI in their operations, the obligations and expectations for AI-related consumer disclosures continue to evolve. As a result, companies seeking to use consumer-oriented AI face uncertainty about how to integrate AI disclosures into privacy notices. 

Here are four recommendations for companies to consider to mitigate risk in the face of these evolving legal and regulatory standards, along with an overview of the key legal obligations and regulatory expectations for AI-related privacy notice disclosures in the United States.

  1. Assess whether the company’s use of AI results in a new form of processing personal data that will require new or amended privacy notice disclosures. 

Nearly every federal and state privacy law that imposes privacy notice obligations requires a company to accurately and comprehensively describe how it collects, uses, shares and otherwise processes personal data. To the extent AI is used to process personal data, most privacy laws will require the privacy notice to describe that processing the same way the company would describe any other form of technological processing of personal data.

As a result, companies should review their privacy notice disclosures to assess whether the use of AI is simply a different way of performing processing activities already described in the notice, or whether it results in new forms of processing that require new or amended disclosures.

Unless the use of AI triggers a specific disclosure requirement outlined below, the company can describe the processing without necessarily disclosing that AI is performing it. However, disclosing that certain processing may be performed in an automated fashion can help reduce the risk of a claim that the company tried to hide its use of AI.

  2. Determine whether existing privacy notice disclosures sufficiently inform consumers that the company may use their personal data to train AI models.

Regulators are increasingly scrutinizing the use of personal data to train AI models. They are particularly focused on whether consumers would understand, based on disclosures, that the company may use their personal data for these purposes. 

Some argue that a sufficiently robust “service or technology improvement” disclosure in the privacy notice should put consumers on notice that their personal data may be used for AI development. Increasingly, however, it is beneficial to add more precise language about how the company uses personal data to train AI models.

As a result, companies should review whether existing privacy notice disclosures sufficiently notify consumers that their personal data may be used to train AI models. If not, they should consider bolstering disclosures with additional specificity. 

  3. Inform consumers in certain states about automated decision-making—and how they can opt out.

An increasing number of states provide consumers the right to opt out of the automated processing of personal data to evaluate, analyze or predict personal aspects concerning their economic situation, health, demographic characteristics, personal preferences, interests, reliability, behavior, location or movements in furtherance of decisions that produce legal or similarly significant effects. These include decisions that result in the provision or denial of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, healthcare services, or access to essential goods or services such as food and water.

Once their laws are in effect, companies subject to the comprehensive state privacy laws of California (new regulations forthcoming), Colorado (in effect), Connecticut (in effect), Delaware (January 2025), Florida (July 2024), Indiana (January 2026), Montana (October 2024), New Jersey (January 2025), Oregon (July 2024), Tennessee (July 2025), Texas (July 2024) and Virginia (in effect) must inform consumers about this processing and their ability to exercise the right to opt out. Additional states are likely to incorporate this requirement as they finalize their own comprehensive privacy laws.

Some states require even more substantial disclosures. For example, Colorado requires: 

  • A description of decisions made automatically.
  • The categories of personal data processed in furtherance of the decisions.
  • A non-technical, plain language explanation of the logic used in the profiling process.
  • A non-technical, plain language explanation of how profiling is used in decision-making, including the role humans play (if any).
  • Whether the system has been evaluated for accuracy, fairness or bias, including the impact of using sensitive data and the outcome of any such evaluation.
  • The benefits and potential consequences of the decision being made. 

Companies using AI in automated decision-making should analyze state privacy laws to determine whether they must include automated decision-making disclosures in their privacy notices. If disclosures are required, consider working with legal counsel to balance meeting the disclosure requirements against the risks posed by exaggerated promises about accuracy, fairness or bias and by over-disclosure of proprietary technical details of the processing.

  4. Ascertain whether privacy notice updates are substantive or material enough to trigger heightened notice or disclosure requirements.

Any material updates a company makes to its privacy notice can be applied:

  • To new personal data collected from new customers immediately once the notice is effective.
  • To new personal data from existing customers within a reasonable period of time after the company has notified customers of the changes.
  • To personal data previously collected by the company where the consumer is provided notice of the changes and consents to the updated personal data processing. 

Regulators have cautioned companies against making surreptitious changes to privacy notices to address new AI strategies. For example, the Federal Trade Commission recently warned companies that quietly updating their privacy notices to adopt more permissive data practices to facilitate AI training could be an unfair or deceptive practice. 

As a result, companies should assess whether privacy notice updates are immaterial clarifications that do not require separate notice to or consent from consumers, or substantive, material revisions that may trigger heightened notice or consent requirements. We recommend working with legal counsel to perform this assessment.

Certain U.S. sectors or technologies may be subject to additional disclosure requirements, such as financial services or AI-powered biometric technologies. In addition, laws outside the United States may impose different or additional disclosure requirements. 

Have questions about using AI consistent with U.S. or international privacy law and privacy notice disclosures? Reach out to Shannon Yavorsky, Nick Farnsworth and Cosmas Robless or other members of the Orrick team.