Massachusetts Attorney General Shares Artificial Intelligence Guidance: What Businesses Need to Know


5-minute read | April 19, 2024

For the first time, a state attorney general has issued guidance to businesses on the use of artificial intelligence (AI) in their state.

An advisory opinion from Massachusetts Attorney General Andrea Campbell highlights the important role state AGs will play in enforcing laws related to AI. State AGs have broad powers to bring enforcement actions under consumer protection statutes, which prohibit "unfair or deceptive acts or practices," a loosely defined legal standard.

As AI technology advances rapidly, state attorneys general are paying close attention to the emerging legal issues surrounding its use in their states. Federal and state governments, meanwhile, are beginning to enact regulations, though laws and regulations may struggle to keep pace with the technology.

Here are three things businesses need to know about the use of AI in Massachusetts:

1. Attorney General Campbell’s advisory opinion seeks to provide guidance to developers, suppliers and users of AI and algorithmic decision-making systems.

The guidance addresses their obligations under:

  • The Massachusetts Consumer Protection Act and related regulations.
  • The Massachusetts Anti-Discrimination Law.
  • The Data Security Law and implementing regulations.

2. Massachusetts law characterizes a variety of practices as “unfair or deceptive.”

AI developers, suppliers and users could violate state law by:

  • Falsely advertising the quality, value or usability of AI systems.
    • An example of false advertising is where a supplier claims that an AI system has functionality that it does not possess.
  • Supplying an AI system that is defective, unusable or impractical for the purpose advertised.
    • Suppliers have an obligation to ensure that an AI system performs as intended. A 1999 court ruling held that the failure to meet fundamental performance standards is particularly “unfair” or “deceptive” where “harmful or unexpected risks or dangers inherent in the product, or latent performance inadequacies, cannot be detected by the average user or cannot be avoided by adequate disclosures or warnings.”
  • Misrepresenting the reliability, manner of performance, safety or condition of an AI system.
    • Examples of misrepresentation include claims that an AI system is fully automated when its functions are performed in whole or in part by humans. They also include untested and unverified claims that an AI system performs functions as accurately as a human, is more capable than a human at performing a given function, is superior to non-AI products, is free from bias, is not susceptible to malicious use by a bad actor, or is compliant with state and federal law.
  • Offering for sale or use an AI system in breach of warranty, in that the system is not fit for the ordinary purposes for which such systems are used, or is unfit for a specific purpose that the supplier knows of at the time of sale.
    • For example, offering for sale or use an AI system that is not robust enough to perform appropriately in a real-world environment as compared to a testing environment is unfair and deceptive.
  • Misrepresenting audio or video content of a person for the purpose of deceiving another into engaging in a business transaction or supplying personal information as if to a trusted business partner, as in the case of deepfakes, voice cloning or chatbots used to commit fraud.
  • Failing to comply with Massachusetts “statutes, rules, regulations or laws, meant for the protection of the public’s health, safety or welfare.”

3. A variety of federal and state laws apply to businesses that use AI in Massachusetts.

  • AI suppliers could potentially violate state law if an AI system is sold or used in a manner that violates federal consumer protection statutes, including the Federal Trade Commission Act.
    • The Federal Trade Commission has taken the position that deceptive or misleading claims about the capabilities of an AI system, and the sale or use of AI systems that cause harm to consumers, violate the Federal Trade Commission Act.
  • AI systems must also comply with the Commonwealth’s Standards for the Protection of Personal Information of Residents of the Commonwealth.
    • This means AI developers, suppliers and users must take steps to safeguard personal information used by those systems – and that they must comply with breach notification requirements.
  • The Commonwealth’s Anti-Discrimination Law prohibits developers, suppliers and users of AI systems from deploying technology that discriminates against residents based on a legally protected characteristic.
  • The advisory provides that the attorney general is empowered to enforce federal consumer protection, anti-discrimination and other laws applicable to AI.
    • For example, AI models are subject to the adverse action notification requirements of the federal Equal Credit Opportunity Act, the primary federal law prohibiting discrimination in credit.
    • This means covered creditors must provide consumers with accurate and specific reasons why their loan applications were denied, including when the creditor uses AI models.

What’s Next in AI Guidance and Regulation?

As AI technology rapidly changes, so too will the regulatory landscape. While Congress and state legislatures slowly enact AI regulations, state AGs already have broad statutory authority to police the use of AI under their states' consumer protection statutes.

The Massachusetts AG is the first to provide comprehensive guidance on the potential legal pitfalls for businesses using AI. Expect other state AGs to follow suit with similar guidance. If you have questions about this update, please reach out to the authors (Andy Cook, Rob McKenna, Brian Moran, Adam Braun) or another member of Orrick's AI & Machine Learning practice.