As of April 2026, a growing number of states have passed laws regulating chatbots — particularly conversational or so-called “companion” chatbots designed to simulate platonic, intimate or romantic relationships with users — in response to concerns about the sufficiency of disclosures, warnings and protocols addressing potential mental health and other harms from certain interactions.
Companies that have in-licensed chatbots for customer, user, patient or employee interactions may have limited visibility into, and control over, the full capabilities and design of the product, adding a further layer of risk. These new state companion chatbot laws are transforming chatbot deployment from a UX decision into a source of regulatory and litigation risk, with statutory damages, potential class actions and heightened scrutiny from state attorneys general.
State Chatbot Laws Now in Effect or Newly Enacted
Recent adoptions focus on transparency and safety protocols — including for youth safety:
- California SB 243 (Companion Chatbot Law): Requires operators to disclose non-human status, implement mental health crisis protocols and provide special protections for minors (e.g., blocking sexual content, enforcing periodic breaks). Effective as of January 1, 2026.
- Colorado AI Act (SB 24-205): Mandates “reasonable care” to prevent algorithmic discrimination in high-risk AI systems; major provisions effective June 30, 2026.
- Idaho (SB 1297): Passed in early April, this act closely follows Nebraska’s model for conversational AI safety and transparency. Effective July 1, 2027.
- Nebraska (LB 525): Enacted the Conversational AI Safety Act on April 14, 2026, introducing comprehensive safety and transparency obligations for conversational AI systems. Effective July 1, 2027.
- Oregon (SB 1546): Signed in March 2026, this law regulates “companion chatbots,” requiring disclosures of AI involvement, suicidal ideation detection with crisis referral interruptions, annual filings, and additional safeguards for use by minors. Notably, SB 1546 establishes a private right of action with statutory damages of $1,000 per violation. Effective January 1, 2027.
- Tennessee (SB 1580): Prohibits AI systems from presenting themselves as licensed mental health professionals, a growing area of concern for policymakers. Effective July 1, 2026.
- Washington (HB 2225 / SB 1546): The Chatbot Disclosure Act (March 2026) requires mandatory non-human disclosures and minor safety protocols for companion chatbots. Effective January 1, 2027.
- New York’s AI Companion Models law (General Business Law Article 47): Mandates that AI companion operators implement safety protocols to detect and address user suicidal ideation or self-harm. It also requires clear, regular disclosures that users are interacting with AI, not a human, and mandates referrals to crisis services upon identifying self-harm risks. Effective as of November 5, 2025.
5 Key Regulatory Trends for Chatbot Operators
Private Rights of Action
Unlike earlier broad AI laws that primarily relied on state attorney general enforcement, several state chatbot laws now grant individuals the right to sue providers directly for statutory damages (e.g., Oregon SB 1546, Washington HB 2225/SB 1546). This trend increases litigation risk and may drive more rigorous compliance practices.
Transparency: Non-Human Disclosure
Nearly all new state laws require chatbots to make clear, up-front disclosures that users are interacting with an AI system. These requirements are especially strict when chatbots interact with minors or in contexts where confusion with a human is likely.
Minor Safety Protocols
Laws increasingly require technical safeguards to detect and respond to suicidal ideation or self-harm expressed by users. These protocols typically include referrals to crisis hotlines or escalation to human moderators; additional measures include content filtering (e.g., blocking sexual content) and mandatory interaction breaks for minors.
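To make the shape of such a protocol concrete, the sketch below shows how a safety check might run before a chatbot is allowed to reply. This is a minimal illustration only: the keyword list stands in for a production intent classifier, and the referral text and escalation fields are illustrative assumptions, not statutory language.

```python
# Minimal sketch of a crisis-detection protocol for a companion chatbot.
# The keyword list stands in for a production intent classifier, and the
# referral text and escalation fields are illustrative, not statutory language.

SELF_HARM_SIGNALS = {"hurt myself", "end my life", "kill myself", "suicide"}

def detect_self_harm(message: str) -> bool:
    """Crude stand-in for real-time intent classification."""
    text = message.lower()
    return any(signal in text for signal in SELF_HARM_SIGNALS)

def handle_message(message: str) -> dict:
    """Run the safety protocol before the model is allowed to reply."""
    if detect_self_harm(message):
        return {
            "interrupt": True,   # suppress the normal chatbot response
            "referral": "If you are in crisis, call or text 988 (U.S. Suicide & Crisis Lifeline).",
            "escalate_to_human": True,  # queue the conversation for human review
        }
    return {"interrupt": False, "escalate_to_human": False}
```

In practice, a keyword filter like this would be replaced by a trained classifier, but the control flow — detect, interrupt, refer, escalate — mirrors the sequence the statutes describe.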
Professional Licensure Restrictions
A growing number of statutes — alongside federal efforts like the CHATBOT Act — prohibit chatbots from impersonating licensed professionals such as doctors, lawyers or mental health providers. This provision directly addresses risks of consumer deception and unlicensed practice.
Disclosure of Data Sources
Recent California laws address disclosure of training data sources and provenance labeling for AI-generated outputs, raising potential issues for model developers and anyone deploying models in systems that use retrieval-augmented generation (RAG).
What Should Companies Do?
- Periodically review AI-enabled tools against applicable state definitions to identify any regulatory risk.
- Update user interfaces to ensure clear non-human disclosures.
- Review and test minor safety protocols (e.g., crisis response features that may require real-time intent classification) with your internal development team or a third-party vendor.
- Audit chatbot content and scripts to ensure adequacy of disclosure language and to avoid any suggestion of professional licensure unless appropriately authorized.
- Review data practices and labeling in light of new transparency obligations.
- Review vendor agreements to bring new compliance obligations within scope of work, as well as to review allocation of liability and indemnification provisions.
- Review insurance coverage with respect to chatbot-related claims.
Bottom Line:
With chatbot law rapidly evolving at the state level — and significant divergence in specific requirements — it is critical for organizations to stay abreast of new obligations around transparency, minor protection, professional impersonation and user redress. Proactive compliance is essential to minimize enforcement and litigation risks in this fast-changing environment.
For more information on the chatbot laws, contact Orrick’s team: Meg Hennessey and Caitlin Burke.