3 minute read | September.24.2025
In this month's update:

- AI Regulatory Landscape: Three Things to Know
- AI Activity We're Keeping an Eye On
- New U.S. State Laws
Last month, Illinois enacted a law that limits the use of AI tools operating without a licensed professional to “administrative” and “supplementary support” services, which expressly exclude therapeutic communications with clients. Read our analysis.
Six months ahead of the Colorado AI Act’s planned effective date, the governor announced a special legislative session to address “the fiscal and implementation impact” on consumers, businesses, and state and local government. While lawmakers could not agree on revisions to the law’s obligations during the August special session, they did delay its effective date to June 30, 2026. Here are five things to know about the law.
When a downstream actor modifies or fine-tunes a general-purpose AI model (GPAIM), who bears the burden of complying with the EU AI Act — the original provider, the modifier or both? The answer is not always clear, yet it has significant implications for risk allocation across the AI value chain. Read our analysis.
Want to view these laws by state or effective date? Our U.S. AI law tracker now features advanced search and filtering capabilities. Filter all 160+ state AI laws by state, effective date, or AI scope (healthcare, deepfakes, government use, etc.). Bookmark this page: All States
Below are the state AI laws that have been newly enacted or substantially updated:
AI Healthcare | Comprehensive AI
California’s legislature has passed a slate of AI bills. AB 1064, the “Leading Ethical AI Development (LEAD) for Kids Act,” would “prohibit making a companion chatbot available to children if it is foreseeably capable of specified harmful behaviors, including encouraging the child to engage in self harm.” Additionally, SB 243, the “Companion Chatbot Safety Act,” which we covered in last month’s update, has now passed. Meanwhile, AB 489 would give regulatory authorities the power to take enforcement action against AI systems that falsely claim to be licensed healthcare professionals. Finally, AB 316 would prevent defendants from avoiding liability by claiming their AI acted autonomously, while still allowing other standard legal defenses.
The Federal Trade Commission has launched an inquiry into how providers of consumer-facing AI chatbots measure the potential negative effects of “human-like” communication with children and teens. Per the press release, the agency has issued orders to seven companies so far.
Want to receive these updates in your inbox?