8 minute read | January.26.2026
State Attorneys General (AGs) are expected to remain active in 2026, with an emphasis on emerging issues surrounding artificial intelligence, consumer protection and civil rights.
New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act, groundbreaking legislation regulating “frontier” AI models. Under the law, AI developers will be required to establish and publish safety frameworks for advanced AI systems and to report incidents of critical harm to the state within 72 hours of discovery. The measure also creates a new oversight office within the New York Department of Financial Services tasked with annual reporting on AI safety and transparency.
Additionally, the law grants the New York Attorney General authority to bring civil actions against frontier AI developers that fail to submit required reports or that make false statements, with penalties of up to $1 million for a first violation and up to $3 million for each subsequent violation.
A bipartisan coalition of 24 state Attorneys General sent a comment letter to the Federal Communications Commission (FCC) opposing an inquiry that could lay the groundwork for federal preemption of state AI regulations. In the letter, the coalition argued that the FCC lacks authority to override state laws addressing AI and that any such preemptive action would limit states’ ability to protect consumers, especially in areas like privacy. Specifically, the state AGs argued that the FCC’s proposal would impinge on core state responsibilities protected by the Tenth Amendment.
The state AGs further noted that states often serve as first responders to consumer complaints and emerging technology risks, highlighting the need for states to retain flexibility to regulate AI responsibly.
In November 2025, the Attorney General Alliance (AGA) launched a bipartisan Artificial Intelligence Task Force working in partnership with major AI developers such as OpenAI and Microsoft. Co-chaired by North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown, the task force aims to address emerging AI-related risks while supporting innovation. It will provide an ongoing forum for monitoring and responding to AI developments, identifying new AI challenges and developing baseline safeguards, particularly to protect children.
During the inaugural meeting on January 14, 2026, AG Jackson described AI as “the issue of our time,” highlighting both its significant benefits, such as medical advancements, and its risks, including deepfakes and robocalls. He emphasized the task force’s focus on consumer protection, business innovation and public safety.
AG Brown echoed the need to balance innovation with risk mitigation, noting that while AI presents high rewards, it also carries high risks. He stressed that state AGs often have the authority to act more quickly than legislatures and, in some cases, may need to pursue litigation to address harms.
Tania Maestas, Deputy Executive and General Counsel of the AGA, facilitated a discussion among participating states, which raised shared concerns about child safety, deepfake pornography, privacy, security and confidentiality. At the same time, states recognized AI’s potential to positively transform the workforce and create business opportunities, provided it is deployed responsibly and without causing harm.
During a media question-and-answer session, AGs addressed federal efforts to preempt state action on AI, asserting that such moves raise constitutional concerns and undermine states’ authority. Both co-chairs identified specific priorities, including deepfake pornography, chatbots’ impact on children, and accountability of large technology companies for harmful content. They also discussed AI adoption within their offices, noting growing use and evolving policies, and expressed serious concern about candidate deepfakes and their potential impact on elections.
The task force plans to convene future meetings to track emerging AI issues and coordinate timely, state-level responses.
Texas Attorney General Ken Paxton secured a $1.25 million settlement with Hyatt Hotels resolving allegations that Hyatt violated Texas consumer protection laws by advertising room rates without clearly disclosing unavoidable mandatory fees. As part of the settlement, Hyatt must publicly disclose all fees upfront so that consumers can better compare pricing. Paxton emphasized that misleading pricing practices harm Texas travelers and unfairly advantage companies that hide fees.
New York Governor Hochul signed into law Attorney General Letitia James’ Fostering Affordability and Integrity through Reasonable (FAIR) Business Practices Act, the first major update to the state’s consumer protection law in 45 years.
The new law amends the state’s existing consumer protection statute, which previously covered only “deceptive” acts and practices, to also prohibit “unfair” and “abusive” acts and practices.
The new law defines “unfair” as follows:
An act or practice is unfair when it causes or is likely to cause substantial injury which is not reasonably avoidable and is not outweighed by countervailing benefits to consumers or to competition.
The statute defines an act or practice as “abusive” when:
It materially interferes with the ability of a person to understand a term or condition of a product or service, or takes unreasonable advantage of a person’s lack of understanding of the material risks, costs or conditions of the product or service; a person’s inability to protect their interests in selecting or using a product or service; or a person’s reasonable reliance on a covered entity to act in the person’s interests.
The law clarifies that only the Attorney General may bring claims for unfair or abusive conduct, with private actions remaining limited to deceptive practices. The FAIR Business Practices Act also eliminates constraints imposed by state courts that previously limited the Attorney General’s enforcement power to acts that are “consumer-oriented” or that have an impact on the public at large. The new law stipulates that the Attorney General has a responsibility to protect businesses and nonprofits, as well as individual consumers.
Attorney General James said the law will strengthen enforcement against hidden fees, unfair billing and abusive practices by lenders, servicers and health care companies, particularly as federal consumer protection enforcement faces uncertainty. The change signals increased scrutiny for businesses operating in New York.
As the Trump administration moves to roll back regulations, many state AGs are looking to fill what they perceive as a regulatory void.
A coalition of 21 Democratic state AGs and the District of Columbia filed a complaint challenging the Trump administration’s defunding of the Consumer Financial Protection Bureau (CFPB). The complaint alleges that the CFPB’s shutdown is unlawful and could cause “irreparable harm” to consumer financial protections. The state AGs contend that CFPB defunding undermines enforcement of fair lending, transparency in financial products, and protections against unfair practices.
The complaint alleges the actions to defund the CFPB violate the Administrative Procedure Act, claiming that no constitutional provision or statute authorizes the CFPB to refrain from fulfilling its statutory duties.
In a separate lawsuit brought by the National Treasury Employees Union and other private parties, a federal district court on December 30, 2025, issued an opinion and order determining that it was unlawful for the Trump administration to allow funding for the CFPB to lapse completely while a final outcome in the case was pending. As a result of the decision, CFPB Acting Director Russell Vought requested $145 million from the Federal Reserve to keep the agency funded in the interim.
The New Jersey Division on Civil Rights, under Attorney General Matthew Platkin, adopted new rules interpreting New Jersey’s Law Against Discrimination (LAD).
The regulations codify and clarify New Jersey’s longstanding approach to disparate impact discrimination under LAD, formally establishing how seemingly neutral practices or policies are assessed when they disproportionately harm members of protected classes—across employment, housing, public accommodations, financial lending and contracting contexts.
Under the rules, a disparate impact violation occurs when a facially neutral practice results in a significantly adverse effect on a protected group unless the covered entity demonstrates that the practice is necessary to achieve a substantial, legitimate, nondiscriminatory interest and that there is no less discriminatory alternative. The framework generally follows a burden-shifting approach rooted in existing case law, with the regulations providing clarity on legal standards, burdens of proof, and illustrative examples to help both individuals and regulated entities understand compliance obligations.
Importantly, the rules explicitly recognize artificial intelligence and automated decision-making tools as potential sources of disparate impact liability, signaling heightened enforcement attention on AI-driven hiring and screening systems that inadvertently produce biased outcomes.
Other practices that may trigger disparate impact concerns include policies related to criminal history, credit or income standards, language or citizenship requirements, and workplace dress or physical standards that disproportionately affect protected groups. Although the regulations do not create new liability beyond existing law, they solidify New Jersey’s civil rights protections at a time when federal disparate impact enforcement has been scaled back, ensuring that discriminatory effects, regardless of intent, remain actionable under state law.
The newly elected President of the National Association of Attorneys General, Connecticut AG William Tong, announced his presidential initiative, “Driving Down Costs for American Families,” signaling that affordability is another key issue for state AGs.
In addition, the Democratic Attorneys General Association has formed a Consumer Protection and Affordability Working Group, led by Rohit Chopra, former director of the Consumer Financial Protection Bureau and former commissioner on the Federal Trade Commission.