AI legislative updates in Maine and New York

As legislative sessions wind down across the United States, many states are advancing AI-related bills. We discuss two noteworthy updates below.

Maine Enacts AI Chatbot Disclosure Law

On June 12, 2025, Maine Governor Janet Mills signed into law “An Act to Ensure Transparency in Consumer Transactions Involving Artificial Intelligence,” which will impose transparency requirements on the use of artificial intelligence (AI) chatbots in trade or commerce. 

The Act covers anyone who uses AI chatbots in trade or commerce, and AI chatbots are defined as software applications, web interfaces, or computer programs that simulate human-like conversation and interaction through textual or aural communications. 

The Act prohibits using an AI chatbot or any other computer technology to engage in trade or commerce with a consumer in a manner that may mislead or deceive a reasonable consumer into believing they are engaging with a human being, unless the consumer is notified in a clear and conspicuous manner that they are not engaging with a human being. 

The Act will be enforced by the attorney general under the Maine Unfair Trade Practice Act. 

New York Legislature Passes Responsible AI Safety and Education (RAISE) Act

On June 12, 2025, the New York State legislature approved the “Responsible AI Safety and Education (RAISE) Act” which, if signed by Governor Kathy Hochul, would impose transparency requirements on large developers of frontier AI models. 

The Act applies to large developers of frontier models, meaning persons who have trained at least one frontier model and spent over $100 million in compute costs training frontier models. The Act defines a frontier model as an AI model trained using greater than 10^26 computational operations, the compute cost of which exceeds $100 million, or an AI model produced by applying knowledge distillation to a frontier model, provided the compute cost exceeds $5 million. 

The Act would impose several “transparency requirements” on frontier model training and use: 

  • Before deploying frontier models, large developers must (1) implement a written safety and security protocol as defined by the Act; (2) retain an unredacted copy of the protocol for as long as the model is deployed and five years thereafter; (3) conspicuously publish the protocol with appropriate redactions and transmit a copy to the attorney general and the division of homeland security and emergency services, who shall have access to the unredacted protocol only to the extent required by law; (4) record and retain, and produce upon request, the specific tests and test results used in assessments of the frontier model, with sufficient detail to allow others to replicate the testing procedure; and (5) implement safeguards to prevent unreasonable risk of critical harm. 
  • Large developers may not deploy frontier models if doing so creates an unreasonable risk of “critical harm,” which the Act defines as the death or serious injury of 100 or more people, or at least $1 billion in damages, caused or materially enabled by a large developer’s use, storage, or release of a frontier model through either (1) the creation of a chemical, biological, radiological, or nuclear weapon; or (2) an AI model that acts with no meaningful human intervention in a manner that would constitute a crime if committed by a human. 
  • Large developers must annually review the safety and security protocol and must disclose safety incidents affecting the frontier model to the state attorney general and the division of homeland security and emergency services within 72 hours of forming a reasonable belief that an incident has occurred. 
  • Lastly, large developers may not knowingly make false or materially misleading statements or omissions regarding documents produced in response to the Act’s requirements.

If enacted, the Act will take effect 90 days after becoming law and will be enforced by the attorney general, who may bring actions to recover civil penalties of up to $10 million for a first violation and $30 million for subsequent violations, as well as injunctive or declaratory relief.


Authored by Mark Brennan, Ryan Thompson, Sophie Baum, Harsimar Dhanoa, Thomas Veitch, and Erin Mizraki.
