New York Steps Up with First-of-Its-Kind AI Safety Legislation
TLDR:
New York becomes the first state to pass a comprehensive AI safety bill targeting frontier AI models.
The RAISE Act sets mandatory transparency and reporting standards for major AI developers.
Lawmakers say the bill balances safety with innovation, shielding smaller startups from undue regulation.
Industry backlash has been swift, but officials argue New York’s economic weight makes compliance unavoidable.
New York State lawmakers have passed the RAISE Act, a groundbreaking bill aimed at regulating the safety of advanced AI systems.
The legislation, passed on Thursday, targets frontier AI models developed by major companies such as OpenAI, Google, and Anthropic. These models, trained with massive amounts of computing power and capable of far-reaching consequences, have raised serious safety concerns among researchers and policymakers alike.
The RAISE Act, short for the Responsible AI Safety and Education Act, seeks to preempt catastrophic risks by requiring developers of powerful AI systems to submit detailed safety and security assessments. These include reports on potential misuse, technical vulnerabilities, and incidents involving unsafe behavior or data breaches. Companies found to be noncompliant could face civil penalties of up to $30 million.
Inside baseball policy thread: Last night, NY passed the RAISE act, which would establish some transparency requirements for frontier models. We @anthropicai haven’t taken a position on this bill. But I thought it’d be helpful to give some more context:
— Jack Clark (@jackclarkSF) June 13, 2025
Guardrails Without Crushing Innovation
New York State Senator Andrew Gounardes, one of the bill’s co-sponsors, said the urgency to legislate AI safety has never been greater.
“The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” he explained, citing warnings from prominent AI experts such as Geoffrey Hinton and Yoshua Bengio, who have supported the measure.
What makes the RAISE Act distinct from previous efforts, such as California’s vetoed SB 1047, is its targeted approach. The bill is crafted to apply only to companies whose AI models have been trained with more than $100 million in computing resources and are made available to residents of New York. Gounardes emphasized that smaller startups and academic institutions are not the focus, pushing back on criticism that the bill stifles innovation.
Backlash from Silicon Valley
Despite its narrow scope, the RAISE Act has faced stiff resistance from tech investors and companies. Anjney Midha, a general partner at Andreessen Horowitz, dismissed the legislation as “yet another stupid, stupid state level AI bill.” Critics argue that imposing state-level compliance burdens could prompt major firms to withhold their most advanced AI products from New York residents altogether.
However, Assemblymember Alex Bores, another co-sponsor, rejected that possibility, saying the economic incentive to stay in New York far outweighs the hassle.
“I don’t want to underestimate the political pettiness that might happen, but I am very confident that there is no economic reason for [AI companies] to not make their models available in New York,” he said.
Setting a National Precedent?
If signed into law by Governor Kathy Hochul, the RAISE Act would establish the first legally binding safety and transparency requirements in the U.S. tailored specifically to frontier AI systems. That would place New York ahead of federal efforts and other states in the race to set enforceable standards for AI deployment.
At the same time, New York is exploring parallel legislation that addresses algorithmic discrimination and consumer protection. The proposed NY AI Act and the Protection Act aim to regulate the use of AI in consequential decisions such as employment and credit, requiring audits, opt-out rights, and human oversight.
The post New York Steps Up with First-of-Its-Kind AI Safety Legislation appeared first on CoinCentral.
Filed under: News - @ June 15, 2025 5:17 am