⚖️ New York AI Regulation Bill
New York Passes Landmark AI Safety Legislation
New York State lawmakers have passed the RAISE Act, a pioneering bill aimed at regulating “frontier” AI models developed by major labs like OpenAI, Google, and Anthropic. If signed into law by Governor Kathy Hochul, the legislation would establish the first legally binding transparency standards for advanced AI systems in the U.S. The law requires companies that spend over $100 million in compute to train their models to submit safety and incident reports, with civil penalties of up to $30 million for noncompliance. Backed by AI luminaries like Geoffrey Hinton and Yoshua Bengio, the bill is a direct response to the accelerating risks posed by highly capable AI — especially in light of weak federal oversight.
A Middle Ground Between Innovation and Accountability
Unlike California’s SB 1047, which was vetoed amid concerns about overregulation, the RAISE Act was designed to narrowly focus on the largest and most powerful AI systems while avoiding burdensome restrictions on startups and researchers. It steers clear of more extreme measures like mandated “kill switches” and does not hold post-training developers liable for catastrophic harms. However, critics — including prominent VCs like Andreessen Horowitz — argue the law still risks driving innovation out of state. Supporters counter that New York’s massive economy and light regulatory touch make that outcome unlikely, and that minimal transparency is a small ask given the scale of the risk frontier AI could pose.
Most Startups Are Exempt, But Pay Attention Anyway
For most startups, the RAISE Act won’t impose direct obligations — the $100M compute threshold clearly targets only a handful of global firms. However, founders in the AI space should still keep a close eye on this development: it sets a precedent that other states and federal agencies may follow. If your startup relies on frontier models or integrates them into your product, you may soon face questions from investors, partners, or regulators about how you're managing AI risk and disclosure. This is also a signal that AI safety, once a fringe concern, is now part of the legislative mainstream. Building with responsible guardrails today could position your company as a trusted player tomorrow.
In addition to our newsletter, we offer 60+ free legal templates for companies in the UK, Canada, and the US, including employment contracts, investment agreements, and more.