All the news about SB 1047, California’s bid to govern AI


California is known for taking on regulatory issues like data privacy and social media content moderation, and its latest target is AI. The state’s legislature recently passed SB 1047, one of the US’s first and most significant frameworks for governing artificial intelligence systems. The bill contains sweeping AI safety requirements aimed at the potentially existential risks of “foundation” AI models trained on vast swaths of human-made and synthetic data.

SB 1047 has proven controversial, drawing criticism from the likes of Mozilla (which expressed concern it would harm the open-source community); OpenAI (which warned it could hamper the AI industry’s growth); and Rep. Nancy Pelosi (D-CA), who called it “well-intentioned but ill informed.” But particularly after an amendment that softened some provisions, it garnered support from other parties. Anthropic concluded that the bill’s “benefits likely outweigh its costs,” while former Google AI lead Geoffrey Hinton called it “a sensible approach” for balancing risks and advancement of the technology.

Governor Gavin Newsom hasn’t indicated whether he will sign SB 1047, so the bill’s future is hazy. But the biggest foundation model companies are based in California, and its passage would affect them all.

  • Adi Robertson

    SB 1047 has passed the California Senate.

    The Senate was widely expected to pass the bill, which has now officially cleared every hurdle except a final signature from Governor Gavin Newsom. Newsom has until the end of September to make his call.


  • Wes Davis

    California legislature passes sweeping AI safety bill


    Illustration by Cath Virginia / The Verge | Photos from Getty Images

    The California State Assembly and Senate have passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), one of the first significant regulations of artificial intelligence in the US.

    The bill, which has been a flashpoint for debate in Silicon Valley and beyond, would obligate AI companies operating in California to implement a number of precautions before they train a sophisticated foundation model. Those include making it possible to quickly and fully shut the model down, ensuring the model is protected against “unsafe post-training modifications,” and maintaining a testing procedure to evaluate whether a model or its derivatives are especially at risk of “causing or enabling a critical harm.”


  • Emilia David

    California senator files bill prohibiting agencies from working with unethical AI companies


    Illustration: Alex Castro / The Verge

    A second California state senator has introduced bills meant to regulate AI systems, particularly those used by state agencies.

    Senator Steve Padilla, a Democrat, introduced Senate Bills 892 and 893, which would establish a public AI resource and create a “safe and ethical framework” around AI for the state. Senate Bill 892 would require California’s Department of Technology to develop safety, privacy, and non-discrimination standards for services using AI. It would also prohibit the state of California from contracting for any AI services “unless the provider of the services meets the established standards.”


  • Emilia David

    California lawmaker proposes regulation of AI models


    Illustration by Alex Castro / The Verge

    A California lawmaker will file a bill seeking to make generative AI models more transparent and start a discussion in the state on how to regulate the technology.

    Time reports that California Senator Scott Wiener (D) has drafted a bill that would require “frontier” model systems, usually classified as large language models, to meet transparency standards once they exceed a certain threshold of computing power. Wiener’s bill would also propose security measures so AI systems don’t “fall into the hands of foreign states” and would seek to establish a state research center on AI outside of Big Tech.

