Elon Musk has come out in support of California bill SB 1047, which would introduce new safety and accountability mechanisms for large AI systems. “This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” Musk writes on X. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”

Musk joins Anthropic CEO Dario Amodei in supporting the bill. Amodei says it is necessary to protect the public and increase transparency in the industry. Other major players oppose it, including Google, Meta, OpenAI, and some members of Congress, among them Nancy Pelosi. They say it could stifle innovation and slow progress.

SB 1047 heads to a final vote in the state Assembly this week. If it passes, it would advance to California Governor Gavin Newsom to sign or veto by Sept. 30. California state lawmakers introduced 65 bills related to AI this legislative season, touching on topics ranging from algorithmic bias to protecting intellectual property, Reuters reports. Many are already dead, but SB 1047 is still standing.

The bill would require companies spending at least $100 million to develop powerful AI models to test them before release, and it introduces a kill switch to shut them down if they spiral out of control after launch. These are “reasonable and implementable” measures, says California State Senator Scott Wiener, who introduced the bill. He says OpenAI has already committed to performing such safety evaluations. Meta’s chief AI scientist Yann LeCun says the bill would have “apocalyptic consequences on the AI ecosystem” because it would regulate the research and development process.
“The sad thing is that the regulation of AI R&D is predicated on the illusion of ‘existential risks’ pushed by a handful of delusional think-tanks, and dismissed as nonsense (or at least widely premature) by the vast majority of researchers and engineers in academia, startups, larger companies, and investment firms,” LeCun says.
Though Musk supports SB 1047, he has a somewhat mixed track record on AI safety and accountability. On the one hand, he joined Apple co-founder Steve Wozniak in signing a 2023 letter calling for an industry-wide pause on AI development until the societal risks are better understood. He has also sued OpenAI twice this year. On the other hand, his Grok chatbot recently drew criticism for generating false but realistic images of Donald Trump and Vice President Kamala Harris, which could spread misinformation ahead of the US presidential election. Tesla’s AI-driven self-driving systems have also been embroiled in multiple lawsuits over fatal crashes, though the EV maker denies its technology contributed to them.
About Emily Dreibelbis
Senior Reporter
Prior to starting at PCMag, I worked in Big Tech on the West Coast for six years. From that time, I got an up-close view of how software engineering teams work, how good products are launched, and the way business strategies shift over time. After I’d had my fill, I changed course and enrolled in a master’s program for journalism at Northwestern University in Chicago. I’m now a reporter with a focus on electric vehicles and artificial intelligence.