California group urges lawmakers to anticipate AI risks
A new report from a California policy group, co-led by AI expert Fei-Fei Li, calls on lawmakers to account for future risks of artificial intelligence (AI) when writing regulations. The 41-page report was released on Tuesday by the Joint California Policy Working Group on Frontier AI Models, a group formed after Governor Gavin Newsom vetoed an earlier AI safety bill.

The report emphasizes transparency from frontier AI developers such as OpenAI, and suggests laws requiring these companies to publicly disclose how they conduct safety tests and handle data. The authors, who include leaders from respected institutions, warn that AI could pose risks that have not yet materialized, including potential security threats, and argue for stronger standards around evaluation processes and better protection for whistleblowers.

While acknowledging uncertainty about AI's specific dangers, the report holds that proactive measures are essential. The authors liken anticipating AI risks to predicting the destructive potential of nuclear weapons before an actual explosion had ever been witnessed. The proposed strategy pairs transparency with verification: AI developers would be given channels to report safety concerns, while their safety claims would be subject to independent verification.

The report, which is expected to be revised by June 2025, has received positive feedback from experts across the political spectrum. California State Senator Scott Wiener, who sponsored the vetoed bill, praised it as a significant step forward for AI safety discussions. Overall, the findings support the case for more comprehensive regulation of the rapidly evolving AI landscape.