AI Safety Laws: Anticipating Future Risks - Fei-Fei Li

New Report Calls for Proactive AI Regulation in California
A recently published report from a California-based policy group, co-led by prominent AI researcher Fei-Fei Li, argues that legislators should account for AI risks that remain hypothetical today when formulating new regulations.
Background of the Report
This 41-page interim report originates from the Joint California Policy Working Group on AI Frontier Models. The group was established by Governor Gavin Newsom after he vetoed the previously proposed California AI safety bill, SB 1047.
Although Governor Newsom concluded that SB 1047 was not adequately focused, he acknowledged the need for a more thorough assessment of AI risks to inform future legislative decisions.
Key Arguments and Recommendations
The report, authored by Li alongside Jennifer Chayes (Dean of the UC Berkeley College of Computing, Data Science, and Society) and Mariano-Florentino Cuéllar (President of the Carnegie Endowment for International Peace), advocates for legislation that would mandate greater transparency from leading AI labs such as OpenAI.
Stakeholders from across the AI policy spectrum reviewed the report before its release, including strong proponents of AI safety, such as Turing Award recipient Yoshua Bengio, as well as critics of SB 1047, such as Databricks co-founder Ion Stoica.
Transparency and Accountability Measures
The report suggests that laws should require AI model developers to make public their safety testing procedures, data sourcing methods, and security protocols.
Furthermore, it calls for enhanced standards for independent evaluations of these metrics and corporate policies, alongside expanded protections for whistleblowers employed by or contracted with AI companies.
Addressing Uncertain Risks
The authors acknowledge that the evidence regarding AI’s potential for misuse – such as facilitating cyberattacks or the creation of biological weapons – remains inconclusive.
However, they emphasize that AI policy must not solely focus on present dangers, but also anticipate potential future consequences that could arise without adequate safeguards.
The report draws a parallel to nuclear weapons: one does not need to witness a detonation to predict reliably that such a weapon could cause extensive harm. By the same logic, the authors argue, the costs of inaction on frontier AI would be extremely high if worst-case scenarios were to materialize.
A "Trust But Verify" Approach
The report proposes a two-pronged "trust but verify" strategy to increase transparency into AI model development.
This involves providing channels for developers and employees to report concerns, such as internal safety testing results, while simultaneously requiring independent verification of testing claims.
Reception and Future Outlook
The final version of the report is not due until June 2025, but the interim release has already drawn positive feedback from experts on both sides of the AI policy debate.
Dean Ball, a research fellow at George Mason University and a critic of SB 1047, wrote in a post on X that the report is a promising step for California's AI safety regulation.
California state senator Scott Wiener, the original sponsor of SB 1047, also welcomed the report, stating that it builds upon the important discussions regarding AI governance initiated in the legislature during 2024.
The report’s recommendations align with elements of both SB 1047 and Wiener’s subsequent bill, SB 53, particularly the requirement that developers report safety test results, and it appears to mark a significant step forward for AI safety advocates.
