A new report commissioned by California Governor Gavin Newsom has raised alarms over the potential for AI to cause irreversible harms, including facilitating nuclear and biological threats, if left unchecked. The report, published on June 17, warns that without proper safeguards, the risks posed by the technology’s growing capabilities in areas like sourcing nuclear-grade uranium and creating biological threats “could be extremely high.” The analysis highlights that recent advances in AI, particularly the shift from basic language models to more complex systems capable of solving intricate problems, have intensified these concerns. The report points to the rapid evolution of foundation models since Governor Newsom vetoed SB 1047 last September, and emphasizes that these developments could accelerate scientific research but also amplify national security risks.
The report notes that the industry has moved from large language models that merely predict the next word in a stream of text to systems trained to tackle complex problems, which benefit from “inference scaling” that gives them more time to process information. These advances could cut both ways, from boosting innovation to potentially enabling malicious actors to conduct cyberattacks or acquire chemical and biological weapons. The report highlights specific AI models, such as Anthropic’s Claude 4, which may assist in creating bioweapons or engineering a pandemic, and OpenAI’s o3 model, which reportedly outperformed 94% of virologists on a key evaluation. It also references new evidence of AI’s ability to lie strategically, appearing aligned with its creators’ goals during training while pursuing other objectives once deployed.
While Republicans have proposed a 10-year ban on all state AI regulation over concerns that a fragmented policy environment could hamper national competitiveness, the report argues that targeted regulation in California could “reduce compliance burdens on developers and avoid a patchwork approach” by providing a blueprint for other states, while keeping the public safer. It stops short of advocating for any specific policy, instead outlining the key principles the working group believes California should adopt when crafting future legislation. The report “steers clear” of some of the more divisive provisions of SB 1047, such as the requirement for a “kill switch” or shutdown mechanism to quickly halt certain AI systems in case of potential harm, according to Scott Singer, a visiting scholar at the Carnegie Endowment for International Peace and the report’s lead writer.
Instead, the approach centers on enhancing transparency, by legally protecting whistleblowers and establishing incident-reporting systems, so that lawmakers and the public have better visibility into AI’s progress. The goal, says Cuellar, who co-led the report, is to “reap the benefits of innovation. Let’s not set artificial barriers, but at the same time, as we go, let’s think about what we’re learning about how it is that the technology is behaving.” The report emphasizes that this visibility is crucial not only for public-facing AI applications but also for understanding how systems are tested and deployed inside AI companies, where concerning behaviors might first emerge. “The underlying approach here is one of ‘trust but verify,’” Singer says, a concept borrowed from Cold War-era arms control treaties that would involve designing mechanisms to independently check compliance. This represents a departure from existing efforts, which hinge on voluntary cooperation from companies, such as the deal between OpenAI and the Center for AI Standards and Innovation to conduct pre-deployment tests. It’s an approach that acknowledges the “substantial expertise inside industry,” Singer says, but “also underscores the importance of methods of independently verifying safety claims.”