Over the last year, the rapid advancement of artificial intelligence has captivated the world’s attention, sparking both excitement and concern. As AI systems like OpenAI’s ChatGPT become increasingly sophisticated, policymakers are grappling with how to ensure that this transformative technology is developed and deployed responsibly. In California, State Senator Scott Wiener has introduced legislation aimed at mitigating the potential existential risks posed by AI.
The proposed law, known as SB 1047, has already passed the California State Senate and is making its way through the State Assembly, where it is expected to undergo further amendments. The bill was informed by input from the Center for AI Safety, an organization dedicated to reducing societal-scale risks from artificial intelligence.
Senator Wiener should be commended for his genuine concern for the welfare of humanity. However, while the intentions behind SB 1047 may be laudable, the legislation itself is unlikely to have any meaningful impact on existential risks, that is, risks that could lead to the extinction of the human race. To understand why, it is important to examine two main categories of existential risk associated with AI: unintended consequences and intentional misuse.
The first category, unintended consequences, refers to scenarios in which AI systems become so advanced and autonomous that they begin to operate outside of human control. This could occur if an AI pursues its objectives in ways that conflict with basic human values or if it experiences a catastrophic failure that leads to widespread harm. The second category, intentional misuse, involves bad actors, such as terrorist groups or rogue nations, deliberately weaponizing AI to cause harm to humanity.
SB 1047 appears to be focused primarily on the first category, unintended consequences, yet the second, intentional misuse, may be the bigger concern.
The legislation mandates that developers of advanced AI systems known as “covered models” conduct safety assessments to identify any “hazardous capabilities.” Required safety measures include the ability to shut down an AI system that behaves unsafely and to report safety incidents to a newly established regulatory body. When a model works as intended, the developer certifies, through a “positive safety determination,” that the technology is safe; this determination must then be renewed annually. The California Attorney General can bring civil actions for violations, with penalties of up to 30% of model development costs.
While these provisions may seem sensible on the surface, they are unlikely to be very effective in practice. Unintended consequences are, by their nature, difficult to predict and prevent, and the science of AI alignment is not well developed. Even with the most rigorous safety assessments and precautions in place, an AI system may still behave in unexpected and potentially harmful ways.
The shutdown-switch and incident-reporting requirements are the most sensible parts of SB 1047, but even these are not foolproof. A shutdown switch on an AI system running the electricity grid, for example, creates a vulnerability that hackers could exploit. The solution reduces some risks while creating new ones.
Furthermore, SB 1047 does almost nothing to address the risk of intentional misuse by bad actors. Those who seek to use AI for malicious purposes are unlikely to comply with the legislation’s requirements, as they are already operating outside the bounds of the law. Many of these bad actors may also be located outside of California’s jurisdiction—likely in other countries—making it difficult to enforce the legislation’s provisions where it matters most.
While SB 1047 may have limited impact on existential risks, it could have unintended consequences of its own. By imposing burdensome regulations on AI development, the legislation risks slowing innovation and putting California’s technology companies at a competitive disadvantage in the global race to develop advanced AI. This is particularly concerning given that these companies may be our best hope for building superintelligent AI that is aligned with human values and interests.
The precautionary principle, which advocates banning potentially harmful technologies until they can be demonstrated to be safe, can be an appropriate response where existential threats are clear and imminent. For example, it makes sense to restrict public access to fissile materials like enriched uranium or to prohibit the everyday sale of grenade launchers. The problem is that existential threats from AI are not obviously imminent, and the US is locked in a race with China. Moreover, the likely benefits of AI, from revolutionizing research to curing some of the world’s most intractable diseases, are so substantial that withholding them from the public is not a realistic option.
In all likelihood, the most significant threats posed by AI will require preemptive measures from military and law enforcement, similar to how America combats terrorism. Protecting against existential threats can be considered a core function of the government, particularly in the realm of national security. This may require increased investment in AI safety research within military and intelligence agencies, as well as the development of specialized forces dedicated to monitoring and countering potential AI threats. International cooperation and information sharing among allies will also be important in identifying and mitigating risks that transcend national borders.
Unfortunately, existential AI risk will be an ongoing challenge, not a problem that can be solved through one-off legislation. Most solutions will have to come at the federal level, where national security issues are usually addressed. Nor can the private sector alone be expected to shoulder the bulk of the responsibility for national security, as SB 1047 appears to suggest. While Senator Wiener deserves credit for his focus on the potential existential threats posed by AI, SB 1047 is unlikely to be effective in its current form.