If you’ve encountered headlines about California’s proposed legislation to establish safety guardrails around artificial intelligence, you might think this is a debate between Big Tech and “slow” government.
You might think this is a debate between those who would protect technological innovation and those who would regulate it away.
Or you might think this is a debate that will decide whether AI development stays in California or leaves.
These arguments could not be more wrong.
Let me be clear: Senate Bill 1047 is about ensuring that the most powerful AI models — those with the potential to cause catastrophic harm — are developed responsibly. We’re talking about AI systems that could potentially create bioweapons, crash critical infrastructure or engineer damage on a societal scale.
These aren’t science fiction scenarios. They’re real possibilities that demand immediate attention.
In fact, the bill has been endorsed by many of the scientists who pioneered the field decades ago, including Yoshua Bengio and Geoffrey Hinton, the so-called “godfathers of AI.”
Critics, particularly from Silicon Valley, argue that any regulation will drive innovation out of California. This argument is not just misleading — it’s dangerous. The bill applies only to companies spending hundreds of millions of dollars on the most advanced AI models. For most startups and researchers, it’s business as usual; they will feel no impact from this bill.
Fearmongering is nothing new. We’ve seen this kind of pushback many times before. But this time, major tech companies like Google and Meta have already made grand promises about AI safety on the global stage. Now that they are finally facing a bill that would codify those verbal commitments, they are showing their hand by lobbying against common sense safety requirements and crying wolf about startups leaving the state.
Some of the most vehement opposition comes from the “effective accelerationist” wing of Silicon Valley. These tech zealots dream of a world where AI develops unchecked, regardless of the consequences. They list concepts like sustainability, social responsibility and ethics as enemies to be vanquished. They feverishly dream of a world where technology replaces humans, ushering in “the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness.”
We’ve seen this kind of polarization play out before, albeit less intensely. Social media companies promised to connect the world, but their unregulated growth led to mental health crises, election interference and the erosion of privacy.
We can’t afford to repeat these mistakes with AI. The stakes are simply too high.
Californians understand this. Recent polling shows that 66% of voters don’t trust tech companies to prioritize AI safety on their own. Nearly 9 in 10 say it’s important for California to develop AI safety regulations, and 82% support the core provisions captured in SB 1047.
The public overwhelmingly supports policies like SB 1047 — it is just the loud voices of Big Tech attempting to drown out the opinions of most Californians.
As a young person, I often feel mischaracterized as anti-technology — as one of this century’s Luddites. I reject that completely. I’m a digital native who sees AI’s immense potential to solve global challenges, and I am deeply optimistic about the future of technology. But I also understand the need for guardrails.
My generation is the one that will inherit the world shaped by today’s decisions. We deserve a say in how this technology develops.
For lawmakers and ultimately Gov. Gavin Newsom, the choice isn’t between innovation and safety. It’s between a future where AI’s benefits are shared widely and one where its harms fall disproportionately on the shoulders of vulnerable groups and young people like me.
SB 1047 is a step toward the former, a future where California leads not just in technological innovation but in ethical innovation.
Sunny Gandhi is the vice president of political affairs at Encode Justice. Gandhi wrote this column for CalMatters.