Sunday, December 22, 2024

Opinion: California must not fall for tech industry’s false choices about AI


If you’ve encountered headlines about California’s proposed legislation to establish safety guardrails around artificial intelligence, you might think this is a debate between Big Tech and “slow” government.

You might think this is a debate between those who would protect technological innovation and those who would regulate it away.

Or you might think this is a debate over whether AI development will stay in or leave California.

These arguments could not be more wrong.

Let me be clear: Senate Bill 1047 is about ensuring that the most powerful AI models — those with the potential to cause catastrophic harm — are developed responsibly. We’re talking about AI systems that could potentially create bioweapons, crash critical infrastructure or engineer damage on a societal scale.

These aren’t science fiction scenarios. They’re real possibilities that demand immediate attention.

In fact, the bill has been endorsed by many of the scientists who pioneered the field decades ago, including Yoshua Bengio and Geoffrey Hinton, the so-called "godfathers of AI."

Critics, particularly from Silicon Valley, argue that any regulation will drive innovation out of California. This argument is not just misleading, it's dangerous. The bill applies only to companies spending hundreds of millions of dollars on the most advanced AI models. For most startups and researchers, it's business as usual; they will feel no impact from this bill.

Fearmongering is nothing new. We’ve seen this kind of pushback many times before. But this time, major tech companies like Google and Meta have already made grand promises about AI safety on the global stage. Now that they are finally facing a bill that would codify those verbal commitments, they are showing their hand by lobbying against common sense safety requirements and crying wolf about startups leaving the state.

Some of the most vehement opposition comes from the “effective accelerationist” wing of Silicon Valley. These tech zealots dream of a world where AI develops unchecked, regardless of the consequences. They list concepts like sustainability, social responsibility and ethics as enemies to be vanquished. They feverishly dream of a world where technology replaces humans, ushering in “the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness.”

We’ve seen this kind of polarization play out before, albeit less intensely. Social media companies promised to connect the world, but their unregulated growth led to mental health crises, election interference and the erosion of privacy.

We can’t afford to repeat these mistakes with AI. The stakes are simply too high.

Californians understand this. Recent polling shows that 66% of voters don’t trust tech companies to prioritize AI safety on their own. Nearly 9 in 10 say it’s important for California to develop AI safety regulations, and 82% support the core provisions captured in SB 1047.

The public overwhelmingly supports policies like SB 1047; it is the loud voices of Big Tech that are attempting to drown out the opinions of most Californians.

As a young person, I often feel as though I get mischaracterized as anti-technology, as one of this century’s Luddites. I reject that completely. I’m a digital native who sees AI’s immense potential to solve global challenges. I am deeply optimistic about the future of technology. But I also understand the need for guardrails.
