Denver Mayor Mike Johnston says Denver is “very much open for business” for artificial intelligence companies. Gov. Jared Polis shares the same goal for Colorado.
But the state’s new artificial intelligence law could stop all that — at least according to Reid Hoffman, the co-founder of LinkedIn and one of the biggest players in tech.
The new state law will “prevent the future of software from being in Colorado, which doesn’t strike me as a particularly smart play,” Hoffman said on Thursday, referring to a bill that passed earlier this year.
Hoffman made the comments as he was interviewed on stage by Mayor Johnston at the DenAI Summit, which is meant to position the city as a hub for AI.
Describing the new law as an “amorphous, poorly defined big accountability stick,” Hoffman said it would “quell investment” in AI.
Colorado’s new law, SB24-205, requires companies to inform people when an AI system is being used, and, if someone thinks the technology has treated them unfairly, allows them to correct some of the input data or file a complaint. It won’t allow an individual to sue over AI use, but sets up a process to look into potential consequences for bad actors.
The law will also require developers to examine AI algorithms for signs of bias, and it created a task force to work on AI issues. Proponents have said it’s important to establish protections and regulations as the technology’s use explodes.
For his part, the mayor didn’t criticize the law. But he gave credit to “the community of business leaders here that rallied after that” to work on potential changes to the law.
Gov. Polis has also expressed concern about the new law stifling AI investment in Colorado, saying he wanted to see a national approach instead, though he ultimately signed the bill. The new law is the first of its kind in the nation.
“Whether (people) get insurance, or what the rate for their insurance is, or legal decisions or employment decisions, whether you get fired or hired, could be up to an AI algorithm,” said Democratic state Rep. Brianna Titone, one of the bill’s main sponsors, in an earlier interview with CPR News.
The rest of Hoffman and Johnston’s discussion covered everything from how governments can use AI to improve transportation, public safety and pothole fixing, to the importance of experimenting with new technology, and how AI and autonomous vehicles might get drunk people home safely.
It was a rare glimpse of the mayor interacting with Hoffman, a prime backer of Johnston who spent millions to support his election. It was also a taste of the mayor’s technology agenda.
The pair spoke before a crowd of hundreds at the sold-out summit. Organizers said the one-day event at the Colorado Convention Center was the nation’s first “city-led conference focused on utilizing AI technology to solve hard social problems.”
The city did not contribute any money for the conference, and speakers weren’t compensated.
Here’s what else we heard.
AI won’t “be inclusive on Day One”
Not everyone can afford AI technology, raising questions about whether its benefits will be equitably distributed. When the mayor asked about that, Hoffman cautioned people to not wait for perfection to begin using the tech.
It’s an “iterative process,” and it gets better with time, Hoffman said. “You’re not going to be able to be inclusive on Day One.”
Instead, governments and the public should be in constant dialogue with the private sector about what is and isn’t working, creating specific guardrails rather than blanket prohibitions, the tech executive argued.
Governments should “start playing with AI.”
Hoffman said that both governments and people should start with experiments. For example, in an AI-powered city, residents could get faster responses to complaints about potholes, crimes and other issues. (Of course, AI can’t actually fill the potholes yet.)
Mayor Johnston suggested police officers, who spend hours writing reports, could minimize that task if body cameras were connected to technology that pre-writes reports that officers can approve. That would save time and add objectivity to criminal reports, he suggested.
Hoffman cautioned the public not to just fret about how AI could challenge democracy, harm social ties and create a gap in equity. Instead, he advised: “Find the right people who want to create the future with you.”
Johnston talked about the importance of “taking big swings,” referencing his own effort to reduce homelessness as an example.
AI for traffic (and drunk people)
With more than 40,000 people dying in traffic accidents each year, automated vehicles, empowered by AI, could make the roads safer, Hoffman said.
“Drinking and driving goes from … an evil thing risking people’s lives to something you might do every day,” Hoffman said, referring to the idea that an autonomous car could get an inebriated person home safely.
Johnston said it was another good example of how governments could work with the tech sector.
“I think what we’re after is … pushing private sector innovators to think about ways in which you could tool build or innovate solutions that would have massive markets,” Johnston said.
Government may lose trust if new tech fails.
Johnston acknowledged that the public sector has a “different burden” when it takes risks in employing new technologies. “How do we communicate with the public to make them prepared for that?” he asked.
Hoffman’s suggestion: Tech providers should agree to take the blame for problems with public-private AI endeavors, and they should be compensated for it.
Ultimately, anybody who says they know where artificial intelligence will go in the next five years is either deluding themselves or deluding you, Hoffman said.
As it turns out, we still can’t predict the future.