
The Nobel committee just entered the AI chat


A version of this story appeared in CNN Business’ Nightcap newsletter. To get it in your inbox, sign up for free, here.


New York CNN —

This week, the Nobel prizes for physics and chemistry both featured work on the development and application of artificial intelligence. (Trendy choice? Absolutely. Serious choice? Also yes.)

Even the most cynical of AI doubters would have to tip their caps to these guys — yes, they’re all men — for their work, much of which is too complicated for me to get into in depth here.

But briefly:

  • The Nobel Prize in physics was shared by John Hopfield, a professor at Princeton, and Geoffrey Hinton, aka the “godfather” of AI, for work that was “fundamental in laying the cornerstones for what we experience today as artificial intelligence.”
  • The chemistry prize was awarded Wednesday to three scientists — Demis Hassabis and John M. Jumper of Google-owned AI lab DeepMind, and David Baker, a US biochemist — who used artificial intelligence to “crack the code” of almost all known proteins (which is legit cool, but I promise you’re better off learning about it from the scientist who did a PowerPoint on it at the event in Stockholm).

When asked by a reporter whether the committee took the AI connection into consideration when judging the nominees, one member of the chemistry committee basically brushed off the question and insisted the decisions were made purely on the science. (I mean, can you imagine? The Nobel committee letting politics or PR weigh on their decisions? *coughbarackobama* *cough coughhenrykissinger*)

What struck me with the back-to-back AI-related awards was how at least two of the recipients hold such fundamentally opposing views about the future applications of the technology they’ve unleashed.

In one corner you have Hinton, an AI pioneer who, over the past year and a half, has quit his job at Google and begun speaking out about the technology’s existential risks. Last year, he told CNN’s Jake Tapper that superhuman intelligence would eventually “figure out ways of manipulating people to do what it wants.”

And in the other corner you have Hassabis, one of the more prominent AI cheerleaders.

Hassabis has been the public face of Google’s AI efforts and describes himself as a “cautious optimist” about the prospect of AI that can outthink human beings. But his outlook is essentially the inverse of Hinton’s doomerism.

In an interview with the New York Times’ Hard Fork podcast in February, Hassabis invoked science fiction — but only the kind with benevolent bots — to describe an idyllic future where “there should be a huge plethora of amazing benefits that we just have to make sure are kind of equally distributed, you know, so everyone in society gets the benefit of that.”

(Which, by the way, is exactly the kind of Silicon Valley bubble answer you hear a lot from people who are already wealthy, work in largely academic settings and assume that everything that’s broken in society is just a design bug that an engineer can fix. Like, you can’t just yada-yada-yada over the equal distribution problem — ask anyone who works in, say, hunger relief. But that’s a rant for another Nightcap…)

At any rate: What should we make of the Nobel Prize’s elevation of these AI pioneers?

At first glance, it could seem like the Nobel committee has been gulping down Big Tech’s AI Kool-Aid.

But, as the Atlantic’s Matteo Wong noted, the Nobel committee’s framing of the awards was refreshingly pragmatic.

While the committee gestured toward generative AI, Wong noticed, no one mentioned ChatGPT or Gemini or any of the other consumer-facing AI tools that companies are peddling.

“The prize should not be taken as a prediction of a science-fictional utopia or dystopia to come so much as a recognition of all the ways that AI has already changed the world,” Wong wrote.

Similarly, in announcing the chemistry prize on Wednesday, committee members talked a lot about amino acid sequences and structural biochemistry. What you didn’t hear the panel of scientists talk about: a perfect, disease-free, no-work-all-play future made possible by AI.

They talked about AI the way I wish tech executives talked about AI — as a kind of boring, technical tool that runs in the background to help researchers figure stuff out.

And if you ask me, that’s a more compelling, if perhaps less lucrative, story than the one tech executives are inclined to pitch to investors.
