Thursday, October 10, 2024

Google DeepMind cofounder Demis Hassabis’ Nobel Prize win shows the promise of AI in science

Hello and welcome to Eye on AI. In this edition…Amazon taps AI for easier package delivery; Anthropic cuts batch processing costs in half; and advertisers temper their excitement about AI.

AI is making a splash at the Nobel Prizes this week. AI pioneers John Hopfield and Geoffrey Hinton won the 2024 prize for physics for their machine learning breakthroughs that led to today’s AI boom. Then yesterday, Demis Hassabis and John Jumper of Google DeepMind, along with David Baker, a professor of biochemistry at the University of Washington, were awarded the prize in chemistry for techniques to predict and design novel proteins, work that could transform how therapeutic drugs are made.

A 50-year dream

Heiner Linke, chair of the Nobel Committee for Chemistry, said in a press release that the researchers taking home the chemistry award “fulfill[ed] a 50-year-old dream.” That dream was really the vision of a previous Nobel laureate, chemist Christian Anfinsen, who back in 1973 postulated that it would be possible to predict the shape of a protein based solely on its DNA sequence. A corollary of this idea was that it should be possible to design proteins with specific functions by manipulating DNA, since proteins are the building blocks and engines of life, and it is mostly a protein’s shape that determines what it does.

Progress on achieving this dream began in 2003, when Baker used the 20 different amino acids found in proteins to design new proteins unlike any others. Then Hassabis and Jumper made a stunning breakthrough in 2020 with their machine learning model AlphaFold 2, which enabled them to predict the structure of virtually all of the 200 million proteins that researchers have identified. The model has since been used by more than two million people from 190 countries, according to the press release.

The award is a full-circle moment for Hassabis, who began his AI pursuits by teaching computers to master games like Go but always dreamed bigger. Even more so, it represents the best of what AI can offer humanity. 

From a distant vision to the top prize in science

More than a decade ago, Hassabis was already looking toward a future where AI models would make monumental scientific breakthroughs. In 2014, shortly after DeepMind’s sale to Google and while the lab was still largely focused on teaching machines to play games, Hassabis told MIT Technology Review about his vision for “AI scientists.”

“But Hassabis sounds more excited when he talks about going beyond just tweaking the algorithms behind today’s products,” the article reads, after noting how AI could be used to refine YouTube’s recommendations or improve the company’s search. “He dreams of creating ‘AI scientists’ that could do things like generate and test new hypotheses about disease in the lab.”

DeepMind began working on protein folding in 2016 and by 2018 was winning awards for the first version of AlphaFold. The company followed up with AlphaFold 2 two years later, and in July 2022, it announced it had predicted the structures of virtually all known proteins. Earlier this year, the research lab, now operating as Google DeepMind, unveiled AlphaFold 3, which it says can predict the interactions of proteins with DNA, RNA, and various other molecules, and offers significant accuracy improvements over the previous model.

Overall, it’s an amazing feat for Hassabis—as well as a clear showing of how rapidly AI is being developed and improving. Ten years ago, this was all just a vision. This week, the breakthrough is real, its impacts are being felt around the world, and it’s just been awarded the top prize in science. 

AI’s most positive impact

While the use of AI models in many industries is contentious, with some business executives doubting that today’s AI software can deliver a financial return, AI’s promise in scientific discovery is already coming to fruition, as breakthroughs like AlphaFold make clear. I’m often asked what positive impact AI can have on humanity, or what I think is the most exciting way AI is being used. My answer is always scientific research and medical breakthroughs.

This summer, an AI model developed by Cambridge scientists showed 82% accuracy in predicting the progression of Alzheimer’s disease, outperforming clinical tests. Several AI-discovered drugs have advanced into Phase I and Phase II trials, including, just last week, a cancer treatment from Recursion.

AI’s success in fields like drug discovery and medicine is in no way guaranteed or free from issues like bias, but it’s an obviously worthy pursuit with some early accomplishments worth celebrating. 

And with that, here’s more AI news. 

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

AI IN THE NEWS

Amazon taps AI to show drivers which packages to grab for delivery. Called Vision-Assisted Package Retrieval (VAPR), the new technology being installed in the company’s delivery trucks will highlight packages with a green or red light to indicate which are intended for delivery at the current stop, along with an audio cue to let drivers know they’ve selected the right package. Amazon hopes the tech will make the process easier and more efficient, eliminating the need for drivers to shuffle through their trucks. You can read more in TechCrunch.

OpenAI projects it could lose up to $14 billion in 2026 and won’t turn profitable until 2029. That’s according to a story in The Information, which cited OpenAI financial documents it said it had seen. The company also projects it will spend more than $200 billion by the end of the decade, with much of that going toward the expense of training new AI models, and its total losses could tally $44 billion between 2023 and 2028. The documents also show OpenAI projecting $100 billion in revenue by 2029, with ChatGPT continuing to account for the majority of sales. The company’s current cash burn, however, is depicted as lower than some earlier news accounts had suggested: it expended only about $340 million in the first half of 2024 and still had $1 billion in cash on hand prior to its most recent $6.6 billion fundraising round, which valued the company at $157 billion.

Anthropic cuts batch processing costs in half with a new API. With the Message Batches API, Anthropic says developers can submit up to 10,000 queries per batch at a cost 50% lower than standard API calls. Batches are processed within 24 hours, offering a way to save money on tasks that aren’t time sensitive. The new API is available for Claude 3.5 Sonnet, Claude 3 Opus, and Claude 3 Haiku, and it addresses one of the main problems associated with large language models: the high cost of inference. You can read more from VentureBeat.
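
For developers curious what this looks like in practice, here is a minimal sketch of submitting a batch with Anthropic’s Python SDK. It is illustrative only: the model ID, request count, and custom IDs below are assumptions, and some SDK versions expose the endpoint under client.beta.messages.batches rather than client.messages.batches.

# Minimal sketch of the Message Batches API via Anthropic's Python SDK.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

# Each request carries a custom_id (used to match results later) plus the
# same params you would pass to a standard Messages API call.
batch = client.messages.batches.create(
    requests=[
        {
            "custom_id": f"doc-{i}",  # hypothetical IDs for illustration
            "params": {
                "model": "claude-3-5-sonnet-20240620",  # any supported Claude 3 family model
                "max_tokens": 256,
                "messages": [
                    {"role": "user", "content": f"Summarize document #{i} in one sentence."}
                ],
            },
        }
        for i in range(100)  # up to 10,000 requests per batch
    ]
)

print(batch.id, batch.processing_status)
# The batch is processed asynchronously within 24 hours at roughly half the
# per-token price of standard calls; poll until processing_status is "ended",
# then fetch outputs with client.messages.batches.results(batch.id).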

Advertisers temper their expectations for AI as early efforts fall flat. After a lot of early hype, the industry is growing skeptical and starting to believe that many of the new AI tools aimed at advertisers don’t offer as big a leap forward as hoped. Business Insider reports the shift was evident at New York’s Advertising Week, where “industry insiders seemed to talk as much about the limitations as the promises of AI.” Still, major agencies are planning significant investments in AI.

AI implementation has been a big mess, or at the very least, an extraordinary challenge. That’s according to a feature I wrote this week for Mercury’s Meridian magazine, in which I dive deep into the many challenges businesses have faced as they integrate AI into their products and internal processes. To name a few: companies are struggling to wade through the hype, figure out which use cases AI is actually good for, navigate fast-moving regulation, protect their IT stacks from AI sprawl, deal with hallucinations, and confront a variety of intricate copyright, security, privacy, and compliance concerns. And that’s not even counting the technical challenges.

FORTUNE ON AI

Exclusive: Zoom’s future isn’t video, it’s AI for work, says CEO Eric Yuan–but can it challenge Microsoft and Google? —by Sharon Goldman

The U.S. wants to stop Google from monopolizing the nascent AI search market —by David Meyer

New Nobel Prize winner, AI godfather Geoffrey Hinton, says he’s proud his student fired OpenAI boss Sam Altman —by Christiaan Hetzner

Wimbledon will evict line judges from its tennis matches after 147 years—and turn to AI instead —by Prarthana Prakash

Whirlpool CIO says lessons learned from IoT hype cycle can apply to generative AI —by John Kell

AI CALENDAR

Oct. 22-23: TedAI, San Francisco

Oct. 28-30: Voice & AI, Arlington, Va.

Nov. 19-22: Microsoft Ignite, Chicago

Dec. 2-6: AWS re:Invent, Las Vegas

Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia

Dec. 9-10: Fortune Brainstorm AI, San Francisco (register here)

EYE ON AI NUMBERS

20,000 to 34,000

That’s how many people use the Rabbit R1 every day, according to an interview with CEO Jesse Lyu on the Decoder podcast. The $200 AI gadget functions as a digital assistant, performing actions like calling an Uber. During the episode, Lyu pushed back on previous reporting from Fast Company that said the company had only 5,000 daily active users, saying he told the publication that 5,000 people are using the R1 at any given moment, not per day. Fast Company corrected its article. The entire interview is an interesting, and at times heated, conversation about this new type of device, where AI is headed, and what happens when the services the R1 connects users to (Spotify, Uber, etc.) decide they don’t want the company playing middleman.