Friday, November 22, 2024

Alphabet, Google CEO Pichai presents vision for AI future

By Christopher Lessler and Nina McCambridge

Sundar Pichai, the chief executive officer of Alphabet Inc. and Google, discussed Google’s use of artificial intelligence on Wednesday, Sept. 18. He also addressed the importance of AI safety and sustainability. Courtesy of Carnegie Mellon

On Wednesday, Sept. 18, Sundar Pichai, CEO of Alphabet and Google, spoke to 1,500 attendees in Carnegie Mellon’s Highmark Center, delivering the first lecture of the 2024–25 President’s Lecture Series. His talk, titled “The AI Platform Shift and the Opportunity Ahead,” focused on Google’s AI advances and his vision for a future driven by AI.

Pichai first gave a brief lecture, after which President Jahanian joined him on stage for a fireside chat that included some questions he had generated with AI chatbots, including Copilot. After the event, Pichai sat down with The Tartan for an exclusive interview.

Before the talk, Pichai toured Carnegie Mellon’s robotics laboratories, meeting with students, faculty, and staff along the way. Earlier in the day, at Google’s Pittsburgh office, Pichai signed $25 million in grants to support generative AI in children’s education.

The talk, as the title suggests, focused mainly on the future potential of artificial intelligence, including Google’s role in ensuring AI safety and accessibility. Pichai predicted that the advancement of AI models will entail a “fundamental rewiring of technology and an incredible acceleration of human ingenuity.”

This is because AI will “help humans unlock more of their creative potential, when you consider how much of our time and effort go into dealing with mundane things” that AI tools could take care of for us. Pichai described how “technology begins to feel like a natural extension, augmenting human capability, bridging gaps in expertise and experience and breaking down the barriers of language and accessibility.”

Pichai noted that these technologies are cheaper and easier than ever to use. “What used to cost $4 per million tokens now costs 13 cents, and this trend is going to continue.”

This sustained period of innovation in AI, Pichai thinks, began “around 2010 — that’s when we brought in Geoff Hinton’s team.” Geoffrey Hinton, a famous AI researcher, quit Google in 2023 over AI safety concerns. “Geoff wanted to make sure that as the technology is getting developed, there is enough focus on the safety implications of it, and he wanted to be able to speak about it from a neutral vantage point,” said Pichai in his interview with The Tartan.

Pichai described some AI innovations specific to Google. With regard to Gemini, Google’s large language model, Pichai said that Google “lead[s] the industry in progress with long context. It has 2 million tokens of long context, more than any other model, and it’s the first model which is built natively to be multimodal.”

Pichai said consumers have been especially enthusiastic about making visual queries, and that Google has “early data showing that’s how people want to interact.” Alphabet also owns Waymo, whose driverless taxis are ubiquitous in San Francisco, Phoenix, and Los Angeles.

He mentioned in the fireside chat that Waymo riders tend to be impressed by the self-driving taxis for the first few minutes before adapting to them and simply using their phones, as in any typical taxi or rideshare, showing just how quickly consumers acclimate to new technology.

Pichai said the same phenomenon is happening with AI. He went over a number of successful DeepMind projects, such as AlphaFold and AlphaProof. Pichai played a demo video of Google DeepMind’s Project Astra, which has the tagline “A universal AI agent that is helpful in everyday life.” Google says the demo video has just two continuous takes, one with the prototype on a Google Pixel phone and the other with “a prototype glasses device.”

One concern with generative AI is that it will pollute the information environment of the internet, making it harder to tell real content from synthetic content. To address this, Google is working on a project called SynthID, which Pichai described as being in the “active research stage.” Pichai said that Google is “working hard to make sure that if you see an image, you want to be able to ask Google, ‘When was this image first created?’”

During the fireside chat, President Jahanian asked Pichai about energy sustainability, citing AI’s notoriously heavy use of electricity and computing power. Pichai responded by reiterating a commitment to sustainability, mentioning that Google “invested very early in wind and solar because we saw the opportunity there. And today, many of our biggest data centers operate on a 90 percent carbon free basis.” Google has been using geothermal energy to provide clean electricity to its data centers.

Pichai said figuring out energy sustainability with AI is going to be challenging in the short term, which he said was unfortunate. But he argued that in the long run, we won’t have to pick between AI and sustainability. Pichai said that “the amount of money going into SMRs, small modular reactors for nuclear energy” makes him “optimistic in the medium- to long-term.”

Plus, he said, AI may actually help solve its own energy sustainability problem. Pichai also described Google AI projects that could help predict some effects of climate change, such as their weather forecasting and wildfire boundary prediction projects.

During his interview with The Tartan, Pichai expanded on his views on AI safety. He said that “making a deep commitment to AI safety as a foundational value as we’re approaching building AI is something we plan to do.”

Pichai said that “in certain cases, there have been areas where we have said, ‘This area needs more safety testing and so on before we can deploy it out in the world.’” Google uses a number of methods to test the safety of its models. Pichai claimed, “Google is best in class when it comes to safety and security practices, in terms of how we approach software development in general.”

Pichai was, in general, quite optimistic about the future of artificial intelligence. “I think it’s a technology that will have a lot of extraordinarily positive implications — but I do think we will have hiccups, you know, in a big way, as we go through it.” Technological progress, he said, “is never smooth. How do we make sure we navigate that well over time?”

As for how the labor force will have to adapt, Pichai pointed to the education grants as one of Google’s ways of managing this technological transition.

With the disruptive technologies that led to the agricultural and industrial revolutions, Pichai said, “there was less anticipation and getting ahead of the issues. I think with AI, we have an opportunity to do that better, so that’s why I think these conversations are important. I’m glad we are having them early.”

AI can be a way to unleash human potential, Pichai reiterated, “an empowering tool for people to create.” However, “anytime when there’s a technology shift, there may be some catch-up to do in terms of getting quality again, like it happened with the early days of the web. That’s what Google Search did. It tried to sift high quality content from not so high quality content on the web.”

He said that “society will place a premium on human voices, you know, and authentic content. And so our job is to make sure those things shine through our products.” SynthID is one way that Google is researching this issue.

Pichai had some advice for Carnegie Mellon students. The best way to accomplish your goals, he told The Tartan, is by “trying hard to put yourself in situations which effectively is a bit uncomfortable because you have to learn and grow, and you actually quite don’t know what you’re doing.”
