
Lila Ibrahim, COO of Google DeepMind: ‘Technology only makes sense if it improves people’s lives’


The corridors of the Google DeepMind offices, in the central London neighborhood of King’s Cross, are almost deserted. The company — a world benchmark in the development of artificial intelligence (AI) — brings together some of the best scientists in the discipline, who work diligently behind closed doors in modern rooms bathed in natural light, into which visitors cannot even peek. What is brewing on their computer screens is the present and future of AI, the technology destined to change everything. The secrecy is such that visitors are even accompanied to the bathroom.

Lila Ibrahim, 54, walks slowly but decisively, displaying a welcome Californian friendliness amid so much British formality. With her corporate card she can open any of the dozens of security doors planted as firewalls throughout the building, including those with restricted access. Ibrahim is the Chief Operating Officer of DeepMind, one of the key positions at the start-up, which was founded in 2010 by three young scientists and acquired by Google in 2014. Its new owners decided to let the London company remain as a kind of advanced AI laboratory dedicated to multidisciplinary basic research. It wasn’t a bad idea. In 2016, DeepMind’s AlphaGo program beat one of the best Go masters at the world’s most complex game, one where intuition — an attribute theoretically off limits to AI — plays a key role. The company has since predicted the structure of 200 million proteins, paving the way for a new era of disease treatment. DeepMind has also discovered hundreds of thousands of new materials and is working on a promising nuclear fusion project.

Raised in Indiana by Lebanese parents, Ibrahim is used to being “the weirdo.” She was one of only three women in her class when she graduated with a degree in electrical and electronic engineering from Purdue University. On campus she was known as the girl with the purple box. “I used to carry a little box with my transistors and resistors everywhere,” she recalls, laughing. The first stage of her career was at Intel. After working on the design of Pentium processors, she opted to work on two then-emerging technologies: USB and DVD. “Back then, no one could imagine being able to watch a movie on the computer. Today, my daughters don’t know what a DVD is.” She has taken her 14-year-old twin daughters across the world: they have visited 40 of the 106 countries that Ibrahim has been to. “I learned from my parents that you have to work hard, but also play hard.”

Ibrahim later moved to a large venture capital firm and, in 2018, was hired by Google DeepMind. She misses the California sun, rarely present in London, but is dazzled by the diversity and historical heritage of the British capital, which, in her opinion, translates into a warmer vision of AI than the one in Silicon Valley. “We have the possibility of developing AI responsibly to change the world.” She also spends part of her free time doing just that: she set up a computer laboratory in the Lebanese orphanage where her father, who died last year, grew up, and it has since been followed by three other centers. “Technology only makes sense if it improves people’s lives,” she says.

Q. How has AI become the most influential technology of today?

A. I think several things are happening at once. The first is that AI, from a technological perspective, has become more advanced. And we now have enough computing power to exploit it, so we can start to imagine how to use it to solve problems. We are facing major global challenges, and we are having difficulty finding solutions. Part of the reason I’m here has to do with Google DeepMind’s mission: how we can use AI to help advance solutions and advance science itself.

Q. Should we fear AI? What are the threats posed by its widespread use?

A. AI has the potential to be the most transformative technology of our time. And, therefore, we must take exceptional care with it. We think about risks all along the way. In the short term, it can spread biases, misconceptions and misunderstandings. And that is very real, since it could perpetuate stereotypes that already exist in the world. In the long term, we are talking about who is in control of this technology, how we can understand what is happening, and how to make sure it doesn’t become so powerful that humans can’t control it. We are aware that our duty to humanity is to ensure that we carefully develop and shepherd this technology.

Q. How exactly do you approach that on a day-to-day basis?

A. We have worked hard to develop a culture of responsibility. The best thing about DeepMind is that, since its founding in 2010, we have taken a truly responsible approach. You have to ask yourself whether you have the right governance and processes, but also whether you have the right talent and control points. We approach the problem in terms of a culture of responsible leadership, and we analyze it based on three elements. First, whether we have the right chain of responsibility, a good governance system. Second, whether our research is responsible and safe. And third, how we think about our impact as we launch technology into the world: What are the risks and opportunities? What do we do to mitigate the possible negative effects?

Q. What do the control points you mention consist of?

A. In the research phase, we evaluate several aspects. How do the models perform compared to what was predicted? Are we putting together pilot teams to really start testing the model and find its breaking points? We carry out this process internally, but also with the support of external groups that can help us run these tests with a perspective uncontaminated by our own approach.

After the research phase, the second area we focus on is technical: what we incorporate into the models to ensure the right controls are taken into account. Then we also run sociotechnical tests. And the last thing is to evaluate the responsible impact of the technology. Even though we have made so much progress, AI is still in a very early phase of its development, as if we were on the first rung of a very long staircase. And it’s important that, when we test, we adopt a mindset of continuous improvement. The more people access the technology, the more it will be tested. We have to be able to respond very quickly when we find that things are not going as we expected.

Lila Ibrahim at the Google DeepMind offices in the London neighborhood of King’s Cross. Photo: Manuel Vázquez

Q. Give me a concrete example.

A. When we launched AlphaFold [the tool that has predicted the structure of 200 million proteins], we spoke to more than 50 experts in the field because we wanted to make sure we understood what the potential risks could be as well as the opportunities. Although we have bioethicists, we spoke to others, including Nobel Prize winners, who encouraged us to launch it. We released the first version in association with the European Bioinformatics Institute.

Q. Google had been working on its own large language model for years, but it remained in the experimental phase because it was thought to need further testing. Suddenly, OpenAI comes along, launches ChatGPT, and just three months later, Google launches Bard. Were all Google’s doubts resolved in such a short time?

A. Well, from my point of view, a lot of that had to do with the acceptance of the tool and market demand. We thought the technology needed more innovation, to be more grounded and more factual. We were surprised that the market was really ready for this technology. So when we launched it, we asked ourselves: how do we make sure we are clear about what this is and what alternatives can be offered? Gemini [the model that replaced Bard] is still an experimental technology. It’s about upholding our own principles and values while providing access to technology that people want. Indeed, we had an extensive white paper on large language models before ChatGPT was launched. The first one we did at Google DeepMind dates from 2021, but we didn’t publish it until we finished the sociotechnical paper.

Q. To what extent have these checks and balances changed with the emergence of generative AI? Is this variant of AI more untamable than the others?

A. Technology is developing very quickly. When we identify something unexpected happening, we take quick action and try to understand the root cause. And I think that’s what happened recently with the Gemini app situation [referring to an attempt to correct its biases that caused it to generate images of Black Nazis]. We are learning as an industry. The Transformer architecture, the cornerstone of language models, was developed at Google in 2017. We’ve had this technology for years. For me, coming from outside the AI world, when I started interacting with language models, it was incredible. But I couldn’t tell anyone, because they weren’t available to the outside world. And we were not sure about launching this technology because it still hallucinated [made things up] and gave wrong answers. I have been surprised to see that society is almost willing to tolerate failures in the technology in order to keep testing it. And we really need the world to help us test it.

Q. A strange phenomenon is taking place around AI. On the one hand, companies put mechanisms in place to make users feel safe with these tools. On the other hand, executives at some of these companies, including yours, sign manifestos such as the Statement on AI Risk, which proclaims that AI is potentially as harmful as pandemics or nuclear war.

A. I signed it, yes.

Q. Is it possible to say that there is nothing to fear and at the same time warn that this technology can annihilate us?

A. Well, I can talk about my own experience in this regard. Let me tell you how I ended up here. When I interviewed for this position, I had no background in AI or machine learning. I had spent 30 years in technology. I lived in Silicon Valley, but I had traveled much of the world, and I thought: why London, why AI? A mentor of mine encouraged me to talk to Demis [Hassabis, co-founder and CEO of Google DeepMind]. The more I talked to him, the more excited I got, because I thought: if we could get it right, AI could change a lot of things. So I was very excited and feeling optimistic. The more I learned, the more I liked it, but the more concerned I also became. And I thought: if I could bring my 30 years of experience bringing computers to new communities, or the internet to cities and countries that hadn’t had access before, to this key moment for AI, maybe I could have a real impact on all of this. After the interview, I looked at my daughters and thought: will I be able to let them sleep peacefully every night? It was a very serious decision for me. In the end, I decided to take the job because I feel I have a moral obligation to try to make AI improve our lives.

Q. How do you approach the existential risks included in the declaration?

A. When I signed it, I felt there were a lot of things that could go right and a lot that could go wrong. I hope we never get anywhere near the existential risk, but to feel I’m doing my job as a leader I had to sign the document, because it puts in writing that we have to take the risks seriously and have an open conversation about them. If we do that, we will avoid catastrophe. We have to handle this transformative and extraordinary technology with the care it requires. That includes international collaboration and regulation.

Q. Which project would you highlight from the ones that Google DeepMind has underway right now?

A. I think much of the work in the field of science is particularly exciting. Thanks to AlphaFold, we are seeing the advances that come with understanding proteins and their interactions with DNA, RNA and ligands. [This is important] for understanding diseases, but also for the health of crops and for enzymes that break down plastics and industrial waste. Understanding protein structure has unlocked a new way of thinking for scientists tackling really big challenges. I would also highlight our materials science discovery last year: we went from around 40,000 known materials to perhaps hundreds of thousands. That could mean, for example, new and better technology for electric vehicle batteries. We have also provided models capable of making very accurate weather forecasts 10 days ahead, which will help us navigate extreme weather events.

Q. When Google decided to integrate DeepMind into its AI division, many people thought that basic research would be replaced by the development of tools for the general public, like Astra. Has that been the case?

A. We have always worked on products, although this was not necessarily known to the public. We have increased the battery life of Android phones. We collaborate with our colleagues at Google to optimize the energy consumption of data centers. At the same time as we were taking a much more active approach to generative AI with Gemini, we launched AlphaMissense [a tool that predicts the effect of protein mutations]. We believe there is still a lot of AI research to be done to improve the models. My job is also to think about how we organize our teams so they can hire fantastic talent. We give them the space to thrive in their areas of expertise. And then that research is translated into products and services.

Q. You mentioned talent: I haven’t seen too many women in the hallways.

A. We’ve worked very hard to create a more diverse organization. We have employee resource groups around everything from moms and dads to women, LGBTQ+ people and different ethnicities. We try to foster the collaborative nature of the team. Could we do better? We certainly could. One of my main efforts right now is to keep setting the bar very high in recruiting and in developing talent internally. We also need to make sure we connect with communities that are underrepresented externally, because it’s important to make their voices heard in the work that we do, especially in terms of our aspirations around AI.

Q. How do you address the growing environmental footprint of AI, which requires a lot of energy, water and minerals?

A. It’s something we think about a lot. We are creating smaller, more efficient models that require less processing, like Gemini 1.5 Flash. We have also been able to reduce the energy used to cool Google’s data centers by approximately 40%, simply by using AI as an optimization tool. We will keep trying to reduce our footprint.

