The tech industry has been racing to put generative artificial intelligence into the hands of consumers, but that’s only “a taste of its potential,” an AI executive says.
Sissie Hsiao, vice president and general manager of Gemini app and Speech at Google (GOOGL), told Quartz that she believes the company is “going to help in ways people haven’t even thought of yet” in the next year.
While “AI assistants” currently work alongside users and need repeated prompting, Hsiao said, consumers will start seeing them “evolve beyond simple conveniences and into true, personalized, advanced experiences that you rely on every day.”
For example, Hsiao said, people are using Google’s Gemini AI for more advanced tasks, such as practicing for job interviews with Gemini Live and debugging code with Gemini Advanced.
In November, Google launched an iPhone app for Gemini that included the new Gemini Live voice assistant feature, which can handle natural conversations with interruptions and topic changes. So far, Gemini Live offers 10 distinct voice options and supports 12 languages, including Spanish and Arabic. Google said it plans to roll out more languages.
In 2025, Hsiao said, the “next frontier” of AI lies in “agentic capabilities.” AI agents are software that can plan and carry out complex, multi-step tasks autonomously, rather than waiting on a user’s prompt at each step.
Gemini, specifically, “will be deeply personalized, remember what you’ve told it before, and at your direction — be able to act on your behalf across Google, third-party services, and the web,” Hsiao said.
Google recently launched a new feature in Gemini Advanced called Deep Research, which uses AI to explore complex topics and turn findings into easy-to-read reports for users. Hsiao called Deep Research “the first feature” in Gemini “that brings our vision of building more agentic capabilities into our products to life.”
The AI market in the next year will be “about continuing to build the complete ecosystem,” Hsiao said, likening it to the smartphone market.
“It’s not just about the hardware anymore, but the entire ecosystem of apps, services, and integrations that surround it,” Hsiao said. “Similarly with AI, how well we execute on building the most comprehensive and user-friendly ecosystem is imperative.”
Google is focused on making Gemini the “most helpful personal AI assistant” in 2025, Hsiao said, adding that the key is weaving AI into users’ everyday lives and making that integration feel seamless.
Google first rolled out Gemini Live earlier this year, a mobile experience that lets users hold free-flowing conversations with the chatbot.
“Being able to speak to Gemini when brainstorming new ideas or rehearsing for an important conversation has been a game changer,” Hsiao said. “Going forward, there’s going to be even more of a focus on features that make interacting with AI even more easy, accessible and utilitarian.”
And 2025 will see AI-focused tech companies continue developing multimodal AI, or models that can process types of data beyond text, such as speech, images, and video.
At Google’s annual I/O developer conference in May, for example, the company unveiled Project Astra, a peek at its vision for a future of multimodal AI assistants.
“Since 2016, we’ve said Google is an AI-first company, and that won’t change,” Hsiao said. “AI is a must-have, and as we see it being integrated into every aspect of a company’s operations, from product development and customer service to marketing and sales, it’s essential to embrace this technology in order to stay competitive.”