Monday, December 23, 2024

Hey Google, why isn’t Gemini available on Android Auto yet?


Whether you agree with it or not, Google sees AI as the future of the company. Over the last 18 months, we’ve watched as Google has squirmed under pressure from OpenAI and Microsoft, launching (and rebranding) Bard, overhauling search to include those controversial AI Overviews, and cementing its various Gemini LLMs as the driving force behind everything from Photos to dozens of Workspace apps. And yet, despite this seemingly singular focus on delivering an AI-powered experience — whatever that even means — to nearly every app, it still hasn’t come to the one place I’d like to see it the most: my car.



As it stands, Android Auto (along with Android Automotive, Google’s dedicated in-car OS) is sorely lacking when it comes to AI abilities. This year, AI-powered message summaries rolled out to drivers everywhere, but much like Google’s summaries in Recorder, they’re pretty limited in practice, taking extra-long messages and condensing them down to their core meaning at the risk of losing specific details.


It’s a good-enough starting place, but it’s also not how I think AI-generated responses could really benefit drivers. After all, you aren’t reading your messages; they’re being read to you, making your inbox about as dangerous as listening to a podcast during your commute. In fact, it shows a real lack of innovation on Google’s part that the car hasn’t become a centerpiece for everything Gemini is capable of doing. Let me explain.


Android Auto is great, but Assistant can’t keep up

I’m not sure Assistant makes me a safer driver

I love driving with Android Auto. Google has built itself an excellent, car-friendly interface that feels simple enough to fit shorter drives and robust enough to be an essential road trip companion. But Android Auto isn’t perfect, and one of its biggest problems comes from an obvious overreliance on Google Assistant.


While Assistant is adequate for basic tasks like sending a text message, it’s pretty miserable at longer or more complex jobs. This isn’t just a car problem; try asking a Nest speaker or smart display for specific information, and you’ll likely end up with a list of links sent to your Android phone. But when you’re driving, those interactions need to be perfect. Links to a bunch of web pages don’t do anything for you behind the wheel, while misunderstood queries can actually lead to more distractions, as you either repeat yourself out of frustration or attempt to peck around Auto’s interface.

All of this is to say that your car is begging for Gemini’s level of language processing. The biggest problem facing Assistant at the moment is its inability to understand wordy or confusing requests, the kind you might spit out while your brain is focused on merging lanes or not missing an upcoming exit. Gemini (or, at least, an idealized version of Gemini) could provide a far more accurate, and therefore safer, experience on the road.



Assistant struggles with wordy, natural requests

And I think Gemini might be the answer


Just to drive this point home (no pun intended), here are some examples of how Assistant fails in its current state. Outside of quickly replying to messages, the thing I use Assistant for most in my car is finding music on Spotify, whether that’s songs, albums, or playlists. Spotify’s UI on Android Auto is, frankly, abysmal: it shoves podcasts and audiobooks down your throat in a way that makes it virtually impossible to find most of your collection without staring at the display. It’s arguably less safe than using your phone, leaving Assistant as my go-to way of navigating my library.


A song called "my liked songs" playing on Android Auto.

Except Assistant isn’t very good at finding what I’m looking for either. On a recent drive, it took me four tries to pull up my Daylist; Assistant kept hearing “playlist” instead and playing my liked songs. In another instance, I tried to pull up my liked songs using my voice. I first requested “my liked songs playlist,” only for Assistant and Spotify to deliver a spam song of the same name. A second attempt, this time limited to “my liked songs,” worked fine.

Although Gemini doesn’t currently support Spotify, it does support YouTube Music, and we know a Spotify-specific extension is in the works. In theory, Gemini in my car would be able to take a more natural request to shuffle my collection of liked songs on Spotify and deliver what I’m looking for every time, without fail. That’s not just a more convenient experience — it’s a safer one.


Spotify playing on Android Auto while Assistant hears me ask for Mexican restaurants.

Okay, another example. You can try to pull up destinations while you’re driving, but unless you really know what you’re looking for (or it’s something as generic as nearby gas stations or pharmacies), Assistant is going to struggle to recommend nearby locations. While sitting in a parking lot, I asked Assistant to give me directions to the best Mexican restaurant within 15 miles of my location. Assistant responded by giving me directions to Mighty Taco, a Buffalo-specific fast food chain that was literally right in front of me.

A photo taken from a parking lot of a fast food restaurant in the distance.


Like, seriously, it was right in front of me.

Outside of the car, Gemini responds to the same prompt by giving me several different recommendations based on online customer reviews. And while I might not necessarily agree with every single pick, it’s a far more accurate response than what Assistant gave me, understanding my prompt and delivering a list of suggestions worth trying. In the car, Google could make this even easier by simply navigating me to the highest-rated restaurant in the area rather than giving me several options and the context surrounding them.


Android Auto might be the place Gemini shines brightest

We just need Google to actually bring it to the platform

A vehicle's infotainment display


Over the past year, we’ve watched as companies like Google and Microsoft have struggled, with very limited success, to find areas where AI can actually make a meaningful difference in the lives of consumers. I, like many of you, sat through Google’s I/O keynote a month ago and, aside from a couple of very specific examples, struggled to feel particularly excited about anything presented on stage. It was, more than ever, a show made not for users or developers, but for shareholders.

And yet, I think cars are exactly the sort of space where something like Gemini, in a pared-down, less chatty version, could shine. It’s an obvious upgrade over Assistant when it comes to understanding requests, offering better answers and, in theory, delivering a safer experience. When I’m driving, the last thing I want is to spend time thinking about how to word specific requests, and the language processing Gemini is capable of does away with exactly that. I’m sure Gemini for cars is in the works, presumably timed for whenever Google’s chatbot actually replaces legacy Assistant. I just hope it shows up sooner rather than later.


