Summary
- The new Siri sets the bar for voice assistants, with impressive in-app actions requiring only voice commands, surpassing Google Gemini.
- Apple’s on-device AI processing prioritizes privacy and efficiency, while a partnership with OpenAI brings in advanced models like GPT-4o with user consent.
- Google must step up with in-app actions to keep pace with Siri, while the Rabbit R1 struggles to change user habits and lacks personalization.
The biggest announcement at this year’s WWDC (after that calculator app for the iPad, of course) was Apple Intelligence, the company’s take on AI. Apple announced a whole suite of AI features, but perhaps the best thing to come out of it was the set of Siri improvements. Siri is often called the worst voice assistant among its peers, but the generative AI makeover it has just received may propel it far ahead of Google Gemini while leaving dedicated AI hardware like the Rabbit R1 in the dust.
The new Siri is how voice assistants should be
I can’t believe I just said that
Frankly, a lot has been said about artificial intelligence and its potential over the last year or so, but little has been done to show its real-world application beyond a smarter chatbot. I expected Apple to figure out more practical use cases and get that messaging right on stage, and it largely delivered on those fronts, except for those creepy AI-generated cartoonish stickers of your contacts. In fact, Apple went a step further to show exactly where the real potential of on-device AI lies.
The next logical leap for smart assistants living on your phone is to do things on your behalf inside apps. If that sounds familiar, you aren’t wrong. The Rabbit R1 claims to do exactly that, albeit with a limited set of apps for now. I wanted Google to nail this with Gemini, since it already has everything ready to deploy; the fragmented pieces just need to be put together. But while we were expecting Apple to merely bring Siri up to speed with Google Assistant, it instead made Siri the benchmark for in-app actions that require nothing more than a voice command spoken in natural language.
In a demo video, Apple showed how the new Siri understands the context of whatever is on your phone, thanks to its on-screen awareness, to help you do things like summarizing long articles. That is not a big deal since Gemini can already do that to an extent, but what actually caught my attention was what it did within the Photos app.
You’ll be able to ask Siri to edit a photo, and it will do that without you needing to fiddle with the editing tools. You don’t have to specifically tell it to set the contrast or tweak the highlights, settings many users may not be familiar with. All you need to do is ask it to make the photo pop, as shown in the demo, and it will understand what you want and get it done. While some Gemini features are coming to Google Photos soon, those are mainly discovery and curation tools, which makes Siri’s cross-app functions, like attaching a photo to an email draft with a quick voice command, look far more advanced.
Siri could eat Gemini and Rabbit for lunch
Rabbit wants to make apps a thing of the past. What you get on the R1 is a rather simplistic UI, while all the processing happens on Rabbit’s servers, where it interacts with apps on your behalf. That is close to the intended future of AI, though Rabbit’s approach is bound to fail for two main reasons.
For one, it tries to recondition how we use our smartphones today, which is easier said than done, as people rarely take well to changing their habits. There is a workable middle ground where you use your apps as usual while also having the option to delegate any task to AI when you don’t want to do it manually. Secondly, the Rabbit R1 exists in an ecosystem of its own, whereas AI thrives when it has a lot of information about you. That means Rabbit simply can’t personalize its responses the way your current smartphone can, which gives a big leg up to the existing ecosystems, namely Apple and Google.
Both tech giants are primed to make in-app interactions via AI a reality. It just so happens that Apple got the jump on this handy feature with its new, supercharged Siri. In last month’s I/O keynote, Google demoed advances made to Gemini, including deeper integrations, Project Astra for real-time visual searches, and those dreaded AI Overviews in Search. While Google did mention AI doing things on your behalf, it had nothing to show on stage. And just a month later, Apple is forging well ahead of Google in an area where the latter has always had a stronghold.
A lot to like
And be concerned about, too
As you’d expect, Apple leaned heavily into privacy. Most of the AI processing happens on the device (hence the limited device support), while the queries sent to Apple’s cloud are claimed to be as private as those handled on your phone. Apple is using its own generative and language models specifically trained for these tasks, making them far more efficient with the limited resources of a smartphone. On top of that, Apple’s partnership with OpenAI gives Siri access to the latest GPT-4o model, with Siri asking for your explicit permission before forwarding anything to OpenAI.
However, these flashy, pre-recorded demos only show one side of the story. It’s hard to tell how well these features will work in the real world, and it doesn’t help that many of Siri’s AI tricks will only roll out over the next year. Besides that, the Siri demos only showed in-app actions with first-party apps. While Apple did mention the App Intents API as the way to let third-party developers in, it remains to be seen whether the actual implementation gets off to a rocky start.
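For a rough sense of what that developer hook involves, here is a minimal sketch of an App Intents definition in Swift. The intent name, its parameter, and the enhancement call are hypothetical, meant only to illustrate how a third-party app might expose an action for the assistant to invoke on a user’s behalf.

```swift
import AppIntents

// Hypothetical intent a third-party photo app might expose so Siri can
// trigger an in-app edit from a plain-language request.
struct EnhancePhotoIntent: AppIntent {
    static var title: LocalizedStringResource = "Enhance Photo"
    static var description = IntentDescription("Applies an automatic enhancement to a photo.")

    // The photo the user is referring to, e.g. the one currently on screen.
    @Parameter(title: "Photo")
    var photo: IntentFile

    func perform() async throws -> some IntentResult {
        // PhotoEnhancer is a placeholder for the app's own editing logic;
        // the system handles the natural-language side and calls perform().
        // let enhanced = try await PhotoEnhancer.makeItPop(photo)
        return .result()
    }
}
```

The heavy lifting of parsing “make this photo pop” would still sit with Apple’s models; the app just declares what it can do and handles the call when it arrives.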
This entire Siri saga should nevertheless give Google some food for thought. It was flattering that Apple ripped off Android’s Material You icon themes, so it’s only fair for Google to emulate Siri’s in-app actions. Otherwise, Gemini will once again be left behind the curve, something I’m sure the Google executives who have been scratching their heads since WWDC24 wouldn’t want. Meanwhile, Rabbit, as harsh as it may sound, should start counting its days unless it hopes to be saved by one of the big cats in the jungle, which, as you can guess, wouldn’t be a fairy tale ending either.