Speaker 1: One of the biggest announcements at Google I/O this year was Project Astra. This is Google’s vision of a multimodal assistant that can do so many different things, and I’m here on the ground at Google I/O to give you a demo, walk through it myself for the very first time, and see everything that it can do. Let’s try it. Headset on, so it can hear me properly, because it’s very loud in here. I’ve got a couple of different modes up here: storyteller, Pictionary, alliteration, and freeform. I’m going to experiment with Gemini’s storytelling [00:00:30] abilities, as it says. And I’ve got a whole range of different objects that I can play with, all on the shelf here, as well as a couple of different photos. And as you can see, as I’m talking, it’s actually transcribing everything that I say. Alright, this is my dog. My dog is called Monty. And this is a cat. This cat is called Harry.
Speaker 2: A second portal, this time revealing the [00:01:00] face of a dog gazing out, forever trapped in a summer, tilting his head with curiosity at the cat basking eternally in the warmth. Harry narrows his eyes, faced with this new challenger in his territory.
Speaker 1: So it’s just made up a whole story about this dog and this cat. [00:01:30] Okay, so I’m going to remove Monty and say: Monty’s gone away. Now, where did Monty go?
Speaker 2: The field in the photograph stands empty, Monty having wandered off to explore beyond the frame, off into the distance on a summer breeze.
Speaker 1: Sounds like a pretty good life. Okay, that’s the storyteller demo. Alright, let’s try Pictionary now. This is another demo [00:02:00] where you get to see my terrible drawing skills, so let’s draw. Oh, what did you say, Gemini?
Speaker 2: What did I say? I said I’m sure your drawing skills aren’t that bad. Are you ready?
Speaker 1: I’m ready. Okay. So as you could hear there, I could interrupt Gemini and it would pause, then respond and pick back up with me. That was pretty cool. It’s almost like I’m talking with a real person. Okay, let’s draw. Oh my goodness, it’s put me on the spot here.
Speaker 2: [00:02:30] It’s neat to see the system respond that way, isn’t it?
Speaker 1: It is pretty cool.
Speaker 2: Is it a palm tree?
Speaker 1: How did you guess? That’s actually pretty good that it guessed it was a palm tree. I mean, I’m a terrible drawer, and that’s exactly what I was going for, even though the trunk is red. Okay, that’s pretty nice. I like that. Let’s quickly try one more. Let’s try freeform and just go with the flow. Okay. What is [00:03:00] this?
Speaker 2: That looks like a lump of brown bread. Is it perhaps a baguette?
Speaker 1: I think it is a baguette. If I was going to make a recipe with that and this and this and maybe this, what would you suggest?
Speaker 2: Like apples?
Speaker 1: Something sweet.
Speaker 2: With those ingredients, [00:03:30] perhaps you could make a bread pudding. Would you like some suggestions?
Speaker 1: Yes, just quickly.
Speaker 2: You can make a classic bread pudding and add apples for a unique flavor, or toss it with butter and sugar and bake it for something simple and delicious, too.
Speaker 1: Okay, thanks, Gemini. So that’s your first look at Project Astra. It felt really, really natural to be able to converse with it like [00:04:00] this and do all of these different tasks. I see a lot of promise in where this is going. It was really natural, even a little wild, and I feel like it’s going to blow my mind even more the more I think about it, beyond just this quick demo for you here. Yeah, I’m excited to see where this goes. Thanks for watching CNET, and make sure to check out our full Google I/O coverage right now.