Saturday, November 23, 2024

Google Demonstrates How to Build and Deploy GenAI Agents

Vertex AI is Google Cloud’s Generative AI (GenAI) platform.

The platform helps businesses build bot-driven experiences with “enterprise-ready” GenAI.

To do so, Vertex AI has a three-layer stack.

At the bottom is a Model Garden, where enterprises can plug in their chosen LLM.

Above that sits a Model Builder, where businesses can tune, monitor, and optimize the model.

Lastly, at the top is an Agent Builder, which enables developers to build custom AI Agents – leveraging extensions, connectors, document processors, etc.

That’s a high-level overview; more insight is available in the screengrab below.
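
To make the bottom of that stack concrete, here is a minimal sketch of calling a Model Garden model through the Vertex AI Python SDK; the project ID, region, and model name below are illustrative assumptions, not values from Google’s demo.

```python
# Minimal sketch: calling a Model Garden model via the Vertex AI Python SDK.
# Assumes the google-cloud-aiplatform package and a GCP project with
# Vertex AI enabled; project, region, and model name are illustrative.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")

# Pick an LLM from the Model Garden (here, a Gemini model).
model = GenerativeModel("gemini-1.5-pro")

response = model.generate_content("Summarize our returns policy in one line.")
print(response.text)
```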

Most notably, by leveraging the Agent layer of the Vertex AI platform, businesses can orchestrate conversations and transactions using natural language prompts.

That’s the no-code UI option, which offers an out-of-the-box experience that lets non-developers deploy GenAI Agents much more quickly than traditional methods allow.

The other option is a code-first implementation, in which developers customize pre-built apps and components with managed orchestration tools like LangChain or LlamaIndex.
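
As a rough illustration of that code-first path, here is a minimal sketch of a tool-calling agent built with LangChain on a Vertex AI model, assuming the langchain-google-vertexai package; the tool logic and query are hypothetical stand-ins.

```python
# Minimal sketch of a code-first agent using LangChain with a Vertex AI
# model. Assumes the langchain-google-vertexai package; the tool logic
# and query are hypothetical stand-ins.
from langchain_core.tools import tool
from langchain_google_vertexai import ChatVertexAI

@tool
def check_order_status(order_id: str) -> str:
    """Look up the status of a customer order (stubbed for illustration)."""
    return f"Order {order_id} is out for delivery."

llm = ChatVertexAI(model_name="gemini-1.5-pro")

# Bind the tool so the model can decide when to call it.
llm_with_tools = llm.bind_tools([check_order_status])

reply = llm_with_tools.invoke("Where is my order 12345?")
print(reply.tool_calls)  # Expect a request to call check_order_status.
```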

Either way, as Kiran Bellare, Head of Products at Google Cloud, stated:

Agents preserve conversation context, connect to various tools and applications, summarize and ground responses, route between different agents, and support complex reasoning.

Bellare showcased how some of these features work while presenting a demo of how to build and deploy GenAI Agents for contact centers and beyond – as shared below.

In doing so, he shared the four cornerstone use cases for GenAI Agents (illustrated in the routing sketch below):

  1. Customer Agents: These support customers in troubleshooting, ordering products, managing orders, etc.
  2. Employee Agents: These assist with various functional roles like HR, sales, accounts payable, and supply chain.
  3. Knowledge Agents: These access specialized information in document repositories.
  4. Voice Agents: These can support voice self-service in contact centers and more specialized use cases, such as food ordering in drive-throughs and in automobiles.
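
To picture how a deployment might dispatch between agent types like these, here is a hypothetical routing sketch in plain Python; the keyword rules and agent stubs are illustrative only, not Google’s implementation.

```python
# Hypothetical sketch of routing a query to one of the agent types above.
# The keyword rules and agent stubs are illustrative only.
def customer_agent(query: str) -> str:
    return f"[customer agent] handling: {query}"

def employee_agent(query: str) -> str:
    return f"[employee agent] handling: {query}"

def knowledge_agent(query: str) -> str:
    return f"[knowledge agent] handling: {query}"

def route(query: str) -> str:
    """Dispatch a query to the most relevant specialized agent."""
    q = query.lower()
    if any(k in q for k in ("order", "refund", "troubleshoot")):
        return customer_agent(query)
    if any(k in q for k in ("payroll", "leave", "expenses")):
        return employee_agent(query)
    return knowledge_agent(query)  # Fall back to document lookup.

print(route("Where is my order 12345?"))
```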

Google has offered AI Agents to support these use cases since 2019, starting with Dialogflow, which provided a solution based on intents, webhooks, and flows.
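
For context on that earlier approach, here is a minimal sketch of a Dialogflow-style fulfillment webhook, assuming Flask; the intent name is hypothetical.

```python
# Minimal sketch of a Dialogflow (ES-style) fulfillment webhook, assuming
# Flask. The intent name "order.status" is hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json()
    # Dialogflow sends the matched intent in queryResult.intent.displayName.
    intent = req["queryResult"]["intent"]["displayName"]
    if intent == "order.status":
        return jsonify({"fulfillmentText": "Your order is on its way."})
    return jsonify({"fulfillmentText": "Sorry, I can't help with that yet."})

if __name__ == "__main__":
    app.run(port=8080)
```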

Five years later, the tech giant’s Agent console has evolved significantly with the addition of GenAI, hybrid agents, and more.

Yet, Google has teased that there is still lots more to come.

The Roadmap for Google’s GenAI Agents

In 2023, Google added a no-code console, multilingual support, integration with its latest LLMs, including Gemini, and various tool extensions to its Vertex AI platform.

Now, the tech giant has significant release plans before the end of the year. Indeed, Bellare stated:

Looking ahead, we’re focusing on supporting custom models, additional voice features, and multimodal capabilities where images can be uploaded and used in conversations.

Those multimodal capabilities are meaningful, as Google customers may soon orchestrate and automate experiences that include sharing and verifying images.
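
A minimal sketch of what an image-grounded exchange could look like via the Vertex AI SDK with a Gemini model follows; the Cloud Storage URI and prompt are hypothetical.

```python
# Hypothetical sketch: passing an image into a conversation with Gemini
# via the Vertex AI SDK. The gs:// URI and prompt are illustrative.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

image = Part.from_uri("gs://my-bucket/damaged-parcel.jpg", mime_type="image/jpeg")
response = model.generate_content([image, "Does this parcel show shipping damage?"])
print(response.text)
```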

Meanwhile, the rumor mill suggests that Google will soon release real-time translation tools for voice. That’s particularly significant for contact center AI Agents.

In addition, Google promises to support more RAG document types, first-party and third-party (1P & 3P) extensions, and expanded availability across more regions.

Thankfully, as these capabilities come to market, Google’s Agent Console will continue to provide end-to-end lifecycle management for agents that cover the “entire development journey.”

According to Bellare, that includes defining the agent architecture, setting goals and instructions, running simulations, adding and curating examples, deploying agents, and monitoring and analyzing conversations for further tuning.
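
As a loose illustration of those lifecycle elements, a hypothetical agent definition might capture goals, instructions, and curated examples in a structure like the following; none of these field names reflect Vertex AI’s actual schema.

```python
# Hypothetical agent definition capturing the lifecycle elements Bellare
# lists. Field names are illustrative, not Vertex AI's actual schema.
agent_definition = {
    "name": "order-support-agent",
    "goal": "Help customers track, change, or return orders.",
    "instructions": [
        "Greet the customer and confirm their order ID.",
        "Escalate to a human agent if the customer is dissatisfied.",
    ],
    "examples": [
        {"user": "Where is my order?", "agent": "Could you share your order ID?"},
    ],
    "tools": ["order_lookup", "returns_api"],
}

print(agent_definition["name"])
```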
