
AWS, Google and Other Cloud Giants Go After AI Agents


At the AWS New York Summit this week, the cloud provider announced that AI agents built through Amazon Bedrock would gain enhanced memory retention and code interpretation capabilities. AWS’ vice president of AI and data, Swami Sivasubramanian, said this was part of a larger update to the overall GenAI stack AWS offers its enterprise customers.

“At the top layer, which includes generative AI-powered applications, we have Amazon Q, the most capable generative AI-powered assistant. The middle layer has Amazon Bedrock, which provides tools to easily and rapidly build, deploy, and scale generative AI applications leveraging LLMs and other foundation models (FMs).

“And at the bottom, there’s our resilient, cost-effective infrastructure layer, which includes chips purpose-built for AI, as well as Amazon SageMaker to build and run FMs,” he said.

Agents built using Bedrock with these improved capabilities are interesting because they can carry out complex, multistep tasks, such as automating the processing of insurance claims or booking business travel with prior knowledge of the traveller’s preferences.
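For a sense of what this looks like in practice, here is a minimal, illustrative boto3 sketch of creating a Bedrock agent with memory retention switched on. The agent name, role ARN, model ID and instruction are placeholders, and the parameter shapes are assumptions based on the bedrock-agent API as announced, so they may differ from the current SDK.

```python
# Illustrative sketch only: creating a Bedrock agent with memory retention.
# Names, ARNs and parameter values below are placeholders/assumptions.
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

response = bedrock_agent.create_agent(
    agentName="claims-processing-agent",  # hypothetical agent name
    foundationModel="anthropic.claude-3-sonnet-20240229-v1:0",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",  # placeholder ARN
    instruction=(
        "You process insurance claims end to end: validate the claim, "
        "check coverage, and draft a settlement summary."
    ),
    # Memory retention lets the agent recall summaries of prior sessions,
    # e.g. a user's stated preferences, across conversations.
    memoryConfiguration={
        "enabledMemoryTypes": ["SESSION_SUMMARY"],
        "storageDays": 30,
    },
)

print(response["agent"]["agentId"])
```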

As mentioned before, these agents also have code interpretation abilities, which means they can generate and run code when the LLM deems it necessary, “significantly expanding the use cases they can address, including complex tasks such as data analysis, visualisation, text processing, equation solving, and optimisation problems”, the company said of the update.
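Based on AWS’ description, the code interpreter is exposed as a built-in tool an agent can be given. The sketch below attaches it to an existing agent as an action group; the agent ID is a placeholder and the AMAZON.CodeInterpreter signature reflects the documented action-group API at launch, so treat the details as assumptions rather than a definitive recipe.

```python
# Illustrative sketch only: enabling the managed code interpreter on an
# existing Bedrock agent. Identifiers are placeholders.
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

bedrock_agent.create_agent_action_group(
    agentId="AGENT_ID",        # ID returned by create_agent
    agentVersion="DRAFT",
    actionGroupName="code-interpreter",
    # The parent signature tells Bedrock to use its managed code-interpretation
    # tool; the agent decides at runtime whether a step needs generated code.
    parentActionGroupSignature="AMAZON.CodeInterpreter",
    actionGroupState="ENABLED",
)
```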

Despite these updates, AWS still seems to be slightly behind: Azure announced similar enterprise agent-building capabilities in April, and GCP did so even earlier, though memory retention is not as seamless for agents built on GCP’s Vertex AI.

However, these rapid updates coming from the top three biggest cloud providers in the industry mean one thing: the next wave in the AI revolution is already underway.

Cloud > Generative AI > Building Agents

After the initial rush among businesses to move to cloud-based systems or risk going under, companies have quickly grown wise to how these systems can be leveraged to get the most out of their data.

Long story short, cloud providers identified these needs, deploying all-encompassing generative AI capabilities for their customers. As Sivasubramanian said, “They need a full range of capabilities to build and scale generative AI applications tailored to their business and use case.”

Now, the shift towards focusing on building AI agents and improving their overall capabilities signifies a larger need to seamlessly connect all of these services under an easy-to-use interface for employees.

The entire point of deploying generative AI in the enterprise is to streamline operations within a business. The focus on agents is particularly important because companies rely on the ability to customise and finetune their AI to fit specific, industry-relevant needs, especially as agents have the malleability to execute varied tasks depending on the ask.

During Google Cloud Next 2024, Google Cloud CEO Thomas Kurian said, “Many customers and partners are building increasingly sophisticated AI agents.” With AI agents becoming all the rage, improving their capabilities has become a priority, which explains the slew of updates to agent-building capabilities in the last year alone.

What Can They Improve On?

These updates point to exciting possibilities for what AI agents could do in the future. AI agents are widely seen as a step towards achieving AGI, and whether that arrives in the near future or is years away, agents still reflect some of the best of current AI innovation.

In its recent announcement, AWS specifies that an agent’s code interpretation capabilities are only used when the LLM deems it necessary. Though this limits how the feature can be used, since the decision is not up to the user, it also marks a form of semi-autonomy.
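To illustrate that semi-autonomy, here is a hedged sketch of invoking such an agent at runtime: the caller supplies only a natural-language request and, optionally, a memory ID, while the model itself decides whether any step warrants generated code. All identifiers are placeholders and the call shape is an assumption based on the bedrock-agent-runtime API.

```python
# Illustrative sketch only: invoking a Bedrock agent. Whether the code
# interpreter fires is decided by the model per request; passing a memoryId
# lets the agent draw on context from earlier sessions.
import boto3

runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

stream = runtime.invoke_agent(
    agentId="AGENT_ID",
    agentAliasId="AGENT_ALIAS_ID",
    sessionId="session-001",
    memoryId="user-42",  # placeholder; ties this session to stored memory
    inputText="Summarise the open insurance claims and chart them by month.",
)

# The response is an event stream; concatenate the returned text chunks.
answer = "".join(
    event["chunk"]["bytes"].decode("utf-8")
    for event in stream["completion"]
    if "chunk" in event
)
print(answer)
```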

However, fully autonomous AI agents remain a long way off. “I think it’s going to require not just one but two orders of magnitude more computation to train the models,” said Microsoft AI chief Mustafa Suleyman.

Nevertheless, the enterprise focus on seamless operations and better customer experiences means that agent capabilities will continue to expand, potentially allowing them to act and execute tasks autonomously to produce relevant and digestible outputs for the company’s employees.

As Sivasubramanian has said of AWS, “We’re energised by the progress our customers have already made in making generative AI a reality for their organisations and will continue to innovate on their behalf.”

This seems to be the sentiment across the board for both GenAI and cloud providers, as many industry stalwarts, including Andrew Ng, Andrej Karpathy and Vinod Khosla, have voiced a need for more education around, and funding for, agent research.
