
Analysis: Google, Amazon, Microsoft fight for AI dominance


The frontline in the battle for market dominance between Google Cloud Platform (GCP), Amazon Web Services (AWS) and Microsoft Azure has largely shifted from the cloud to generative AI (GenAI), as each seeks to deliver new large language models (LLMs) for a growing range of use cases and devices.

Research company Omdia names GCP as the leader in cutting-edge AI technology while favouring AWS for price and operational efficiency.

Meanwhile Microsoft, which established an early leadership position through its multibillion-dollar investment in AI pioneer OpenAI, is pivoting towards custom chips, according to the research company.

During Google’s recent I/O event, CEO Sundar Pichai noted his company has invested in AI for more than a decade, which Omdia stated gives it an advantage when it comes to fundamental research.

It is worth noting that GenAI is a subset of AI. Radio Free Mobile’s Dr Richard Windsor defines GenAI “as AI that is capable of generating content such as words, images and now, video”.

Numbers
Pinning down the AI-related revenue of each large cloud provider isn’t straightforward, but Amazon CEO Andy Jassy stated on the company’s Q1 2024 earnings call that AWS has “accumulated a multi-billion-dollar revenue run rate already”.

AWS’ revenue increased by 17 per cent year-on-year to $25 billion. CFO Brian Olsavsky noted the unit is seeing strong demand in both GenAI and non-GenAI workloads. He stated customers are signing up for longer deals and making bigger commitments.

Jassy noted cloud cost optimisation programmes are largely complete, as enterprises turn their attention to new initiatives such as running GenAI services on AWS’ infrastructure.

Google Cloud revenue jumped 28 per cent to $9.5 billion in Q1, fuelled by demand for GenAI tools that rely on cloud infrastructure, services and apps. Pichai stated the company has “clear paths to AI monetisation through ads and cloud, as well as subscriptions”.

Microsoft CEO Satya Nadella noted Q3 cloud revenue, which includes Azure, surpassed $35 billion, up 23 per cent. He stated the number of Azure deals over $100 million increased by more than 80 per cent, while the number of $10 million-plus deals more than doubled.

Synergy Research Group credited the impact of GenAI for boosting enterprise spending on cloud infrastructure services to more than $76 billion in Q1, up 21 per cent from a year ago.

Alex Smith, VP at Canalys, noted cloud service providers are embedding AI in their products and services to make integration “not something novel, but the norm”.

Handicapping the field
AvidThink founder and principal Roy Chua told Mobile World Live it is too early to declare a GenAI winner among the three large hyperscalers. He noted while AWS, GCP and Microsoft Azure are driving hard on the AI front “each has different yet overlapping strategies”.

He explained there are many similarities in how they approach GenAI, such as embracing open-source models, providing model gardens and libraries for both open and closed models, and pre-training their own foundation models (FMs) at different scales for a variety of use cases.

All three also provide AI development platforms and pipelines for training, fine-tuning and building GenAI and AI-powered applications.

“They are also tying in their cloud assets and data in their cloud storage and building and designing better silicon for performance and efficiency,” Chua said. “But the difference shows up in the relationships they’ve forged and the scale of investment in FMs, silicon and pipelines.”

Chua noted that in addition to its internal AI capabilities, Google builds its own FMs in the form of the Gemini family and continues to invest in research units such as Google DeepMind.

He stated while Microsoft has bet heavily on OpenAI, the tech giant innovates in parallel on its own, including the hiring of Inflection AI and Google DeepMind co-founder Mustafa Suleyman to lead its AI consumer group.

Chua said Microsoft employs a hedging strategy with other foundation model providers such as Mistral, and with open-source models such as Llama from Meta Platforms and Falcon from the Abu Dhabi-based Technology Innovation Institute.

“They’ve also demonstrated and open-sourced some very impressive models recently like Phi-3,” he stated. “Phi-3 mini in particular has remarkable performance despite its relatively small size”.

Along the same lines, he noted AWS has made a major investment in Anthropic, and many of its premier use cases involve Anthropic’s Claude models.

At the same time, he explained AWS’ Bedrock service offers multiple proprietary and open-source models and a model-agnostic approach, focusing on the workflow, platform and data capabilities of the AWS cloud for training, fine-tuning and model use.

“AWS is also building augmentation capabilities, providing function calling and guardrail mechanisms around its AI pipelines,” Chua noted.
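That model-agnostic pitch is easy to picture in code. Below is a minimal sketch, assuming Python with boto3 and Bedrock model access already enabled in the account and region; the model IDs and prompt are examples, not a definitive implementation:

```python
import boto3

# Sketch of Bedrock's model-agnostic approach: the same Converse API call
# works across providers, with only the modelId string changing.
# Model IDs are illustrative and must be enabled in your account/region.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

messages = [{"role": "user", "content": [{"text": "Summarise GenAI in one sentence."}]}]

for model_id in (
    "anthropic.claude-3-sonnet-20240229-v1:0",  # proprietary (Anthropic)
    "meta.llama3-8b-instruct-v1:0",             # open source (Meta)
):
    # Guardrails created in Bedrock could be attached here via the
    # optional guardrailConfig parameter.
    response = client.converse(modelId=model_id, messages=messages)
    print(model_id, "->", response["output"]["message"]["content"][0]["text"])
```

Swapping providers means changing a string rather than an SDK, which is the workflow argument Chua describes.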

On top of that, he stated AWS highlights improved cost, performance and power efficiency for its custom Trainium and Inferentia chips for AI workloads.

Chua told MWL there are rumours that AWS is training a new FM internally codenamed Olympus.

Trending now
Omdia stated hyperscalers are crucial providers of computing services to most of the AI industry and are most likely the first point of contact for those establishing a model inference infrastructure to serve customers.

Jassy stated while the current focus is on training AI models, once those models move into production, spending will shift to inference.

“You spend much more in inference than you do in training because you train only periodically, but you’re spinning out predictions and inferences all the time,” he stated. “We also see quite a few companies that are building their generative AI applications to do inference on top of AWS.”
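Jassy’s point lends itself to back-of-envelope arithmetic. The sketch below uses purely hypothetical figures, not AWS pricing, to show how always-on inference traffic can dwarf periodic training spend:

```python
# Hypothetical figures, not AWS pricing: periodic training vs always-on inference.
training_runs_per_year = 4
cost_per_training_run = 1_000_000   # dollars per run (assumed)
requests_per_day = 5_000_000        # served predictions (assumed)
cost_per_request = 0.01             # dollars per inference (assumed)

annual_training = training_runs_per_year * cost_per_training_run
annual_inference = requests_per_day * 365 * cost_per_request

print(f"Training:  ${annual_training:>12,.0f} per year")   # $   4,000,000
print(f"Inference: ${annual_inference:>12,.0f} per year")  # $  18,250,000
```

Even at a modest assumed cost per request, continuous serving overtakes a year’s worth of training runs within a few months.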

While Omdia predicts inference will account for a growing share of overall AI computing demand, it stated both Google and Microsoft are considerably behind AWS in CPU inference.

There’s also a move afoot to transition from monolithic LLMs to custom models.

Gartner forecast that more than 50 per cent of the GenAI models enterprises use will be specific to either an industry or business function by 2027, up from approximately 1 per cent in 2023.

Bubble to burst?
Radio Free Mobile’s Windsor stated in a research note that causality is by far the biggest limitation of GenAI, “as these models have no causal understanding of what it is that they are doing”.

He noted this lack of understanding leads to LLMs inventing facts, making simple mistakes and remaining “unaware that they are doing so”.

“There is little doubt that valuations and expectations are too high,” Windsor explained.

He stated the flood of capital into GenAI is driven by intense public excitement, but the vast majority of LLMs are roughly equivalent in terms of performance, “meaning competition and falling prices”.

All of which, he predicts, will “trigger the reset to reality”.
