Google has introduced a new capability in search it calls “AI Mode.”
Still experimental, AI Mode is AI Overviews on steroids. It performs more advanced reasoning and thinking, and it has multimodal capabilities, meaning it can understand other types of content besides text.
Users can ask follow-up questions in AI Mode, the company said.
“AI Mode is particularly helpful for questions that need further exploration, comparisons and reasoning,” Robby Stein, vice president of product at Google, wrote in a Wednesday (March 5) blog post.
“You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more,” he added.
AI Mode will appear as a new tab to the left of the categories users typically see at the top of Google search results: All, News, Images, Videos, Short Videos, Shopping and others.
Using AI Mode essentially places the user in an AI chatbot-style conversation, augmented with real-time search results, shopping data and other sources of information.
It uses a “query fan-out” technique that runs multiple related searches “concurrently across subtopics and multiple data sources and then brings those results together to provide an easy-to-understand response,” Stein said. “This approach helps you access more breadth and depth of information than a traditional search on Google.”
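Google has not published implementation details, but the fan-out idea can be illustrated as issuing several subtopic searches concurrently and then merging the results before a model writes the final summary. The Python sketch below is purely illustrative; `search_subtopic` and the subtopic list are hypothetical stand-ins, not Google’s actual system.

```python
import asyncio

# Hypothetical search backend; in Google's system this would be the live
# search index, the Shopping Graph and other data sources.
async def search_subtopic(subtopic: str) -> list[str]:
    await asyncio.sleep(0.1)  # stand-in for network latency
    return [f"result for {subtopic!r}"]

async def query_fan_out(query: str, subtopics: list[str]) -> str:
    # Fan out: issue one search per subtopic concurrently.
    results = await asyncio.gather(*(search_subtopic(s) for s in subtopics))
    # Fan in: merge the per-subtopic hits into one context that a model
    # could then summarize into a single easy-to-understand response.
    merged = [hit for hits in results for hit in hits]
    return f"Synthesized answer to {query!r} from {len(merged)} sources"

if __name__ == "__main__":
    answer = asyncio.run(
        query_fan_out(
            "best mirrorless camera for travel",
            ["image quality", "weight", "battery life", "price"],
        )
    )
    print(answer)
```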
Google said the first users for AI Mode will be Google One AI Premium subscribers. Others can join the waitlist.
Read more: Google Unveils Expanded AI Overviews and Experimental AI Mode
Amazon to Launch Cheaper Reasoning Model
Amazon is reportedly working on a reasoning AI model that would compete with those from OpenAI, Google and Anthropic.
The model is expected to be cheaper to deploy than competing models, according to a Tuesday (March 4) Business Insider report, which would help businesses preparing to scale their AI agents. Amazon also reportedly aims for the model to rank in the top five on benchmark tests.
The model would fall under Amazon’s Nova family of foundation models. The company wants it to be a hybrid model, one that answers quickly when a task calls for it but switches to slower, step-by-step reasoning for other tasks.
Other models in the Nova family include Nova Micro, Nova Lite and Nova Pro, which the company says are at least 75% cheaper than third-party models in their class offered on Amazon’s Bedrock AI development platform.
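For context on how businesses use those models, a Nova model on Bedrock is typically called through the AWS SDK. The snippet below is a minimal sketch using the boto3 Converse API; the model ID and region are assumptions and may differ for a given account.

```python
import boto3

# Minimal sketch of calling a Nova model on Amazon Bedrock.
# The model ID and region are assumptions; check the Bedrock console
# for the identifiers actually available to your account.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="amazon.nova-lite-v1:0",  # assumed ID for Nova Lite
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize our Q1 sales report in three bullet points."}],
        }
    ],
    inferenceConfig={"maxTokens": 300},
)

print(response["output"]["message"]["content"][0]["text"])
```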
Reasoning models have emerged as the next wave of AI models that can practically extend AI scaling laws. Previously, the more compute and data a large language model was given, the more capable it became.
But the rate of advancement from raw scaling has slowed. One answer to this slowdown is reasoning models, which review their own answers and use chain-of-thought techniques, hence their current popularity.
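In practical terms, “reviewing its own answer” amounts to spending extra inference-time compute on a draft-then-check loop. The sketch below is a generic illustration, not any vendor’s implementation; `call_model` is a hypothetical stand-in for whichever LLM client is used.

```python
# Illustrative only: a generic two-pass "think, then check your own work" loop.
# `call_model` is a hypothetical stand-in for any LLM API call.
def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your preferred LLM client here")

def reason_with_self_review(question: str) -> str:
    # First pass: ask the model to lay out its chain of thought step by step.
    draft = call_model(
        f"Question: {question}\nThink step by step, then give a final answer."
    )
    # Second pass: ask the model to go over its own reasoning and fix mistakes,
    # trading extra inference-time compute for a more reliable answer.
    reviewed = call_model(
        f"Here is a draft answer:\n{draft}\n"
        "Check each step for errors and return a corrected final answer."
    )
    return reviewed
```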
OpenAI’s o1 and o3 models are reasoning models, as are Google’s Gemini 2.0 Flash Thinking, DeepSeek’s R1 and Anthropic’s Claude 3.7 Sonnet, which is a hybrid model like Amazon’s planned one. DeepSeek is reportedly rushing its next reasoning model, R2, to market, according to Reuters.
See also: Amazon Launches Nova AI Models for Business
Perplexity Develops ‘Unbiased’ DeepSeek Model
DeepSeek shook Silicon Valley with an inexpensive foundation AI model that used only 2,000 of Nvidia’s slower chips yet performed on par with the best models in its class.
However, anyone using DeepSeek’s V3 and R1 models gets censored answers when asking about topics sensitive to the Chinese government, such as the status of Taiwan as an independent country.
“A major issue limiting R1’s utility is its refusal to respond to sensitive topics, especially those that have been censored by the Chinese Communist Party,” Perplexity AI wrote in a February blog post.
Perplexity is now open-sourcing a version of DeepSeek’s R1 reasoning model that it describes as “unbiased, accurate” and providing “factual information.”
The startup calls this version “R1-1776,” an apparent reference to the year the 13 original American colonies declared their independence from Great Britain by adopting the Declaration of Independence.
Perplexity retrained DeepSeek’s R1 on a dataset covering 300 topics known to be censored in China, paired with factual answers, and said it made sure the model retained its top-notch performance.
Users can download the model weights from Perplexity’s Hugging Face repository or access the model through the company’s Sonar API.
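As a rough sketch of those two access paths, the snippet below assumes the Hugging Face repository name `perplexity-ai/r1-1776` and the API model string `r1-1776` from Perplexity’s announcement; both should be checked against Perplexity’s current documentation.

```python
import os

import requests

# Option 1: download the open-sourced weights from Hugging Face.
# Repository name is an assumption based on Perplexity's announcement; the
# full checkpoint is very large, so it is usually fetched with huggingface_hub
# on a machine with sufficient storage rather than loaded on a laptop.
# from huggingface_hub import snapshot_download
# snapshot_download("perplexity-ai/r1-1776")

# Option 2: call the hosted model through Perplexity's OpenAI-compatible API.
# The model name "r1-1776" is an assumption; check Perplexity's API docs.
resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json={
        "model": "r1-1776",
        "messages": [
            {"role": "user", "content": "What is the political status of Taiwan?"}
        ],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```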