Google is following the consumer launch of 2.0 Flash with new preview models that will be available to test in the Gemini app: 2.0 Pro Experimental and 2.0 Flash Thinking Experimental.
Gemini 2.0 Pro Experimental
In December, Google started testing 2.0 Experimental Advanced (gemini-exp-1206) alongside the Flash preview. Many assumed it would launch in the Pro family, and Google today released an updated model with Gemini 2.0 Pro Experimental (gemini-2.0-pro-exp-02-05).
Google says 2.0 Pro Experimental is its “best model yet for coding performance and complex prompts.” It also has “better understanding and reasoning of world knowledge” than any model the company has released so far.
For the Gemini API, there’s a 2 million token context window that “enables it to comprehensively analyze and understand vast amounts of information.” In Gemini Advanced, meanwhile, the context window remains at 1 million tokens, as before.
Gemini Advanced subscribers ($19.99 per month) will be able to preview 2.0 Pro Experimental on the web and mobile app. It’s rolling out to the Gemini app starting today (it’s already live on the web), and it’s also available to developers in Google AI Studio and Vertex AI.
Gemini 2.0 Flash Thinking Experimental
Google debuted its first thinking model in December, and updated it last month in AI Studio. Gemini 2.0 Flash Thinking Experimental will be available to test in the Gemini app for free.
Featuring the speed and performance of 2.0 Flash, Google says this “model is trained to break down prompts into a series of steps to strengthen its reasoning capabilities and deliver better responses.” You can see that reasoning unfold in real time in the UI:
2.0 Flash Thinking Experimental shows its thought process so you can see why it responded in a certain way, what its assumptions were, and trace the model’s line of reasoning.
Meanwhile, Google is also making available a second version, “2.0 Flash Thinking Experimental with apps,” that can reason across and “interact with apps like YouTube, Search and Google Maps.”
Google explains: “These connected apps already make the Gemini app a uniquely helpful AI-powered assistant, and we’re exploring how new reasoning capabilities can combine with your apps to help you do even more.”
This is also rolling out now to all Gemini web and mobile app users.
Other announcements today:
Gemini 2.0 Flash has hit general availability (GA) for developers building apps and features with Google’s API. Pricing details are available here. Image and audio output capabilities, as well as the Multimodal Live API, will hit GA “in the coming months.”
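For developers, a minimal sketch of what calling the now-GA model could look like with the `google-generativeai` Python SDK. The model identifier `gemini-2.0-flash`, the `GOOGLE_API_KEY` environment variable, and the `build_prompt` helper are all assumptions for illustration, not details from this article:

```python
import os

# The SDK import is guarded so this sketch degrades gracefully where the
# package (pip install google-generativeai) is not installed.
try:
    import google.generativeai as genai
except ImportError:
    genai = None

def build_prompt(question: str) -> str:
    """Illustrative helper: wrap a user question into a plain-text prompt."""
    return f"Answer concisely: {question}"

# Requires an API key from Google AI Studio; skipped if none is set.
if genai is not None and os.environ.get("GOOGLE_API_KEY"):
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-2.0-flash")  # assumed model id
    response = model.generate_content(build_prompt("What is Gemini 2.0 Flash?"))
    print(response.text)
```

The same model is also reachable through Vertex AI for teams already on Google Cloud; the SDK path above is just the lighter-weight AI Studio route.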
The company also shared that the 2.0 family was “built with new reinforcement learning techniques that use Gemini itself to critique its responses.”
This resulted in more accurate and targeted feedback, in turn improving the model’s ability to handle sensitive prompts.
Google today announced the cost-efficient Gemini 2.0 Flash-Lite model for developers. “Optimized for large scale text output use cases,” it is better than 1.5 Flash across a majority of benchmarks, while maintaining the speed and cost that devs have come to appreciate. It’s available in public preview through Google AI Studio and Vertex AI.
Like 2.0 Flash, it has a 1 million token context window and multimodal input. For example, it can generate a relevant one-line caption for around 40,000 unique photos, costing less than a dollar in Google AI Studio’s paid tier.
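As a rough sanity check on that 40,000-photo figure, here is a back-of-envelope estimate. Every number in it is an assumption on my part (a fixed per-image token cost, ~10-token captions, and illustrative per-million-token prices), not a figure from the article:

```python
# Back-of-envelope cost estimate for captioning 40,000 photos with a
# Flash-Lite-class model. All token counts and prices are assumptions.
PHOTOS = 40_000
TOKENS_PER_IMAGE = 258        # assumed fixed token cost per image input
TOKENS_PER_CAPTION = 10       # assumed short one-line caption
INPUT_PRICE_PER_M = 0.075     # assumed $ per 1M input tokens
OUTPUT_PRICE_PER_M = 0.30     # assumed $ per 1M output tokens

input_cost = PHOTOS * TOKENS_PER_IMAGE / 1_000_000 * INPUT_PRICE_PER_M
output_cost = PHOTOS * TOKENS_PER_CAPTION / 1_000_000 * OUTPUT_PRICE_PER_M
total = input_cost + output_cost
print(f"estimated total: ${total:.2f}")
```

Under these assumed numbers the input side dominates (images cost far more tokens than one-line captions), and the total indeed lands under a dollar, consistent with Google’s claim.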