Just last week, Google rolled out Gemini 2.0 Flash to all Gemini app users on mobile and desktop. Building on that momentum, Google is now expanding access and adding to its Gemini model offerings.
Gemini 2.0 Pro is here for Advanced users
On Wednesday, Google announced it is finally releasing an experimental version of Gemini 2.0 Pro. The model, which has Google’s largest context window yet at 2 million tokens, is the company’s most capable model for coding and complex prompts.
Also: You could win $1 million by asking Perplexity a question during the Super Bowl
The large context window allows the model to analyze and reference vast amounts of information at once, improving overall performance and assistance by drawing on that additional context. It also enables the model to call tools such as code execution, making it suitable for a wider variety of tasks.
Furthermore, Gemini 2.0 Pro, the successor to 1.5 Pro unveiled a year ago, outperformed the rest of Google’s Gemini models on a series of benchmarks, including MMLU-Pro, which tests general capabilities; GPQA (diamond), which tests reasoning; LiveCodeBench (v5), which tests Python code generation; and MATH, which tests challenging problems in algebra, geometry, pre-calculus, and more.
Also: Google Gemini’s lock screen update is a game-changer for my phone
This model is available as an experimental offering for Gemini Advanced users via the model drop-down menu. To be a Gemini Advanced user, you need to subscribe to the Google One AI Premium plan, which costs $20 per month. Google hasn’t forgotten its developer base either, also making the model available via Google AI Studio and Vertex AI.
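For developers trying the experimental model through the API, a request looks much like any other Gemini call. Below is a minimal sketch using the google-generativeai Python SDK; the model ID shown (gemini-2.0-pro-exp-02-05) and the prompt are assumptions for illustration, and experimental model names can change.

```python
# Minimal sketch: calling an experimental Gemini 2.0 Pro model via the
# Gemini API using the google-generativeai Python SDK.
# The model ID below is an assumption for illustration; experimental
# model names may differ or change over time.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # API key from Google AI Studio

model = genai.GenerativeModel("gemini-2.0-pro-exp-02-05")

# A coding-oriented prompt, since Google positions 2.0 Pro for coding
# and complex prompts.
response = model.generate_content(
    "Write a Python function that merges two sorted lists in O(n) time."
)

print(response.text)
```

The same pattern applies in Vertex AI, with authentication and model naming handled through Google Cloud instead of an AI Studio API key.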
Introducing Gemini 2.0 Flash-Lite
Beyond that, Gemini 2.0 Flash, Google’s model built for fast responses and strong performance on high-volume, high-frequency tasks at scale, is becoming available in more Google products. In addition to the Gemini app access announced last week, it can now be used via the Gemini API in Google AI Studio and Vertex AI.
Also: Perplexity lets you try DeepSeek R1 – without the security risk
Lastly, following the positive feedback Google received on the price and speed of 1.5 Flash, the company has introduced a new model, 2.0 Flash-Lite. According to Google, this model delivers better quality than 1.5 Flash at the same speed and cost, while keeping 2.0 Flash’s 1 million token context window and multimodal input. It is available in Google AI Studio and Vertex AI in public preview.
Also: Gemini’s Deep Research browses the web for you – try the Android app now for free
Google addressed safety concerns, reassuring the public that the models were built using approaches designed to enable safe usage, including new reinforcement learning techniques. The company also shared that the models went through automated red-teaming to assess security risks. The announcement follows Google’s publication of its Responsible AI: Our 2024 report the day before.