Saturday, November 9, 2024

Google DeepMind launches 2B parameter Gemma 2 model


Google DeepMind announced today the release of the 2 billion (2B) parameter version of Gemma 2, the second generation of its Gemma AI models.

First launched in February this year, Gemma is a family of lightweight, text-to-text open models designed for developers and researchers — and built on the technology that powers Google Gemini.

DeepMind released Gemma 2 in June in two sizes: 9 billion (9B) and 27 billion (27B) parameters.

The new 2B model is trained by distilling knowledge from larger models and, according to DeepMind, delivers results well beyond its size. The company also claims that it outperforms all GPT-3.5 models on the LMSYS Chatbot Arena leaderboard.
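DeepMind does not spell out its training recipe in the announcement, but the core idea of distillation is straightforward: a small student model is trained to match the softened output distribution of a larger teacher rather than only the hard labels. The snippet below is a minimal, hypothetical PyTorch sketch of that loss; the tensor shapes, temperature, and loss form are illustrative assumptions, not DeepMind's actual setup.

```python
# Minimal knowledge-distillation sketch (illustrative only, not DeepMind's recipe).
# The student is trained to match the teacher's softened token distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student next-token distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitude stays comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2

# Random logits stand in for real teacher/student forward passes: (batch, seq_len, vocab).
teacher_logits = torch.randn(2, 16, 1000)
student_logits = torch.randn(2, 16, 1000, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```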

Gemma 2 2B can run on a wide range of hardware, from laptops and edge devices to cloud deployments with Vertex AI and Google Kubernetes Engine (GKE). It is also small enough to run on the free tier of NVIDIA T4 GPUs.

ShieldGemma and Gemma Scope

DeepMind is also introducing two more additions to the model family: ShieldGemma and Gemma Scope.

ShieldGemma is a series of safety classifiers designed to detect and moderate harmful content in AI model inputs and outputs. It comes in various sizes and targets hate speech, harassment, sexually explicit content, and dangerous content.
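In practice, a ShieldGemma checkpoint is used like a regular language model that is asked whether a piece of text violates a given policy, with the answer read off the probabilities of a "Yes" or "No" continuation. The sketch below shows that pattern; the model ID, prompt wording, and token handling are assumptions based on common usage rather than the official template, so consult the model documentation before relying on the scores.

```python
# Hedged sketch of using a ShieldGemma checkpoint as a safety classifier.
# Model ID and prompt wording are assumptions; see the official model card
# for the exact policy template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/shieldgemma-2b"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

user_text = "How do I pick a lock?"
prompt = (
    "You are a policy expert trying to determine whether a user prompt "
    "violates the defined safety policies.\n\n"
    f"Human Question: {user_text}\n\n"
    "Does the question violate the policy on Dangerous Content? "
    "Your answer must start with 'Yes' or 'No'.\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]

# Compare the probability mass the model puts on "Yes" vs "No".
yes_id = tokenizer.convert_tokens_to_ids("Yes")
no_id = tokenizer.convert_tokens_to_ids("No")
probs = torch.softmax(next_token_logits[[yes_id, no_id]], dim=0)
print(f"P(violation) ~= {probs[0].item():.3f}")
```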

Gemma Scope focuses on transparency. The tool comprises a collection of sparse autoencoders (SAEs): specialised neural networks that unpack the complex inner workings of the Gemma 2 models and present how they process information and make decisions in an easier-to-understand form.

There are over 400 freely available SAEs covering all layers of Gemma 2 2B and 9B. The aim is to enable researchers to create more transparent and reliable AI systems.
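At its core, a sparse autoencoder maps a model's internal activations into a much wider, mostly-zero feature space and then reconstructs them, so that individual features become easier to inspect. The sketch below is a conceptual illustration of that encode/decode step, not the Gemma Scope code or checkpoints; the dimensions and the plain ReLU activation are assumptions chosen for clarity.

```python
# Conceptual sparse-autoencoder sketch (not the actual Gemma Scope implementation).
# Activations from one layer are encoded into a wide, sparse feature vector and
# decoded back; the sparse features are what interpretability researchers inspect.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 2304, d_features: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        # ReLU keeps only a small set of positive features active per token.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

sae = SparseAutoencoder()
residual = torch.randn(1, 8, 2304)   # stand-in activations: (batch, tokens, d_model)
features, recon = sae(residual)
print(features.shape, (features > 0).float().mean().item())  # feature width and sparsity
```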

Starting today, developers and researchers can download Gemma 2 2B from Kaggle, Hugging Face, and Vertex AI Model Garden, or try it out in Google AI Studio. ShieldGemma and Gemma Scope are also available to download.
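For readers who want to try the model locally, a typical route is to load the instruction-tuned checkpoint through Hugging Face Transformers after accepting the Gemma license on the model page. The snippet below is a minimal sketch; the model ID and generation settings reflect common conventions rather than anything prescribed in the announcement.

```python
# Minimal sketch of running Gemma 2 2B locally via Hugging Face Transformers.
# Assumes the Gemma license has been accepted on the Hugging Face model page
# and you are logged in (e.g. via `huggingface-cli login`).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # instruction-tuned 2B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarise what Gemma 2 2B is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```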
