
Gemma 2 gets three new products: a 2B parameter model, ShieldGemma, and Gemma Scope


Google has announced three new products that are part of the Gemma 2 family, a series of open AI models that was launched in June. The new offerings include Gemma 2 2B, ShieldGemma, and Gemma Scope.

Gemma 2 2B is a 2 billion parameter option, joining the existing 27 billion and 9 billion parameter sizes. According to Google, this new size balances performance with efficiency, and can outperform other models in its class, including all GPT-3.5 models.

It’s optimized with the NVIDIA TensorRT-LLM library and is available as an NVIDIA NIM, making it well suited for a variety of deployment types, such as data centers, cloud, local workstations, PCs, and edge devices. Gemma 2 2B also integrates with Keras, JAX, Hugging Face, NVIDIA NeMo, Ollama, and Gemma.cpp, and will soon integrate with MediaPipe as well.

And due to its small size, it can run on the free tier of T4 GPUs in Google Colab, which Google believes will make “experimentation and development easier than ever.”
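To illustrate, here is a minimal sketch of loading the 2B model on a free-tier Colab T4 through the Hugging Face integration mentioned above. The checkpoint id (google/gemma-2-2b-it) and the generation settings are illustrative assumptions, not details from Google’s announcement.

```python
# Minimal sketch: running Gemma 2 2B on a free-tier Colab T4 GPU via the
# Hugging Face transformers integration mentioned above. The checkpoint id
# and generation settings below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # assumed instruction-tuned 2B checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision fits comfortably in a T4's 16 GB
    device_map="auto",          # place the model on the Colab GPU
)

# Build a prompt using the chat template expected by the instruction-tuned model.
messages = [{"role": "user", "content": "What are small language models good for?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```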

It’s available now via Kaggle, Hugging Face, or Vertex AI Model Garden, and can be used within Google AI Studio.

Next, ShieldGemma is a series of safety classifiers for detecting harmful content in model inputs and outputs. It specifically targets hate speech, harassment, sexually explicit content, and dangerous content. The ShieldGemma models are open and designed to enable collaboration and transparency in the AI development community, and they add to the existing suite of safety classifiers in the company’s Responsible AI Toolkit.

It’s available in several model sizes to meet different needs. For example, the 2B model is ideal for online classification, while the 9B and 27B models can provide higher performance for offline scenarios where latency isn’t a concern. According to Google, all model sizes use NVIDIA speed optimizations to improve performance.
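As a rough illustration of how such a classifier might be called, the sketch below treats a ShieldGemma checkpoint as a causal language model and compares the scores of “Yes” and “No” answer tokens. The model id (google/shieldgemma-2b), the policy wording, and the scoring scheme are assumptions here; the official model card should be consulted for the exact prompt format Google expects.

```python
# Minimal sketch: scoring a user prompt with a ShieldGemma checkpoint.
# The model id, policy wording, and Yes/No scoring scheme are assumptions
# for illustration only; see the model card for the documented prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/shieldgemma-2b"  # assumed 2B safety-classifier checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

user_prompt = "How do I pick a lock?"
prompt = (
    "You are a policy expert deciding whether a user prompt violates the "
    "defined safety policies.\n\n"
    f"Human question: {user_prompt}\n\n"
    "Does the question violate the policy on dangerous content? "
    "Answer Yes or No.\nAnswer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # logits for the next token

# Compare the "Yes" and "No" token scores to estimate a violation probability.
yes_id = tokenizer.convert_tokens_to_ids("Yes")
no_id = tokenizer.convert_tokens_to_ids("No")
probs = torch.softmax(logits[[yes_id, no_id]], dim=0)
print(f"P(violation) ~ {probs[0].item():.2f}")
```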

And finally, Gemma Scope provides greater transparency into how Gemma 2 models come to their decisions, enabling researchers to understand how Gemma 2 identifies patterns, processes information, and makes predictions. It uses sparse autoencoders (SAEs) to look at specific points within the model and “unpack the dense, complex information processed by Gemma 2, expanding it into a form that’s easier to analyze and understand,” Google explained in a blog post.
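To give a sense of the underlying technique, here is a minimal sketch of a sparse autoencoder in PyTorch that reconstructs captured activations through a wide, sparsely active hidden layer. All sizes, the sparsity penalty, and the training loop are illustrative assumptions, not specifics of Gemma Scope.

```python
# Minimal sketch of the sparse autoencoder (SAE) idea behind Gemma Scope:
# reconstruct a model's internal activations through a wide, sparsely active
# hidden layer so that individual features become easier to inspect.
# Sizes, the L1 penalty, and the training loop are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)   # project activations up
        self.decoder = nn.Linear(d_hidden, d_model)   # reconstruct the original

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse, non-negative features
        reconstruction = self.decoder(features)
        return reconstruction, features

# Train on activations captured at one point in the model (random stand-ins here).
d_model, d_hidden = 2304, 16384            # hypothetical residual width and SAE width
sae = SparseAutoencoder(d_model, d_hidden)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_coeff = 1e-3                            # strength of the sparsity penalty

activations = torch.randn(256, d_model)    # placeholder for captured Gemma 2 activations
for _ in range(100):
    reconstruction, features = sae(activations)
    loss = ((reconstruction - activations) ** 2).mean() + l1_coeff * features.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```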

“These releases represent our ongoing commitment to providing the AI community with the tools and resources needed to build a future where AI benefits everyone. We believe that open access, transparency, and collaboration are essential for developing safe and beneficial AI,” Google wrote.


You may also like…

Google releases Gemma, a new AI model designed with AI researchers in mind

RAG is the next exciting advancement for LLMs


