The Gemma 4 family of multimodal models by Google DeepMind is out on Hugging Face, with support for your favorite agents, inference engines, and fine-tuning libraries 🤗
These models are the real deal: truly open with Apache 2.0 licenses, high quality with Pareto-frontier arena scores, multimodal including audio, and sizes you can use everywhere, including on-device. Gemma 4 builds on advances from previous families and makes them click together. In our tests with pre-release checkpoints we have been impressed by their capabilities, to the extent that we struggled to find good fine-tuning examples because they are so good out of the box.
We collaborated with Google and the community to make them available everywhere: transformers, llama.cpp, MLX, WebGPU, Rust; you name it. This blog post will show you how to build with your favorite tools, so let us know what you think!
Similar to Gemma-3n, Gemma 4 supports image, text, and audio inputs and generates text responses. The text decoder is based on the Gemma architecture with support for long context windows. The image encoder is similar to the one from Gemma 3, but with two crucial improvements: variable aspect ratios, and a configurable number of image input tokens so you can find your sweet spot between speed, memory, and quality. All models support image (or video) and text inputs, while the small variants (E2B and E4B) support audio as well.
Gemma 4 leverages several architecture components used in previous Gemma versions and other open models, and leaves out complex or inconclusive features such as AltUp. The result is a mix designed to be highly compatible across libraries and devices, to efficiently support long-context and agentic use cases, and to be ideal for quantization.
With this feature mix (and the undisclosed training data and recipe), the 31B dense model achieves an estimated LMArena score (text only) of 1452, while the 26B MoE reaches 1441 with just 4B active parameters 🤯. To put this in context, these scores are roughly on par with the recent GLM-5 or Kimi K2.5, but with ~30 times fewer parameters. As we'll see, multimodal performance is on par with text generation, at least in our informal, subjective tests.
One of the most distinctive features in smaller Gemma 4 models is Per-Layer Embeddings (PLE), introduced previously in Gemma-3n. In a standard transformer, each token gets a single embedding vector at input, and that same initial representation is what the residual stream builds on across all layers, forcing the embedding to frontload everything the model might need. PLE adds a parallel, lower-dimensional conditioning pathway alongside the main residual stream. For each token, it produces a small dedicated vector for every layer by combining two signals: a token-identity component (from an embedding lookup) and a context-aware component (from a learned projection of the main embeddings). Each decoder layer then uses its corresponding vector to modulate the hidden states via a lightweight residual block after attention and feed-forward. This gives each layer its own channel to receive token-specific information only when it becomes relevant, rather than requiring everything to be packed into a single upfront embedding. Because the PLE dimension is much smaller than the main hidden size, this adds meaningful per-layer specialization at modest parameter cost. For multimodal inputs (images, audio, video), PLE is computed before soft tokens are merged into the embedding sequence, since PLE relies on token IDs that are lost once multimodal features replace the placeholders. Multimodal positions use the pad token ID, effectively receiving neutral per-layer signals.
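To make the mechanism concrete, here is a minimal, hypothetical PyTorch sketch of the two PLE pieces described above. All dimensions, module names, and gating details are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes only (not the real Gemma 4 configuration).
VOCAB, HIDDEN, PLE_DIM, NUM_LAYERS = 262_144, 2048, 256, 30

class PerLayerEmbeddings(nn.Module):
    """Produces one small conditioning vector per token, per decoder layer."""
    def __init__(self):
        super().__init__()
        # Token-identity component: a lookup yielding NUM_LAYERS small vectors per token.
        self.per_layer_table = nn.Embedding(VOCAB, NUM_LAYERS * PLE_DIM)
        # Context-aware component: a learned projection of the main token embeddings.
        self.from_embeds = nn.Linear(HIDDEN, NUM_LAYERS * PLE_DIM, bias=False)

    def forward(self, input_ids, token_embeds):
        # Both components end up with shape [batch, seq, NUM_LAYERS, PLE_DIM].
        identity = self.per_layer_table(input_ids).unflatten(-1, (NUM_LAYERS, PLE_DIM))
        context = self.from_embeds(token_embeds).unflatten(-1, (NUM_LAYERS, PLE_DIM))
        return identity + context

class PLEResidualBlock(nn.Module):
    """Lightweight block inside each decoder layer that folds the PLE vector back in."""
    def __init__(self):
        super().__init__()
        self.down = nn.Linear(HIDDEN, PLE_DIM, bias=False)
        self.up = nn.Linear(PLE_DIM, HIDDEN, bias=False)

    def forward(self, hidden_states, ple_vector):
        # Applied after attention and feed-forward; ple_vector is this layer's slice,
        # e.g. ple[..., layer_idx, :].
        gated = F.gelu(self.down(hidden_states)) * ple_vector
        return hidden_states + self.up(gated)
```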
The shared KV cache is an efficiency optimization that reduces both compute and memory during inference. The last num_kv_shared_layers layers of the model don’t compute their own key and value projections. Instead, they reuse the K and V tensors from the last non-shared layer of the same attention type (sliding or full).
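The sketch below illustrates the reuse pattern in pseudo-Python; the num_kv_shared_layers name comes from the description above, while the layer attributes and call signatures are assumptions for illustration.

```python
# Pseudo-code for the shared KV cache pattern: the last num_kv_shared_layers layers
# skip their own K/V projections and reuse those of the last non-shared layer of the
# same attention type ("sliding" or "full").
def run_decoder(layers, hidden_states, num_kv_shared_layers):
    first_shared_idx = len(layers) - num_kv_shared_layers
    # Most recent K/V produced by each attention type.
    last_kv = {"sliding": None, "full": None}

    for i, layer in enumerate(layers):
        attn_type = layer.attention_type  # "sliding" or "full"
        if i < first_shared_idx:
            # Regular layer: compute and remember its own key/value projections.
            k, v = layer.compute_kv(hidden_states)
            last_kv[attn_type] = (k, v)
        else:
            # Shared layer: no K/V projection, no extra cache entry.
            k, v = last_kv[attn_type]
        hidden_states = layer(hidden_states, k, v)
    return hidden_states
```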
We saw in our tests that Gemma 4 supports comprehensive multimodal capabilities out of the box. We don’t know what the training mix was, but we had success using it for tasks such as OCR, speech-to-text, object detection, and pointing. It also supports text-only and multimodal function calling, reasoning, and code completion and correction.
We test Gemma 4 on GUI element detection and pointing across different sizes, with the following image and text prompt: “What’s the bounding box for the ‘view recipe’ element in the image?”
With this prompt, the model natively responds in JSON format with the detected bounding boxes: no need for specific instructions or grammar-constrained generation. We found the coordinates are normalized to a 1000x1000 grid, relative to the input dimensions.
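If you need pixel coordinates, a small helper like the one below converts from that 1000x1000 grid back to the original image size; the JSON field name and the [y0, x0, y1, x1] ordering are assumptions based on what we observed, so adapt them to the output you actually get.

```python
import json

def to_pixel_box(model_output: str, image_width: int, image_height: int):
    # Assumes the model returned something like:
    # [{"label": "view recipe", "box_2d": [y0, x0, y1, x1]}] with coordinates on a 0-1000 grid.
    y0, x0, y1, x1 = json.loads(model_output)[0]["box_2d"]
    return (
        round(x0 / 1000 * image_width),
        round(y0 / 1000 * image_height),
        round(x1 / 1000 * image_width),
        round(y1 / 1000 * image_height),
    )
```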
Smaller Gemma 4 models can take in videos with audio, while larger ones can take in videos without audio. Although the models are not explicitly post-trained on videos, they can understand videos both with and without audio, and they are particularly strong on audio tasks.
We have tested all models on captioning. All checkpoints perform very well at accurately capturing nuances in complex scenes. Here’s the image we use, together with the text prompt “Write single detailed caption for this image.”
Gemma 4 comes with day-0 support for many open-source inference engines. We also release ONNX checkpoints that can run on many hardware backends, enabling use cases on edge devices or in the browser!
Gemma 4 comes with first-class transformers support from the get-go 🤗. This integration allows using the model with other libraries like bitsandbytes, PEFT, and TRL. Make sure to install the latest version of transformers.
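As a quick start, here is a minimal sketch with the high-level pipeline API; the checkpoint id and image URL are placeholders rather than final repository names.

```python
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-4-e4b-it",  # placeholder id, swap in the actual repo
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/recipe.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"][-1]["content"])
```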
Going a level lower, you can load Gemma 4 using the AutoModelForMultimodalLM class, which is especially useful for fine-tuning. The built-in chat template takes care of formatting the inputs correctly; please make sure you use it to prevent the subtle mistakes that creep in when building inputs manually.
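Here is a condensed sketch of that lower-level flow; the checkpoint id is again a placeholder, and the exact keyword arguments may differ in your setup.

```python
import torch
from transformers import AutoProcessor, AutoModelForMultimodalLM

model_id = "google/gemma-4-e4b-it"  # placeholder id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForMultimodalLM.from_pretrained(
    model_id, dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/recipe.jpg"},  # placeholder image
            {"type": "text", "text": "What's the bounding box for the 'view recipe' element?"},
        ],
    }
]

# The chat template inserts the image/audio placeholder tokens for you.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=64)
new_tokens = generated[:, inputs["input_ids"].shape[-1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```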
transformers.js enables running Gemma 4 right inside the browser. You can check out the model card to see text-only, image-and-text, and audio-and-text inference in detail here. We also shipped a demo for you to test the model here.
mlx-vlm supports TurboQuant, which delivers the same accuracy as the uncompressed baseline while using ~4x less active memory and running a lot faster end-to-end. This makes long-context inference practical on Apple Silicon without sacrificing quality. Use it like this:
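The snippet below is a rough sketch of the usual mlx-vlm Python flow; the quantized repository name is a placeholder, and the exact load/generate arguments are assumptions that may differ from the official example.

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/gemma-4-e4b-it-turboquant"  # placeholder id
model, processor = load(model_path)
config = load_config(model_path)

# Format the prompt with the model's chat template, then generate from an image.
prompt = apply_chat_template(processor, config, "Describe this image.", num_images=1)
output = generate(model, processor, prompt, image=["recipe.jpg"], max_tokens=128, verbose=False)
print(output)
```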
Gemma 4 is fully supported for fine-tuning with TRL. To celebrate, TRL has been upgraded with support for multimodal tool responses when interacting with environments, meaning models can now receive images back from tools during training, not just text.
To showcase this, we’ve built an example training script where Gemma 4 learns to drive in the CARLA simulator. The model sees the road through a camera, decides what to do, and learns from the outcome. After training, it consistently changes lanes to avoid pedestrians. The same approach works for any task where a model needs to see and act: robotics, web browsing, or other interactive environments.
Additionally, we have prepared an example of how to fine-tune Gemma 4 with TRL on Vertex AI using SFT, to showcase how to extend the function calling capabilities while freezing both the vision and audio towers. The example shows how to build a custom Docker container with the latest Transformers, TRL, and related libraries with CUDA support on Google Cloud, and how to run it via Vertex AI Serverless Training Jobs.
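The core of that recipe looks roughly like the sketch below; the dataset, checkpoint id, and the module-name heuristic used to freeze the towers are all assumptions for illustration, and the full example includes extra data formatting and training arguments.

```python
from datasets import load_dataset
from transformers import AutoProcessor, AutoModelForMultimodalLM
from trl import SFTConfig, SFTTrainer

model_id = "google/gemma-4-e4b-it"  # placeholder id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForMultimodalLM.from_pretrained(model_id)

# Freeze the vision and audio towers so only the text decoder is updated.
for name, param in model.named_parameters():
    if "vision" in name or "audio" in name:  # heuristic; match your checkpoint's module names
        param.requires_grad = False

dataset = load_dataset("my-org/function-calling-sft", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="gemma-4-fc-sft", per_device_train_batch_size=1, bf16=True),
    train_dataset=dataset,
    processing_class=processor,
)
trainer.train()
```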
We worked on making sure the new models work locally with agents like openclaw, hermes, pi, and open code, all thanks to llama.cpp! Feel free to tell your users to run this to try them right away at launch.
We have shipped demos for you to try the different Gemma 4 models. We include demos based on the transformers implementation for the E4B, 26B/A4B, and dense 31B models, as well as a WebGPU demo with transformers.js 🚀
• Alternating local sliding-window and global full-context attention layers. Smaller dense models use sliding windows of 512 tokens while larger models use 1024 tokens.
• Dual RoPE configurations: standard RoPE for sliding layers, proportional RoPE for global layers, to enable longer context.
• Per-Layer Embeddings (PLE): a second embedding table that feeds a small residual signal into every decoder layer.
• Shared KV Cache: the last N layers of the model reuse key-value states from earlier layers, eliminating redundant KV projections.
• Vision encoder: uses learned 2D positions and multidimensional RoPE. Preserves the original aspect ratios and can encode images to a few different token budgets (70, 140, 280, 560, 1120).
• Audio encoder: USM-style Conformer with the same base architecture as the one in Gemma-3n.