Google Launches Gemma 4: Open-Weights Model with Massive Context and Strong Coding Ability
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
Hype is high, driven by the Google brand and the large context window, but it is matched by a genuinely high impact score thanks to the improved licensing and a strong, benchmarked performance uplift.
Article Summary
Google DeepMind has launched Gemma 4, an expanded family of open-weights models spanning sizes from E2B to a 31B dense architecture. The 31B variant offers a 256K-token context window and uses a hybrid attention mechanism for memory efficiency. Licensing has improved significantly with the shift to Apache 2.0, enabling broader commercial use. Benchmarks show strong performance, notably the 31B model's Codeforces Elo of 2150, while the smaller E2B model remains competitive with previous-generation models. The family also supports multimodality, handling text and images, with audio input on the smaller versions.
Key Points
- The 31B model supports a massive 256K token context window via a hybrid attention mechanism, enabling long-context tasks.
- The move to an Apache 2.0 license for the 31B variant significantly reduces commercial friction compared to previous custom Google licenses.
- Smaller models (E2B/E4B) maintain high performance and now support audio input, enabling more comprehensive single-model pipelines.

