Google DeepMind Unveils Gemma 4: Open Model Revolutionizes Reasoning and Agentic Workflows
Viqus Verdict: 8
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
While the hype around open models has built steadily, Gemma 4's release, with its demonstrable performance and permissive license, is a substantial step toward a truly open and collaborative AI ecosystem. The real test will be the pace of community adoption and the creation of practical applications, but this launch significantly raises the stakes in the open-vs-closed AI debate.
Article Summary
Google DeepMind’s Gemma 4 represents a major step forward in open AI model development, aiming to democratize access to advanced reasoning capabilities. The models are engineered for agentic workflows – enabling autonomous agents to interact with tools and APIs – and showcase substantial improvements in benchmarks across multi-step planning, math, and instruction-following. Released in four sizes (E2B, E4B, 26B MoE, and 31B Dense), Gemma 4 delivers performance comparable to much larger models, particularly on industry-standard benchmarks. A key differentiator is the commercially permissive Apache 2.0 license, designed to foster widespread adoption and experimentation. The models are designed to run efficiently on diverse hardware, from consumer GPUs to mobile devices, and come with native support for over 140 languages and a 128K/256K context window. Google is actively supporting a wide ecosystem of tools and platforms, including Hugging Face, vLLM, and NVIDIA’s NIM, streamlining deployment across various environments. This release signifies a clear effort to move beyond proprietary AI, prioritizing open collaboration and innovation.

Key Points
- Gemma 4 is a new family of open models developed by Google DeepMind, boasting improved reasoning and agentic workflow support.
- The models are available under a commercially permissive Apache 2.0 license, offering greater flexibility and control to developers.
- Gemma 4 comes in four sizes – E2B, E4B, 26B MoE, and 31B Dense – targeting diverse hardware and use cases.
- The models achieve performance comparable to much larger models on key benchmarks, particularly in reasoning and agentic workflows.
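In practice, the agentic workflow support described above amounts to a model emitting structured tool calls that a runtime parses, executes, and feeds back. The following is a minimal illustrative sketch of such a dispatch loop; the tool registry, the JSON call format, and the stubbed model output are assumptions for illustration, not Gemma 4's actual tool-calling API.

```python
import json

# Illustrative tool registry; real agents would register API clients, search, etc.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call like {"tool": ..., "args": {...}},
    run the named tool, and return a JSON result to feed back to the model."""
    call = json.loads(tool_call_json)
    func = TOOLS[call["tool"]]
    result = func(**call["args"])
    return json.dumps({"tool": call["tool"], "result": result})

# Stubbed "model output" standing in for what an agentic model would emit.
model_output = '{"tool": "add", "args": {"a": 2, "b": 3}}'
print(dispatch(model_output))  # {"tool": "add", "result": 5}
```

The benchmarks cited for Gemma 4 measure how reliably a model produces well-formed calls like the one above across multi-step plans; the surrounding loop itself is framework territory (e.g., the tool-use integrations on Hugging Face or vLLM mentioned in the article).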

