Google ships Gemma 4 under Apache 2.0 for agentic workflows and reasoning
TL;DR
Google released Gemma 4 on April 2, 2026 under Apache 2.0. The family is purpose-built for reasoning and agentic workflows, continuing Google's open-source AI series alongside the proprietary Gemini line. Available on Google Cloud with the Agent Development Kit for framework integration.
What shipped
Google released Gemma 4 on April 2, 2026. Three framing points from Google's release:
- Apache 2.0 license. One of the most permissive widely used open-source licenses: commercial use, modification, and redistribution are all permitted.
- Reasoning and agentic workflows as the design target. Previous Gemma releases optimised for general instruction following; Gemma 4 is explicitly tuned for agent-style tool-use sequences.
- Google Cloud availability from day one, with the Agent Development Kit (ADK) providing a framework for multi-step agent development.
The Gemma-vs-Gemini split
Google runs two parallel model families:
- Gemini: proprietary, hosted-only, frontier performance. Gemini 3.1 Pro is the current flagship.
- Gemma: open-weight under Apache 2.0, sized for self-hosting and research.
Gemma 4 is positioned as the open sibling. It is not competing with Gemini 3.1 Pro on absolute capability; it competes with Llama 4 (Meta has since pivoted to the closed Muse Spark), Mistral Large 3, and the Chinese open-weight cohort (DeepSeek V4, Qwen 3.x, GLM-5.1).
Why Apache 2.0 matters
Apache 2.0 is one of the most widely adopted permissive licenses in the software industry. Meta's Llama series historically used a "community license" with commercial restrictions above a certain user threshold, creating licensing uncertainty for large deployments. Apache 2.0 removes that ambiguity entirely. If your company's legal team has blocked Llama over licensing, Gemma 4 on Apache 2.0 is the unblocked alternative.
Agentic use case fit
Google positions Gemma 4 as the open model for agentic workflows specifically. The ADK integration provides building blocks for multi-step agent patterns. Cross-referencing Google's other April 2026 moves (Deep Research Max on Gemini 3.1 Pro, Workspace in NotebookLM), Google is clearly pursuing agents as the central product thesis across both closed and open tiers.
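The multi-step agent pattern the ADK formalises can be sketched generically: a loop in which the model either requests a tool call or returns a final answer, with tool results fed back into the conversation. Everything below is illustrative, not the ADK API; `call_model` is a stub standing in for a hosted or self-hosted Gemma 4 endpoint.

```python
# Hypothetical sketch of a multi-step tool-use loop (the pattern behind
# agent frameworks like the ADK). Names and message shapes are assumptions.

TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def call_model(history):
    # Stub: a real deployment would call a Gemma 4 chat endpoint here and
    # parse its response into either a tool call or a final answer.
    last = history[-1]
    if last["role"] == "user":
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The result is {last['content']}"}

def run_agent(prompt, max_steps=5):
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        step = call_model(history)
        if "answer" in step:
            return step["answer"]
        # Execute the requested tool and append its result for the next turn.
        result = TOOLS[step["tool"]](step["args"])
        history.append({"role": "tool", "content": result})
    raise RuntimeError("agent exceeded step budget")
```

The value of a framework like the ADK is handling exactly this plumbing: tool dispatch, history management, and step budgets, so teams write tools rather than loops.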
Distribution
- Google Cloud with Agent Development Kit (ADK).
- Direct model weights download for self-hosting.
- Hugging Face for community use.
For developers
If you were already running Gemma 3 variants, Gemma 4 is a drop-in upgrade; the API surface is stable. If you run agents against Llama 4 or Mistral Large 3, benchmark Gemma 4 on your actual task load before deciding, since Google's reasoning-and-agentic tuning produces different results than general-purpose open models.
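Benchmarking on your actual task load can be as simple as a pass-rate harness over a fixed task suite. The sketch below is illustrative; the toy model and tasks are placeholders you would swap for real clients for Gemma 4, Llama 4, or Mistral Large 3.

```python
# Minimal harness for comparing candidate models on your own task suite.
# The model callable and tasks here are placeholders for demonstration.

def score(model_fn, tasks):
    """Fraction of (prompt, check) tasks whose output passes its check."""
    passed = sum(1 for prompt, check in tasks if check(model_fn(prompt)))
    return passed / len(tasks)

# Tiny illustrative task suite: each task pairs a prompt with a checker.
tasks = [
    ("list three primes", lambda out: "2" in out and "3" in out),
    ("sum 4 and 5", lambda out: "9" in out),
]

def toy_model(prompt):
    # Placeholder: a real run would call the model's API per prompt.
    return {"list three primes": "2, 3, 5", "sum 4 and 5": "9"}[prompt]

results = {"toy-model": score(toy_model, tasks)}
```

Running the same `tasks` through each candidate gives a like-for-like pass rate on the workload you actually care about, which matters more than headline benchmarks.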
Who this matters for
- Vibe Builder: Apache 2.0 open model from Google, tuned for agent workflows. If your work uses multi-step tool calls, this is a credible open alternative to Claude Opus.
- Basic User: Available on Google Cloud if you already have an account. Gemini Pro remains the hosted consumer choice; Gemma 4 is for when you need to run the model yourself.
- Developer: Apache 2.0 license clears enterprise legal blockers that Llama's community license created. ADK framework on Google Cloud shortens time-to-production agent.
What to watch next
Gemma 4 is Google admitting what most people already knew: the open-weight market needs a serious participant from the major labs, and Meta's Llama 4 retreat plus the pivot to closed-Muse Spark left a real gap. Apache 2.0 closes that gap with a license that enterprise legal teams can actually approve.
The agentic-tuning framing is the interesting strategic choice. Rather than releasing a general-purpose open model that competes point-for-point with Llama and Qwen, Google is carving out "best open model for agent workflows" as its positioning. Whether that holds depends on how the benchmark numbers compare to DeepSeek V4 and GLM-5.1 on actual agent tasks.
For the open-source movement broadly, Gemma 4 under Apache 2.0 from Google is a durability signal. One of the frontier labs is still committed to open distribution. Meta is not (Muse Spark is closed). Alibaba is not (Qwen 3.6-Max is closed). That narrows the Western open-weight contenders to Google (Gemma), Mistral (partial), and the Chinese cohort (Moonshot, Z.AI, DeepSeek). Gemma 4 keeps Google in that category.
The Agent Development Kit integration is the sleeper feature. Most open-model releases ship without a serious framework; you get weights and a README. Google shipping ADK alongside the weights means the path from "download model" to "production agent" is shorter. For small teams without framework engineering capacity, this is a real acceleration.
Size variants are the practical question. Google published multiple Gemma 4 sizes (detail in release notes) so you can fit the model to your deployment hardware. Any team running on single-GPU inference should benchmark the smallest variant on their task before committing to the flagship.
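A rough way to match a variant to your hardware is back-of-envelope weight memory: parameter count times bytes per parameter, plus headroom for KV cache and activations. The parameter count below is a placeholder, since the real figures are in Google's release notes.

```python
# Back-of-envelope weight-memory estimate for fitting a model variant to
# a GPU. Ignores KV cache and activation overhead, so leave headroom.

def weights_gb(params_billion, bytes_per_param=2):
    """Approximate weight memory in GiB (2 bytes/param = fp16/bf16)."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# E.g. a hypothetical 9B variant in bf16 needs roughly 17 GiB for weights
# alone, so a 24 GB card is the realistic single-GPU floor.
```

The same arithmetic shows why 4-bit quantisation (0.5 bytes per parameter) is the usual route for fitting larger variants onto consumer GPUs.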
by Harsh Desai