If you want to run Google DeepMind's Gemma AI models, such as Gemma 3 or Gemma 3n, locally on a Windows 11 computer, this guide will take you through the setup using Ollama, an open-source tool that lets you run large language models directly on your system. The same process also works for quite a few other AI models.
Takeaways:
- Learn how to install the Google Gemma 3 or 3n AI model on Windows 11 using Ollama
About Gemma 3 and Gemma 3n
Gemma 3 and 3n are part of Google's lightweight, efficient AI model family.
- Gemma 3 is designed for general-purpose language tasks and optimised for performance.
- Gemma 3n is tailored for high efficiency on mobile, edge, and resource-limited devices. It also supports audio input processing.
These models are part of Google's efforts to make AI more accessible and efficient. Their focus on performance and flexibility makes them a good fit for both personal use and integration into custom applications. If you're developing a lightweight assistant, chatbot, or embedded AI system, Gemma 3n is a good option, while Gemma 3 works better for more generalised tasks.
First, Install Ollama on Windows 11
- Open Start and search for Command Prompt or Terminal.
- Right-click the result and choose Run as administrator.
- To install Ollama, enter the following command:
winget install --id Ollama.Ollama

Tip: To uninstall Ollama, use this command: winget uninstall --id Ollama.Ollama
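Once the installer finishes, you can confirm the ollama command is available before moving on (opening a new terminal and typing ollama --version works just as well). A small standard-library Python sketch, purely illustrative:

```python
import shutil

# shutil.which searches PATH the same way the shell does, so it finds
# ollama.exe on Windows too. It returns the full path, or None if absent.
ollama_path = shutil.which("ollama")
if ollama_path:
    print(f"Ollama installed at: {ollama_path}")
else:
    print("Ollama not found on PATH - open a new terminal or re-run the install.")
```

If the command is not found right after installing, open a fresh Command Prompt window so the updated PATH is picked up.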
Related: How to Run DeepSeek R1 Locally - Install DeepSeek AI to Run Offline
How to Install Google Gemma 3 or 3n AI Model on Windows 11
- Open Command Prompt or Terminal as administrator.
- To install the Gemma 3 model with 1 billion parameters, run:
ollama pull gemma3:1b
- You can replace gemma3:1b with another version such as gemma3:4b, gemma3:12b, or gemma3:27b.
- To install the Gemma 3n model, use the command:
ollama pull gemma3n:e2b
- You can substitute gemma3n:e2b with versions like gemma3n:e4b.
- To list installed models, run: ollama list
- To remove a specific model, use: ollama rm gemma3:1b
- To run a model locally, enter: ollama run gemma3:1b
- Replace gemma3:1b with the model you installed.
- To display configuration and parameters for a specific model, use: ollama show gemma3:1b
Once you've completed these steps, you can use the Google Gemma models directly from your local machine through the command-line interface, without relying on the cloud.
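Beyond the interactive command line, Ollama also exposes a local HTTP API (on port 11434 by default) that your own scripts and applications can call. Here is a minimal Python sketch of the /api/generate endpoint, assuming Ollama is running and gemma3:1b has already been pulled:

```python
import json
import urllib.request
import urllib.error

def ask_gemma(prompt, model="gemma3:1b"):
    """Send one prompt to a locally installed model via Ollama's HTTP API."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for the full answer in a single JSON object
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=120) as response:
            return json.load(response)["response"]
    except urllib.error.URLError:
        return None  # Ollama is not running, or the model is missing

answer = ask_gemma("Why is the sky blue?")
print(answer or "Ollama is not reachable - start it and try again.")
```

This is handy when embedding a Gemma model in a chatbot or assistant of your own, since it avoids shelling out to the ollama command for every request.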