
Ollama (Local)

The 100% local mode: no data ever leaves your machine.

Prerequisites

  1. Install Ollama from ollama.com
  2. Download a model:

```bash
ollama pull llama3
```

Launch

Method 1: One-off launch (terminal)

```bash
OLLAMA_ORIGINS="chrome-extension://*" ollama serve
```

The OLLAMA_ORIGINS variable allows requests from the Chrome extension (CORS).

Method 2: Permanent variable (macOS)

```bash
launchctl setenv OLLAMA_ORIGINS "chrome-extension://*"
```

Then launch the Ollama application normally.

Important

If you launch Ollama via the macOS app without the OLLAMA_ORIGINS variable, the extension will receive a 403 error (CORS blocked).
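The 403 check described above can be scripted. The following is a minimal sketch (Python used for brevity) that queries the /api/tags endpoint with the same kind of Origin header a Chrome extension sends, so the server's CORS policy is actually exercised. The test origin string and URL are the defaults from this guide, not part of the extension itself.

```python
# Sketch: check that Ollama is reachable and that OLLAMA_ORIGINS allows
# Chrome-extension origins. Assumes the default URL from this guide.
import urllib.error
import urllib.request

OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"


def diagnose(status: int) -> str:
    """Map an HTTP status from /api/tags to a human-readable verdict."""
    if status == 200:
        return "ok: Ollama is up and accepts this origin"
    if status == 403:
        return 'blocked: relaunch with OLLAMA_ORIGINS="chrome-extension://*"'
    return f"unexpected status {status}"


def check_cors(url: str = OLLAMA_TAGS_URL) -> str:
    # "chrome-extension://test" is an illustrative origin; any
    # chrome-extension:// origin triggers the same CORS decision.
    req = urllib.request.Request(url, headers={"Origin": "chrome-extension://test"})
    try:
        with urllib.request.urlopen(req) as resp:
            return diagnose(resp.status)
    except urllib.error.HTTPError as e:
        return diagnose(e.code)
    except OSError:
        return "unreachable: is `ollama serve` running?"
```

Calling `check_cors()` while Ollama runs without the variable should report the blocked case, matching the 403 behavior described above.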

Configuration in the extension

  1. Select Ollama in the popup
  2. Verify the URL: http://localhost:11434
  3. Click Save

Technical details

| Parameter      | Value                           |
| -------------- | ------------------------------- |
| Default model  | llama3                          |
| Endpoint       | http://localhost:11434/api/chat |
| Authentication | None                            |
| Stream         | Disabled                        |

| Model       | Size | RAM required | Quality   |
| ----------- | ---- | ------------ | --------- |
| llama3      | 8B   | 8 GB         | Very good |
| gemma2:9b   | 9B   | 8 GB         | Very good |
| llama3.2:1b | 1.2B | 2 GB         | Decent    |
| gemma2:2b   | 2.6B | 4 GB         | Good      |
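Based on the endpoint and defaults listed above, a request to Ollama presumably looks like the sketch below: a POST to /api/chat with no authentication and streaming disabled. The payload shape follows Ollama's public /api/chat API; the prompt text and helper names are illustrative, not taken from the extension.

```python
# Sketch of a non-streaming chat request matching the parameters above
# (default model llama3, no authentication, stream disabled).
import json
import urllib.request

ENDPOINT = "http://localhost:11434/api/chat"


def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Assemble a /api/chat request body with streaming disabled."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one complete JSON response instead of chunks
    }


def chat(prompt: str, model: str = "llama3") -> str:
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Non-streaming responses carry the reply under message.content
    return body["message"]["content"]
```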

To install a model:

```bash
ollama pull llama3
ollama pull gemma2:9b
```

Advantages

  • Free: no API cost
  • Private: no data sent online
  • Offline: works without internet
  • Customizable: use any compatible model

Troubleshooting

"Ollama is not accessible" error

  • Check that Ollama is running: curl http://localhost:11434/api/tags
  • Check CORS: curl -I -H "Origin: chrome-extension://test" http://localhost:11434/api/tags
  • If 403: relaunch with OLLAMA_ORIGINS="chrome-extension://*"

"model not found" error

  • List your models: ollama list
  • Install the missing model: ollama pull llama3

Slow responses

  • Larger models (8B+) require more RAM and time
  • Try a lighter model (llama3.2:1b, gemma2:2b)
  • On Macs with Apple Silicon, performance is noticeably better

Tab Manager Pro — Organize your tabs with AI