oam/knowledge base/ai/ollama.md
2026-02-04 00:12:35 +01:00


Ollama

The easiest way to get up and running with large language models.

  1. TL;DR
  2. Further readings
    1. Sources

TL;DR

Setup
brew install --cask 'ollama-app'  # or just brew install 'ollama'

Cloud models are automatically offloaded to Ollama's cloud service.
This lets one keep using one's local tools while running larger models that would not fit on a personal computer.
Those models are usually tagged with the cloud suffix, e.g. 'glm-4.7:cloud'.
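Since cloud models are served through the same local HTTP API as local ones, any existing client keeps working unchanged. A minimal sketch with curl, assuming the server is listening on its default port 11434 and the cloud model from the usage examples below has already been pulled (the prompt is just an illustration):

```shell
# Query a cloud-offloaded model through the local Ollama API.
# The endpoint and request shape are the same as for local models.
curl 'http://localhost:11434/api/generate' --data '{
  "model": "glm-4.7:cloud",
  "prompt": "Explain what a quantized model is in one sentence.",
  "stream": false
}'
```

Setting "stream" to false returns a single JSON response instead of a stream of chunks.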

Usage
# Download models.
ollama pull 'qwen2.5-coder:7b'
ollama pull 'glm-4.7:cloud'

# List pulled models.
ollama list

# Run models.
ollama run 'gemma3'

# Quickly set up a coding tool with Ollama models.
ollama launch

# Launch a specific coding tool with a given model.
ollama launch 'claude' --model 'glm-4.7-flash'

# Only configure the model for a tool, without launching it.
ollama launch 'claude' --config

# Sign into Ollama cloud, or create a new account.
ollama signin

Further readings

Sources