diff --git a/knowledge base/ai/claude/claude code.md b/knowledge base/ai/claude/claude code.md
index 9910b75..fff051a 100644
--- a/knowledge base/ai/claude/claude code.md
+++ b/knowledge base/ai/claude/claude code.md
@@ -57,6 +57,9 @@ claude -c
# Resume a previous conversation
claude -r
+
+# Add MCP servers.
+claude mcp add --transport 'sse' 'linear-server' 'https://mcp.linear.app/sse'
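+
+# List the configured MCP servers to verify the addition.
+claude mcp list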
```
@@ -109,7 +112,7 @@ Claude Code version: `v2.1.41`.
| llama.cpp (ollama) | 16384 | 20 GB | No | 2m 16s | No |
| llama.cpp (ollama) | 32768 | 22 GB | No | 7.12s | No |
| llama.cpp (ollama) | 65536 | 25 GB | No? (unsure) | 10.25s | Meh (minor stutters) |
-| llama.cpp (ollama) | 131072 | 33 GB | No | 3m 42s | **No** (major stutters) |
+| llama.cpp (ollama) | 131072 | 33 GB | **Yes** | 3m 42s | **No** (major stutters) |
diff --git a/knowledge base/ai/gemini/cli.md b/knowledge base/ai/gemini/cli.md
index ac9046d..7a4a040 100644
--- a/knowledge base/ai/gemini/cli.md
+++ b/knowledge base/ai/gemini/cli.md
@@ -16,7 +16,9 @@ Can read and edit files, execute shell commands, and search the web.
```sh
# Install.
+brew install 'gemini-cli'
npm install -g '@google/gemini-cli'
+port install 'gemini-cli'
# Run without installation.
docker run --rm -it 'us-docker.pkg.dev/gemini-code-dev/gemini-cli/sandbox:0.1.1'
@@ -32,6 +34,9 @@ export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
Usage
```sh
+# Show version.
+gemini --version
+
# Start.
gemini
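+
+# Run non-interactively with a single prompt.
+# Sketch: assumes the '-p'/'--prompt' flag; check 'gemini --help' for the exact option.
+gemini -p 'Summarize the files in the current directory'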
diff --git a/knowledge base/ai/ollama.md b/knowledge base/ai/ollama.md
index 7bee45a..0338a01 100644
--- a/knowledge base/ai/ollama.md
+++ b/knowledge base/ai/ollama.md
@@ -32,11 +32,25 @@ Excellent for building applications that require seamless migration from OpenAI.
```sh
brew install --cask 'ollama-app' # or just brew install 'ollama'
+curl -fsSL 'https://ollama.com/install.sh' | sh
docker pull 'ollama/ollama'
# Run in containers.
docker run -d -v 'ollama:/root/.ollama' -p '11434:11434' --name 'ollama' 'ollama/ollama'
docker run -d --gpus='all' … 'ollama/ollama'
+
+# Expose (bind) the server to a specific IP address and/or custom port.
+# Default is 127.0.0.1 on port 11434.
+# Only valid for the *'serve'* command.
+OLLAMA_HOST='some.fqdn:11435' ollama serve
+
+# Use a custom context length.
+# Only valid for the *'serve'* command.
+OLLAMA_CONTEXT_LENGTH=64000 ollama serve
+
+# Use a remotely served model.
+# Valid for all commands *but* 'serve'.
+OLLAMA_HOST='some.fqdn:11435' ollama …
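+# E.g., list the models available on that remote instance (placeholder host reused from above).
+OLLAMA_HOST='some.fqdn:11435' ollama list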
```
@@ -86,18 +100,21 @@ The model can describe, classify, and answer questions about what it sees.
Usage
```sh
+# Start the server.
+ollama serve
+
+# Verify the server is running.
+curl 'http://localhost:11434/'
+
# Access the API via cURL.
curl 'http://localhost:11434/api/generate' -d '{
"model": "gemma3",
"prompt": "Why is the sky blue?"
}'
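+
+# The chat endpoint accepts a 'messages' array instead of a single prompt.
+curl 'http://localhost:11434/api/chat' -d '{
+  "model": "gemma3",
+  "messages": [{ "role": "user", "content": "Why is the sky blue?" }]
+}'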
-# Expose (bind) the server to specific IP addresses and/or with custom ports.
-# Default is 127.0.0.1 on port 11434.
-OLLAMA_HOST='some.fqdn:11435'
-
# Start the interactive menu.
ollama
+ollama launch  # interactively set up a coding tool with Ollama models
# Download models.
ollama pull 'qwen2.5-coder:7b'
@@ -107,9 +124,8 @@ ollama pull 'glm-4.7:cloud'
ollama list
ollama ls
-# Start Ollama.
-ollama serve
-OLLAMA_CONTEXT_LENGTH=64000 ollama serve
+# Show information about a model.
+ollama show 'codellama:13b'
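+
+# Show models currently loaded in memory.
+ollama ps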
# Run models interactively.
ollama run 'gemma3'
@@ -120,10 +136,7 @@ ollama run 'glm-4.7-flash:q4_K_M' 'Hi! Are you there?' --verbose
ollama run 'deepseek-r1' --think=false "Summarize this article"
ollama run 'gemma3' --hidethinking "Is 9.9 bigger or 9.11?"
ollama run 'gpt-oss' --think=low "Draft a headline"
-ollama run 'gemma3' './image.png' "what's in this image?"
-
-# Quickly set up a coding tool with Ollama models.
-ollama launch
+ollama run 'gemma3' './image.png' "what's in this image?" --temperature '0.8' --top-p '0.9'
# Launch integrations.
ollama launch 'opencode'
@@ -168,26 +181,27 @@ ollama signout
+
## Further readings
- [Website]
- [Codebase]
- [Blog]
+- [Models library]
### Sources
- [Documentation]
+- [The Complete Guide to Ollama: Run Large Language Models Locally]
+[The Complete Guide to Ollama: Run Large Language Models Locally]: https://dev.to/ajitkumar/the-complete-guide-to-ollama-run-large-language-models-locally-2mge
diff --git a/knowledge base/ai/opencode.md b/knowledge base/ai/opencode.md
index 95a2f2c..ff81617 100644
--- a/knowledge base/ai/opencode.md
+++ b/knowledge base/ai/opencode.md
@@ -23,23 +23,77 @@ docker run -it --rm 'ghcr.io/anomalyco/opencode'
mise use -g 'opencode'
nix run 'nixpkgs#opencode'
npm i -g 'opencode-ai@latest'
+pacman -S 'opencode'
paru -S 'opencode-bin'
# Desktop app
brew install --cask 'opencode-desktop'
```
+Configure OpenCode using `opencode.json` (or `.jsonc`) configuration files.
+Configuration files are merged rather than replaced: settings in more specific files override same-named settings in
+less specific ones.
+
+| Scope | Location | Summary |
+| ------- | ----------------------------------------------- | ------------------------- |
+| Remote | `.well-known/opencode` | organizational defaults |
+| Global | `~/.config/opencode/opencode.json` | user preferences |
+| Custom | `OPENCODE_CONFIG` environment variable | custom overrides |
+| Project | `opencode.json` in the project's directory | project-specific settings |
+| Agent | `.opencode` directories | agents, commands, plugins |
+| Inline | `OPENCODE_CONFIG_CONTENT` environment variable | runtime overrides |
+
+```json
+{
+ "$schema": "https://opencode.ai/config.json",
+ "provider": {
+ "ollama": {
+ "npm": "@ai-sdk/openai-compatible",
+ "name": "Ollama (local)",
+ "options": {
+ "baseURL": "http://localhost:11434/v1"
+ },
+ "models": {
+ "llama2": {
+ "name": "Llama 2"
+ }
+ }
+ }
+ }
+}
+```
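+
+Inline overrides can then be passed at runtime. A minimal sketch, assuming `OPENCODE_CONFIG_CONTENT` accepts raw JSON
+and that the `ollama/llama2` model from the example above is configured:
+
+```sh
+OPENCODE_CONFIG_CONTENT='{"model":"ollama/llama2"}' opencode
+```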
+
-