chore(kb/ai): review and expand

@@ -57,6 +57,9 @@ claude -c

# Resume a previous conversation.
claude -r

# Add MCP servers.
claude mcp add --transport 'sse' 'linear-server' 'https://mcp.linear.app/sse'
```
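
Once a server is registered, it can be worth confirming Claude Code actually picked it up. A minimal sketch (subcommand names as per `claude mcp --help`; `linear-server` reuses the example above):

```sh
# List configured MCP servers and inspect a specific one.
claude mcp list
claude mcp get 'linear-server'

# Remove a server that is no longer needed.
claude mcp remove 'linear-server'
```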

</details>

@@ -109,7 +112,7 @@ Claude Code version: `v2.1.41`.<br/>
| llama.cpp (ollama) | 16384 | 20 GB | No | 2m 16s | No |
| llama.cpp (ollama) | 32768 | 22 GB | No | 7.12s | No |
| llama.cpp (ollama) | 65536 | 25 GB | No? (unsure) | 10.25s | Meh (minor stutters) |
| llama.cpp (ollama) | 131072 | 33 GB | No | 3m 42s | **No** (major stutters) |
| llama.cpp (ollama) | 131072 | 33 GB | **Yes** | 3m 42s | **No** (major stutters) |

</details>

@@ -16,7 +16,9 @@ Can read and edit files, execute shell commands, and search the web.

```sh
# Install.
brew install 'gemini-cli'
npm install -g '@google/gemini-cli'
port install 'gemini-cli'

# Run without installation.
docker run --rm -it 'us-docker.pkg.dev/gemini-code-dev/gemini-cli/sandbox:0.1.1'
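npx '@google/gemini-cli'  # assumption: npx can run the published npm package directly; check the project's README if unsure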

@@ -32,6 +34,9 @@ export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
<summary>Usage</summary>

```sh
# Show version.
gemini --version

# Start.
gemini
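
# Run a single prompt non-interactively.
# Sketch: the '-p'/'--prompt' flag as documented in the CLI's help; verify with 'gemini --help'.
gemini -p 'Summarize the README in this directory'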

@@ -32,11 +32,25 @@ Excellent for building applications that require seamless migration from OpenAI.

```sh
brew install --cask 'ollama-app' # or just brew install 'ollama'
curl -fsSL 'https://ollama.com/install.sh' | sh
docker pull 'ollama/ollama'

# Run in containers.
docker run -d -v 'ollama:/root/.ollama' -p '11434:11434' --name 'ollama' 'ollama/ollama'
docker run -d --gpus='all' … 'ollama/ollama'

# Expose (bind) the server to specific IP addresses and/or with custom ports.
# Default is 127.0.0.1 on port 11434.
# Only valid for the *'serve'* command.
OLLAMA_HOST='some.fqdn:11435' ollama serve

# Use a custom context length.
# Only valid for the *'serve'* command.
OLLAMA_CONTEXT_LENGTH=64000 ollama serve

# Use a remotely served model.
# Valid for all commands *but* 'serve'.
OLLAMA_HOST='some.fqdn:11435' ollama …
```
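
The same `OLLAMA_HOST` override also points client commands at a remote server. A small sketch, using
`remote.example.com` as a placeholder host:

```sh
# List the models pulled on a remote Ollama instance.
OLLAMA_HOST='remote.example.com:11435' ollama list

# Run a prompt against a model served there.
OLLAMA_HOST='remote.example.com:11435' ollama run 'gemma3' 'Hello!'
```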

</details>

@@ -86,18 +100,21 @@ The model can describe, classify, and answer questions about what it sees.
<summary>Usage</summary>

```sh
# Start the server.
ollama serve

# Verify the server is running.
curl 'http://localhost:11434/'

# Access the API via cURL.
curl 'http://localhost:11434/api/generate' -d '{
  "model": "gemma3",
  "prompt": "Why is the sky blue?"
}'
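
# The /api/chat endpoint takes a list of messages instead of a single prompt.
# Sketch based on Ollama's API reference; responses stream by default.
curl 'http://localhost:11434/api/chat' -d '{
  "model": "gemma3",
  "messages": [{ "role": "user", "content": "Why is the sky blue?" }]
}'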

# Expose (bind) the server to specific IP addresses and/or with custom ports.
# Default is 127.0.0.1 on port 11434.
OLLAMA_HOST='some.fqdn:11435' ollama serve

# Start the interactive menu.
ollama
ollama launch

# Download models.
ollama pull 'qwen2.5-coder:7b'

@@ -107,9 +124,8 @@ ollama pull 'glm-4.7:cloud'
ollama list
ollama ls
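
# Show running (loaded) models, unload one, or delete a pulled model.
# Sketch: 'ollama ps', 'ollama stop', and 'ollama rm' as listed in 'ollama --help'.
ollama ps
ollama stop 'gemma3'
ollama rm 'codellama:13b'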

# Start Ollama.
ollama serve
OLLAMA_CONTEXT_LENGTH=64000 ollama serve

# Show model information.
ollama show 'codellama:13b'

# Run models interactively.
ollama run 'gemma3'

@@ -120,10 +136,7 @@ ollama run 'glm-4.7-flash:q4_K_M' 'Hi! Are you there?' --verbose
ollama run 'deepseek-r1' --think=false "Summarize this article"
ollama run 'gemma3' --hidethinking "Is 9.9 bigger or 9.11?"
ollama run 'gpt-oss' --think=low "Draft a headline"
ollama run 'gemma3' './image.png' "what's in this image?"

# Quickly set up a coding tool with Ollama models.
ollama launch
ollama run 'gemma3' './image.png' "what's in this image?" --temperature '0.8' --top-p '0.9'

# Launch integrations.
ollama launch 'opencode'
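
# Inside an interactive 'ollama run' session, parameters can also be tuned with
# slash commands typed at the '>>>' prompt rather than in the shell, e.g.:
#   /set parameter temperature 0.8
#   /set parameter num_ctx 8192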

@@ -168,26 +181,27 @@ ollama signout

</details>

<!-- Uncomment if used
<details>
<summary>Real world use cases</summary>

```sh
# Run Claude Code on a model served locally by Ollama.
ANTHROPIC_AUTH_TOKEN=ollama ANTHROPIC_BASE_URL=http://localhost:11434 ANTHROPIC_API_KEY="" \
  claude --model 'lfm2.5-thinking:1.2b'
```

</details>
-->

## Further readings

- [Website]
- [Codebase]
- [Blog]
- [Models library]

### Sources

- [Documentation]
- [The Complete Guide to Ollama: Run Large Language Models Locally]

<!--
Reference

@@ -203,6 +217,8 @@ ANTHROPIC_AUTH_TOKEN=ollama ANTHROPIC_BASE_URL=http://localhost:11434 ANTHROPIC_
[Blog]: https://ollama.com/blog
[Codebase]: https://github.com/ollama/ollama
[Documentation]: https://docs.ollama.com/
[Models library]: https://ollama.com/library
[Website]: https://ollama.com/

<!-- Others -->
[The Complete Guide to Ollama: Run Large Language Models Locally]: https://dev.to/ajitkumar/the-complete-guide-to-ollama-run-large-language-models-locally-2mge

@@ -23,23 +23,77 @@ docker run -it --rm 'ghcr.io/anomalyco/opencode'
mise use -g 'opencode'
nix run 'nixpkgs#opencode'
npm i -g 'opencode-ai@latest'
pacman -S 'opencode'
paru -S 'opencode-bin'

# Desktop app
brew install --cask 'opencode-desktop'
```
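
A quick post-install check, plus the CLI's self-update command (a sketch: `opencode upgrade` applies to standalone
installs; installs done through a package manager are better updated through that same manager):

```sh
# Verify the installation.
opencode --version

# Update a standalone installation in place.
opencode upgrade
```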

Configure OpenCode using `opencode.json` (or `.jsonc`) configuration files.<br/>
Configuration files are merged, not replaced. Settings from more specific ones override those of the same name in less
specific ones.

| Scope   | Location                                       | Summary                   |
| ------- | ---------------------------------------------- | ------------------------- |
| Remote  | `.well-known/opencode`                         | organizational defaults   |
| Global  | `~/.config/opencode/opencode.json`             | user preferences          |
| Custom  | `OPENCODE_CONFIG` environment variable         | custom overrides          |
| Project | `opencode.json` in the project's directory     | project-specific settings |
| Agent   | `.opencode` directories                        | agents, commands, plugins |
| Inline  | `OPENCODE_CONFIG_CONTENT` environment variable | runtime overrides         |

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "llama2": {
          "name": "Llama 2"
        }
      }
    }
  }
}
```

</details>
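
The Custom and Inline scopes from the table above can be exercised straight from the shell. A minimal sketch using the
environment variables listed there (the file path and the top-level `model` value are illustrative):

```sh
# Point OpenCode at a specific configuration file.
OPENCODE_CONFIG="$HOME/work/opencode.json" opencode

# Pass configuration inline for a one-off run.
OPENCODE_CONFIG_CONTENT='{"model": "ollama/llama2"}' opencode
```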

<!-- Uncomment if used
<details>
<summary>Usage</summary>

```sh
# Start in interactive mode.
opencode
opencode 'path/to/directory'

# List available models.
opencode models
opencode models 'anthropic'

# Update the cached models list.
opencode models --refresh

# Run tasks in headless mode.
opencode run "Explain how closures work in JavaScript"

# Start a headless server.
opencode serve

# Attach to headless servers.
opencode run --attach 'http://localhost:4096' "Explain async/await in JavaScript"

# List existing agents.
opencode agent list
```

</details>
-->

<!-- Uncomment if used
<details>