mirror of
https://gitea.com/mcereda/oam.git
synced 2026-02-09 05:44:23 +00:00
docs(kb): add ai related articles
@@ -18,7 +18,7 @@ TODO
## Further readings

- [Useful AI]: tools, courses, and more, curated and reviewed by experts.
- LLMs: [ChatGPT], [Claude], [Copilot], [Duck AI], [Gemini]

### Sources

67
knowledge base/ai/huggingface.md
Normal file
@@ -0,0 +1,67 @@
# Hugging Face

AI community trying to democratize _good_ machine learning.

1. [TL;DR](#tldr)
1. [Further readings](#further-readings)
1. [Sources](#sources)

## TL;DR

<!-- Uncomment if used
<details>
<summary>Setup</summary>

```sh
```

</details>
-->

<!-- Uncomment if used
<details>
<summary>Usage</summary>

```sh
```

</details>
-->

<!-- Uncomment if used
<details>
<summary>Real world use cases</summary>

```sh
```

</details>
-->

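Much of the Hub can be driven from the `huggingface_hub` CLI; a minimal sketch of pulling a model locally (the `gpt2` repository is just an example, and flags may differ between CLI versions):

```sh
# Install the Hub's CLI (sketch; a virtualenv or pipx would work just as well).
pip install --upgrade 'huggingface_hub[cli]'

# Authenticate with an access token.
# Only required for gated or private repositories.
huggingface-cli login

# Download a model repository into the local cache.
huggingface-cli download 'gpt2'
```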
## Further readings

- [Website]
- [Codebase]
- [Blog]
- [Learning material]

### Sources

- [Documentation]

<!--
Reference
═╬═Time══
-->

<!-- In-article sections -->
<!-- Knowledge base -->
<!-- Files -->
<!-- Upstream -->
[Blog]: https://huggingface.co/blog
[Codebase]: https://github.com/huggingface
[Documentation]: https://huggingface.co/docs
[Learning material]: https://huggingface.co/learn
[Website]: https://huggingface.co/

<!-- Others -->

67
knowledge base/ai/lmstudio.md
Normal file
@@ -0,0 +1,67 @@
# LM Studio

Allows running LLMs locally.

<!-- Remove this line to uncomment if used
## Table of contents <!-- omit in toc -->

1. [TL;DR](#tldr)
1. [Further readings](#further-readings)
1. [Sources](#sources)

## TL;DR

<details>
<summary>Setup</summary>

```sh
brew install --cask 'lm-studio'
```

</details>

<!-- Uncomment if used
<details>
<summary>Usage</summary>

```sh
```

</details>
-->

<!-- Uncomment if used
<details>
<summary>Real world use cases</summary>

```sh
```

</details>
-->

## Further readings

- [Website]
- [Codebase]
- [Blog]

### Sources

- [Documentation]

<!--
Reference
═╬═Time══
-->

<!-- In-article sections -->
<!-- Knowledge base -->
<!-- Files -->
<!-- Upstream -->
[Blog]: https://lmstudio.ai/blog
[Codebase]: https://github.com/lmstudio-ai
[Documentation]: https://lmstudio.ai/docs/
[Website]: https://lmstudio.ai/

<!-- Others -->

90
knowledge base/ai/ollama.md
Normal file
@@ -0,0 +1,90 @@
# Ollama

The easiest way to get up and running with large language models.

<!-- Remove this line to uncomment if used
## Table of contents <!-- omit in toc -->

1. [TL;DR](#tldr)
1. [Further readings](#further-readings)
1. [Sources](#sources)

## TL;DR

<details>
<summary>Setup</summary>

```sh
brew install --cask 'ollama-app'  # or just brew install 'ollama'
```

</details>

Cloud models are automatically offloaded to Ollama's cloud service.<br/>
This allows one to keep using local tools while running larger models that would not fit on a personal computer.<br/>
Such models are _usually_ tagged with the `cloud` suffix.

<details>
|
||||
<summary>Usage</summary>
|
||||
|
||||
```sh
|
||||
# Download models.
|
||||
ollama pull 'qwen2.5-coder:7b'
|
||||
ollama pull 'glm-4.7:cloud'
|
||||
|
||||
# List pulled models.
|
||||
ollama list
|
||||
|
||||
# Run models.
|
||||
ollama run 'gemma3'
|
||||
|
||||
# Quickly set up a coding tool with Ollama models.
|
||||
ollama launch
|
||||
|
||||
# Launch models.
|
||||
ollama launch 'claude' --model 'glm-4.7-flash'
|
||||
|
||||
# Only configure model, without launching them.
|
||||
ollama launch 'claude' --config
|
||||
|
||||
# Sign into Ollama cloud, or create a new account.
|
||||
ollama signin
|
||||
```
|
||||
|
||||
</details>
|
||||
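Ollama also serves a local REST API, by default on port 11434; a minimal sketch, assuming the `gemma3` model has already been pulled:

```sh
# Send a one-shot prompt to a local model through the REST API.
# 'stream: false' returns a single JSON object instead of a token stream.
curl 'http://localhost:11434/api/generate' \
  -d '{ "model": "gemma3", "prompt": "Why is the sky blue?", "stream": false }'
```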
<!-- Uncomment if used
<details>
<summary>Real world use cases</summary>

```sh
```

</details>
-->

## Further readings

- [Website]
- [Codebase]
- [Blog]

### Sources

- [Documentation]

<!--
Reference
═╬═Time══
-->

<!-- In-article sections -->
<!-- Knowledge base -->
<!-- Files -->
<!-- Upstream -->
[Blog]: https://ollama.com/blog
[Codebase]: https://github.com/ollama/ollama
[Documentation]: https://docs.ollama.com/
[Website]: https://ollama.com/

<!-- Others -->

73
knowledge base/ai/openhands.md
Normal file
@@ -0,0 +1,73 @@
# OpenHands

Community focused on AI-driven development.

1. [TL;DR](#tldr)
1. [Further readings](#further-readings)
1. [Sources](#sources)

## TL;DR

<details>
<summary>Setup</summary>

```sh
docker pull 'docker.openhands.dev/openhands/runtime:1.2-nikolaik'
docker run -it --rm --name 'openhands-app' --pull='always' \
  -e 'AGENT_SERVER_IMAGE_REPOSITORY=docker.openhands.dev/openhands/runtime' \
  -e 'AGENT_SERVER_IMAGE_TAG=1.2-nikolaik' \
  -e 'LOG_ALL_EVENTS=true' \
  -v '/var/run/docker.sock:/var/run/docker.sock' \
  -v "$HOME/.openhands:/.openhands" \
  -p '3000:3000' \
  --add-host 'host.docker.internal:host-gateway' \
  'docker.openhands.dev/openhands/openhands:1.2'
```

</details>

<!-- Uncomment if used
<details>
<summary>Usage</summary>

```sh
```

</details>
-->

<!-- Uncomment if used
<details>
<summary>Real world use cases</summary>

```sh
```

</details>
-->

## Further readings

- [Website]
- [Codebase]
- [Blog]

### Sources

- [Documentation]

<!--
Reference
═╬═Time══
-->

<!-- In-article sections -->
<!-- Knowledge base -->
<!-- Files -->
<!-- Upstream -->
[Blog]: https://openhands.dev/blog
[Codebase]: https://github.com/OpenHands/OpenHands
[Documentation]: https://docs.openhands.dev/
[Website]: https://openhands.dev/

<!-- Others -->
@@ -0,0 +1,56 @@
# Use a local LLM for coding assistance in VSCode


1. [Setup](#setup)
1. [Further readings](#further-readings)
1. [Sources](#sources)

## Setup

1. Install Ollama and download the models you mean to use.<br/>
   Refer to [Ollama].

   <details style='padding: 0 0 1rem 1rem'>

   ```sh
   brew install 'ollama'
   ollama pull 'llama3.1:8b'
   ollama pull 'qwen2.5-coder:7b'
   ollama pull 'nomic-embed-text:latest'
   ```

   </details>

1. Install the [Continue VSCode extension].
1. Configure the extension to use local LLMs only.

   <details style='padding: 0 0 1rem 1rem'>

   1. Open the extension's sidebar.
   1. In the top-right, select _Local Config_ from the dropdown menu.
   1. Optionally, tweak the configuration file.

   </details>

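The local configuration usually lives in `~/.continue/config.yaml`; a hedged sketch of pointing it at the Ollama models pulled above (the display names and role assignments are assumptions, and the schema may differ between extension versions — refer to the extension's documentation):

```yaml
models:
  - name: Qwen 2.5 Coder      # display name, arbitrary
    provider: ollama
    model: qwen2.5-coder:7b
    roles: [chat, edit]
  - name: Llama 3.1 8B
    provider: ollama
    model: llama3.1:8b
    roles: [autocomplete]
  - name: Nomic Embed Text
    provider: ollama
    model: nomic-embed-text:latest
    roles: [embed]
```

Keeping every entry on the `ollama` provider ensures no request leaves the machine.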
## Further readings

- [Ollama]
- [Continue VSCode extension]

### Sources

- [How to use a local LLM as a free coding copilot in VS Code]

<!--
Reference
═╬═Time══
-->

<!-- In-article sections -->
<!-- Knowledge base -->
[Ollama]: ollama.md

<!-- Files -->
<!-- Upstream -->
<!-- Others -->
[Continue VSCode extension]: https://marketplace.visualstudio.com/items?itemName=Continue.continue
[How to use a local LLM as a free coding copilot in VS Code]: https://medium.com/@smfraser/how-to-use-a-local-llm-as-a-free-coding-copilot-in-vs-code-6dffc053369d

@@ -90,6 +90,7 @@
| LAN    | Local Area Network   |                                |
| LED    | Light Emitting Diode |                                |
| LIFO   | Last In First Out    |                                |
| LLM    | Large Language Model |                                |
| M2COTS | Mass Market COTS     | Widely available COTS products |
| MR     | Merge Request        | Prevalently used in GitLab     |
| NACL   | Network ACL          |                                |