diff --git a/knowledge base/ai/README.md b/knowledge base/ai/README.md
index 5bd4356..9595345 100644
--- a/knowledge base/ai/README.md
+++ b/knowledge base/ai/README.md
@@ -18,7 +18,7 @@ TODO
## Further readings
- [Useful AI]: tools, courses, and more, curated and reviewed by experts.
-- [ChatGPT], [Claude], [Copilot], [Duck AI], [Gemini]
+- LLMs: [ChatGPT], [Claude], [Copilot], [Duck AI], [Gemini]
### Sources
diff --git a/knowledge base/ai/huggingface.md b/knowledge base/ai/huggingface.md
new file mode 100644
index 0000000..6cb8ea4
--- /dev/null
+++ b/knowledge base/ai/huggingface.md
@@ -0,0 +1,67 @@
+# Hugging Face
+
+AI community trying to democratize _good_ machine learning.
+
+1. [TL;DR](#tldr)
+1. [Further readings](#further-readings)
+ 1. [Sources](#sources)
+
+## TL;DR
+
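+A minimal setup sketch, assuming one uses the `huggingface_hub` CLI (package and command names may differ across versions):
+
+```sh
+# Install the Hugging Face Hub client and its CLI.
+pip install --upgrade 'huggingface_hub[cli]'
+
+# Authenticate with an access token from one's account settings.
+huggingface-cli login
+
+# Download a model to the local cache.
+huggingface-cli download 'sentence-transformers/all-MiniLM-L6-v2'
+```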
+
+
+
+
+
+
+## Further readings
+
+- [Website]
+- [Codebase]
+- [Blog]
+- [Learning material]
+
+### Sources
+
+- [Documentation]
+
+
+
+
+
+
+
+[Blog]: https://huggingface.co/blog
+[Codebase]: https://github.com/huggingface
+[Documentation]: https://huggingface.co/docs
+[Learning material]: https://huggingface.co/learn
+[Website]: https://huggingface.co/
+
+
diff --git a/knowledge base/ai/lmstudio.md b/knowledge base/ai/lmstudio.md
new file mode 100644
index 0000000..90fd8da
--- /dev/null
+++ b/knowledge base/ai/lmstudio.md
@@ -0,0 +1,67 @@
+# LM Studio
+
+Desktop application that allows running LLMs locally.
+
+
+
+1. [TL;DR](#tldr)
+1. [Further readings](#further-readings)
+ 1. [Sources](#sources)
+
+## TL;DR
+
+
+**Setup**
+
+```sh
+brew install --cask 'lm-studio'
+```
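+
+LM Studio also ships the `lms` command-line tool; a usage sketch, assuming it is available on the PATH (subcommand names may vary by version):
+
+```sh
+# List locally downloaded models.
+lms ls
+
+# Start the local OpenAI-compatible API server.
+lms server start
+```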
+
+
+
+
+
+
+
+## Further readings
+
+- [Website]
+- [Codebase]
+- [Blog]
+
+### Sources
+
+- [Documentation]
+
+
+
+
+
+
+
+[Blog]: https://lmstudio.ai/blog
+[Codebase]: https://github.com/lmstudio-ai
+[Documentation]: https://lmstudio.ai/docs/
+[Website]: https://lmstudio.ai/
+
+
diff --git a/knowledge base/ai/ollama.md b/knowledge base/ai/ollama.md
new file mode 100644
index 0000000..8a99d14
--- /dev/null
+++ b/knowledge base/ai/ollama.md
@@ -0,0 +1,90 @@
+# Ollama
+
+The easiest way to get up and running with large language models.
+
+
+
+1. [TL;DR](#tldr)
+1. [Further readings](#further-readings)
+ 1. [Sources](#sources)
+
+## TL;DR
+
+
+**Setup**
+
+```sh
+brew install --cask 'ollama-app'  # GUI app; alternatively 'brew install ollama' for the CLI only
+```
+
+
+
+Cloud models are automatically offloaded to Ollama's cloud service.
+This allows one to keep using local tools while running larger models that would not fit on a personal computer.
+Such models are _usually_ tagged with the `cloud` suffix.
+
+
+**Usage**
+
+```sh
+# Download models.
+ollama pull 'qwen2.5-coder:7b'
+ollama pull 'glm-4.7:cloud'
+
+# List pulled models.
+ollama list
+
+# Run models.
+ollama run 'gemma3'
+
+# Quickly set up a coding tool with Ollama models.
+ollama launch
+
+# Launch a specific tool with a specific model.
+ollama launch 'claude' --model 'glm-4.7-flash'
+
+# Only configure the tool, without launching it.
+ollama launch 'claude' --config
+
+# Sign into Ollama cloud, or create a new account.
+ollama signin
+```
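+
+Local tools usually talk to Ollama through its HTTP API, which listens on port 11434 by default; a quick smoke test, assuming the daemon is running and `gemma3` has been pulled:
+
+```sh
+curl -s 'http://localhost:11434/api/generate' -d '{
+  "model": "gemma3",
+  "prompt": "Why is the sky blue?",
+  "stream": false
+}'
+```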
+
+
+
+
+
+## Further readings
+
+- [Website]
+- [Codebase]
+- [Blog]
+
+### Sources
+
+- [Documentation]
+
+
+
+
+
+
+
+[Blog]: https://ollama.com/blog
+[Codebase]: https://github.com/ollama/ollama
+[Documentation]: https://docs.ollama.com/
+[Website]: https://ollama.com/
+
+
diff --git a/knowledge base/ai/openhands.md b/knowledge base/ai/openhands.md
new file mode 100644
index 0000000..21e963c
--- /dev/null
+++ b/knowledge base/ai/openhands.md
@@ -0,0 +1,73 @@
+# OpenHands
+
+Community and platform focused on AI-driven software development.
+
+1. [TL;DR](#tldr)
+1. [Further readings](#further-readings)
+ 1. [Sources](#sources)
+
+## TL;DR
+
+
+**Setup**
+
+```sh
+docker pull 'docker.openhands.dev/openhands/runtime:1.2-nikolaik'
+docker run -it --rm --name 'openhands-app' --pull='always' \
+ -e 'AGENT_SERVER_IMAGE_REPOSITORY=docker.openhands.dev/openhands/runtime' \
+ -e 'AGENT_SERVER_IMAGE_TAG=1.2-nikolaik' \
+ -e 'LOG_ALL_EVENTS=true' \
+ -v '/var/run/docker.sock:/var/run/docker.sock' \
+ -v "$HOME/.openhands:/.openhands" \
+ -p '3000:3000' \
+ --add-host 'host.docker.internal:host-gateway' \
+ 'docker.openhands.dev/openhands/openhands:1.2'
+```
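+
+Once the container is running, the web UI should be reachable on the mapped port; a quick check, assuming the default settings above:
+
+```sh
+curl -fsS 'http://localhost:3000' > /dev/null && echo 'OpenHands UI is up'
+```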
+
+
+
+
+
+
+
+## Further readings
+
+- [Website]
+- [Codebase]
+- [Blog]
+
+### Sources
+
+- [Documentation]
+
+
+
+
+
+
+
+[Blog]: https://openhands.dev/blog
+[Codebase]: https://github.com/OpenHands/OpenHands
+[Documentation]: https://docs.openhands.dev/
+[Website]: https://openhands.dev/
+
+
diff --git a/knowledge base/ai/use a local llm for coding assistance in vscode.md b/knowledge base/ai/use a local llm for coding assistance in vscode.md
new file mode 100644
index 0000000..66e7491
--- /dev/null
+++ b/knowledge base/ai/use a local llm for coding assistance in vscode.md
@@ -0,0 +1,56 @@
+# Use a local LLM for coding assistance in VSCode
+
+1. [Setup](#setup)
+1. [Further readings](#further-readings)
+ 1. [Sources](#sources)
+
+## Setup
+
+1. Install Ollama and download the models you mean to use.
+   Refer to [Ollama].
+
+
+
+ ```sh
+ brew install 'ollama'
+ ollama pull 'llama3.1:8b'
+ ollama pull 'qwen2.5-coder:7b'
+ ollama pull 'nomic-embed-text:latest'
+ ```
+
+
+
+1. Install the [Continue VSCode extension].
+1. Configure the extension to use local LLMs only.
+
+
+
+ 1. Open the extension's sidebar.
+ 1. In the top-right corner, select _Local Config_ from the dropdown menu.
+ 1. If needed, tweak the configuration file.
+
+
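+The local-only configuration might end up looking something like this, wiring up the models pulled above (a sketch in the legacy `config.json` format; field names may differ across Continue versions):
+
+```json
+{
+  "models": [
+    { "title": "Llama 3.1 8B", "provider": "ollama", "model": "llama3.1:8b" }
+  ],
+  "tabAutocompleteModel": {
+    "title": "Qwen2.5 Coder 7B", "provider": "ollama", "model": "qwen2.5-coder:7b"
+  },
+  "embeddingsProvider": {
+    "provider": "ollama", "model": "nomic-embed-text:latest"
+  }
+}
+```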
+
+## Further readings
+
+- [Ollama]
+- [Continue VSCode extension]
+
+### Sources
+
+- [How to use a local LLM as a free coding copilot in VS Code]
+
+
+
+
+
+[Continue VSCode extension]: https://marketplace.visualstudio.com/items?itemName=Continue.continue
+[Ollama]: ollama.md
+[How to use a local LLM as a free coding copilot in VS Code]: https://medium.com/@smfraser/how-to-use-a-local-llm-as-a-free-coding-copilot-in-vs-code-6dffc053369d
diff --git a/knowledge base/jargon.md b/knowledge base/jargon.md
index 617b980..55dfb92 100644
--- a/knowledge base/jargon.md
+++ b/knowledge base/jargon.md
@@ -90,6 +90,7 @@
| LAN | Local Area Network | |
| LED | Light Emitting Diode | |
| LIFO | Last In First Out | |
+| LLM | Large Language Model | |
| M2COTS | Mass Market COTS | Widely available COTS products |
| MR | Merge Request | Prevalently used in GitLab |
| NACL | Network ACL | |