# Use a local LLM for coding assistance in VSCode

## Setup
- Install Ollama and download the models you mean to use. Refer to Ollama.

  ```sh
  brew install 'ollama'
  ollama pull 'llama3.1:8b'
  ollama pull 'qwen2.5-coder:7b'
  ollama pull 'nomic-embed-text:latest'
  ```

  Ollama's server must be running for the extension to reach it; see the check after this list.
- Install the Continue VSCode extension (a CLI install sketch follows this list).
- Configure the extension to use local LLMs only:
  - Open the extension's sidebar.
  - In the top-right, select Local Config from the dropdown menu.
  - If needed, tweak the configuration file (a hedged example follows this list).
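
Continue reaches the models through Ollama's local HTTP API, so the server has to be up before the extension can use it. A minimal check, assuming Ollama's default listen address of `localhost:11434`:

```sh
# Start the Ollama server (skip if it is already running).
brew services start 'ollama'    # or run `ollama serve` in a foreground terminal

# The server answers a plain GET on its root with "Ollama is running".
curl 'http://localhost:11434'
```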
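The extension from the second step can also be installed non-interactively with VSCode's own CLI. The marketplace identifier used below is an assumption; verify it on the extension's marketplace page if the command fails.

```sh
# Install the Continue extension from the command line.
# 'Continue.continue' is the assumed marketplace ID.
code --install-extension 'Continue.continue'
```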
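As for the configuration file in the last step, the sketch below writes a Continue config that points chat, autocomplete, and embeddings at the locally pulled Ollama models. The path (`~/.continue/config.json`) and the key names follow Continue's older JSON schema; newer releases moved to a YAML file, so treat both as assumptions to verify against the extension's documentation.

```sh
# Hedged sketch: point Continue at local Ollama models only.
# Path and key names assume Continue's legacy JSON config schema.
mkdir -p ~/.continue
cat > ~/.continue/config.json <<'EOF'
{
  "models": [
    {
      "title": "Llama 3.1 8B",
      "provider": "ollama",
      "model": "llama3.1:8b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen 2.5 Coder 7B",
    "provider": "ollama",
    "model": "qwen2.5-coder:7b"
  },
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text:latest"
  }
}
EOF
```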