# llama.cpp

> LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware.

LLM inference engine written in C/C++.<br/>
Widely used as the base for AI tools like [Ollama] and [Docker model runner].

## Table of contents <!-- omit in toc -->

1. [TL;DR](#tldr)
1. [Further readings](#further-readings)
1. [Sources](#sources)

## TL;DR

<!-- Uncomment if used
<details>
<summary>Setup</summary>

```sh
```

</details>
-->
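
A minimal setup sketch, assuming Homebrew or a CMake toolchain is available; the commands follow upstream's README and may change between releases:

```sh
# Install prebuilt binaries via Homebrew.
brew install llama.cpp

# Or build from source.
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build --config Release
```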

<!-- Uncomment if used
<details>
<summary>Usage</summary>

```sh
```

</details>
-->
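
A usage sketch with the llama-cli binary produced by the build; the model path, prompt, and token limit below are placeholders:

```sh
# Run a one-shot prompt against a local GGUF model file.
# -m: model path, -p: prompt, -n: max number of tokens to generate.
llama-cli -m ./models/model.gguf -p "Explain llama.cpp in one sentence." -n 128
```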

<!-- Uncomment if used
<details>
<summary>Real world use cases</summary>

```sh
```

</details>
-->
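
As a real world sketch, llama-server exposes an OpenAI-compatible HTTP API, which is how other tools typically build on top of llama.cpp; the model path and port below are placeholders:

```sh
# Serve a local GGUF model over HTTP.
llama-server -m ./models/model.gguf --port 8080

# Query the OpenAI-compatible chat completions endpoint from another shell.
curl http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"Hello!"}]}'
```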

## Further readings

- [Codebase]
- [ik_llama.cpp]

### Sources

<!--
Reference
-->
<!-- In-article sections -->
<!-- Knowledge base -->
[Docker model runner]: ../docker.md#running-llms-locally
[Ollama]: ollama.md
<!-- Files -->
<!-- Upstream -->
[Codebase]: https://github.com/ggml-org/llama.cpp
<!-- Others -->
[ik_llama.cpp]: https://github.com/ikawrakow/ik_llama.cpp