# llama.cpp

> TODO

LLM inference engine written in C/C++.
Widely used as a base for AI tools like [Ollama] and [Docker model runner].

1. [TL;DR](#tldr)
1. [Further readings](#further-readings)
   1. [Sources](#sources)

## TL;DR
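A minimal usage sketch, assuming a llama.cpp build that provides the `llama-cli` and `llama-server` executables and a locally downloaded GGUF model; the model path, prompt, and port below are placeholders.

```sh
# One-off text completion from a local GGUF model (path and prompt are examples).
llama-cli -m ./models/model.gguf -p "Explain what llama.cpp does." -n 128

# Serve the same model over an OpenAI-compatible HTTP API on localhost.
llama-server -m ./models/model.gguf --port 8080
```

`llama-server` speaks an OpenAI-compatible HTTP API, so standard OpenAI clients can be pointed at `http://localhost:8080` instead of the hosted service.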
## Further readings

- [Codebase]
- [ik_llama.cpp]

### Sources

[Docker model runner]: ../docker.md#running-llms-locally
[Ollama]: ollama.md
[Codebase]: https://github.com/ggml-org/llama.cpp
[ik_llama.cpp]: https://github.com/ikawrakow/ik_llama.cpp