diff --git a/knowledge base/loki.md b/knowledge base/loki.md
index d893a03..235a15b 100644
--- a/knowledge base/loki.md
+++ b/knowledge base/loki.md
@@ -61,7 +61,16 @@ helm --namespace 'loki' upgrade --create-namespace --install --cleanup-on-fail '
--repo 'https://grafana.github.io/helm-charts' 'loki-distributed'
```
-Default configuration file for package-based installations is `/etc/loki/config.yml` or `/etc/loki/loki.yaml`.
+On startup, Loki tries to load a configuration file named `config.yaml` from the current working directory, or from the
+`config/` subdirectory if the first does not exist.
+If neither file exists, Loki **will** give up and fail to start.
+
+The default configuration file **for package-based installations** is located at `/etc/loki/config.yml` or
+`/etc/loki/loki.yaml`.
+The Docker image uses `/etc/loki/local-config.yaml` by default (set via the image's `CMD` instruction).
+
+Some settings are currently **not** settable via CLI flags (e.g. `schema_config`, `storage_config.aws.*`).
+Use a configuration file for those.
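+
+A configuration file setting those values might look like the following sketch. All values here are illustrative
+assumptions, not a recommended setup:
+
+```yaml
+# Hypothetical fragment covering settings that have no CLI flags.
+schema_config:
+  configs:
+    - from: '2024-01-01'      # date this schema takes effect
+      store: 'tsdb'           # index store type
+      object_store: 'filesystem'
+      schema: 'v13'
+      index:
+        prefix: 'index_'
+        period: '24h'
+storage_config:
+  filesystem:
+    directory: '/loki/chunks'
+```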
Disable reporting
@@ -79,20 +88,32 @@ Default configuration file for package-based installations is `/etc/loki/config.
Usage
```sh
-# Verify configuration files
+# Get help.
+loki -help
+docker run --rm --name 'loki-help' 'grafana/loki:3.3.2' -help
+
+# Verify configuration files.
loki -verify-config
-loki -config.file='/etc/loki/local-config.yaml' -verify-config
+loki -config.file='/etc/loki/local-config.yaml' -print-config-stderr -verify-config
+docker run --rm 'grafana/loki' \
+ -verify-config -config.file '/etc/loki/local-config.yaml'
+docker run --rm -v "$PWD/custom-config.yaml:/etc/loki/local-config.yaml:ro" 'grafana/loki:3.3.2' \
+ -verify-config -config.file '/etc/loki/local-config.yaml' -log-config-reverse-order
-# List available component targets
+# List available component targets.
loki -list-targets
-docker run 'docker.io/grafana/loki' -config.file='/etc/loki/local-config.yaml' -list-targets
+docker run --rm 'docker.io/grafana/loki' -config.file='/etc/loki/local-config.yaml' -list-targets
-# Start server components
+# Start server components.
loki
loki -target='all'
loki -config.file='/etc/loki/config.yaml' -target='read'
-# Print the final configuration to stderr and start
+# Print the final configuration to the logs and continue.
+loki -log-config-reverse-order …
+loki -log.level='info' -log-config-reverse-order …
+
+# Print the final configuration to stderr and continue.
loki -print-config-stderr …
# Check the server is working
@@ -108,6 +129,33 @@ curl 'http://loki.fqdn:3101/ready'
curl 'http://loki.fqdn:3102/ready'
```
+```plaintext
+GET /ready
+GET /metrics
+GET /services
+```
+
+
+ Real world use cases
+
+```sh
+# Check some values are applied to the final configuration.
+docker run --rm --name 'validate-cli-config' 'grafana/loki:3.3.2' \
+ -verify-config -config.file '/etc/loki/local-config.yaml' -print-config-stderr \
+ -common.storage.ring.instance-addr='127.0.0.1' -server.path-prefix='/loki' \
+ 2>&1 \
+| grep -E 'instance_addr|path_prefix'
+docker run --rm --name 'validate-cli-config' 'grafana/loki:3.3.2' \
+ -verify-config -config.file '/etc/loki/local-config.yaml' -print-config-stderr \
+ -common.storage.ring.instance-addr='127.0.0.1' -server.path-prefix='/loki' \
+ 2> '/tmp/loki.config.yaml' \
+&& sed '/msg="config is valid"/d' '/tmp/loki.config.yaml' \
+| yq -Sy '.common|.instance_addr,.path_prefix' - \
+|| cat '/tmp/loki.config.yaml'
+```
+
## Components
diff --git a/knowledge base/mimir.md b/knowledge base/mimir.md
index 7487042..4f593d2 100644
--- a/knowledge base/mimir.md
+++ b/knowledge base/mimir.md
@@ -31,29 +31,34 @@ and set up alerting rules across multiple tenants to leverage tenant federation.
Scrapers (like Prometheus or Grafana's Alloy) need to send metrics data to Mimir.
Mimir will **not** scrape metrics itself.
-Mimir listens by default on port `8080` for HTTP and on port `9095` for GRPC.
+The server listens by default on port `8080` for HTTP and on port `9095` for GRPC.
It also internally advertises data or actions to members in the cluster using [hashicorp/memberlist], which implements a
[gossip protocol]. This uses port `7946` by default, and **must** be reachable by all members in the cluster to work.
-Mimir stores time series in TSDB blocks, that are uploaded to an object storage bucket.
-Such blocks are the same that Prometheus and Thanos use, though each application stores blocks in different places and
-uses slightly different metadata files for them.
+Mimir stores time series in TSDB blocks on the local file system by default.
+It can upload those blocks to an object storage bucket instead.
+
+The data blocks use the same format that Prometheus and Thanos use for storage, though each application stores blocks in
+different places and uses slightly different metadata files for them.
Mimir supports multiple tenants, and stores blocks on a **per-tenant** level.
-Multi-tenancy is enabled by default, and can be disabled using the `-auth.multitenancy-enabled=false` option.
-If enabled, then multi-tenancy **will require every API request** to have the `X-Scope-OrgID` header with the value set
+Multi-tenancy is **enabled** by default. It can be disabled using the `-auth.multitenancy-enabled=false` option.
+
+When **enabled**, multi-tenancy **will require every API request** to have the `X-Scope-OrgID` header with the value set
to the tenant ID one is authenticating for.
-When multi-tenancy is **disabled**, it will only manage a single tenant going by the name `anonymous`.
+When multi-tenancy is **disabled**, Mimir will store everything under a single tenant named `anonymous`, and will
+assume all API requests are for it by automatically setting the `X-Scope-OrgID` header when it is not given.
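+
+As an illustration, one can query Mimir's Prometheus-compatible API as a given tenant. The host and tenant ID below are
+hypothetical:
+
+```sh
+tenant='team-a'  # hypothetical tenant ID
+
+# Every request must carry the tenant in the 'X-Scope-OrgID' header.
+curl -H "X-Scope-OrgID: ${tenant}" 'http://mimir.example.org:8080/prometheus/api/v1/labels'
+
+# With multi-tenancy disabled, the same data lives under the 'anonymous' tenant.
+curl -H 'X-Scope-OrgID: anonymous' 'http://mimir.example.org:8080/prometheus/api/v1/labels'
+```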
Blocks can be uploaded using the `mimirtool` utility, so that Mimir can access them.
-Mimir **will** perform some sanitization and validation of each block's metadata.
+The server **will** perform some sanitization and validation of each block's metadata.
```sh
mimirtool backfill --address='http://mimir.example.org' --id='anonymous' 'block_1' … 'block_N'
```
As a result of validation, Mimir will probably reject Thanos' blocks due to unsupported labels.
-As a workaround, upload Thanos' blocks directly to Mimir's blocks bucket, using the `//` prefix.
+As a workaround, upload Thanos' blocks directly to Mimir's blocks directory or bucket, using the `//` prefix.
Setup
@@ -122,7 +127,7 @@ GET /metrics
Mimir's configuration file is YAML-based.
There is **no** default configuration file, but one _can_ be specified on launch.
-If no configuration file is given, **only** the default values will be used.
+If neither a configuration file nor CLI options are given, **only** the default values will be used.
```sh
mimir --config.file='./demo.yaml'
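+
+# CLI options override the corresponding values from the file.
+# The multi-tenancy flag is the one described above; 'demo.yaml' is the same example file.
+mimir --config.file='./demo.yaml' --auth.multitenancy-enabled='false'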
diff --git a/knowledge base/prometheus/README.md b/knowledge base/prometheus/README.md
index b36f4ee..95564e2 100644
--- a/knowledge base/prometheus/README.md
+++ b/knowledge base/prometheus/README.md
@@ -463,7 +463,7 @@ Typically achieved by:
1. Running a separate AlertManager instance.
This would handle alerts from **all** the Prometheus instances, automatically managing eventually duplicated data.
1. Using tools like [Thanos], [Cortex], or Grafana's [Mimir] to aggregate and deduplicate data.
-1. Directing visualizers like Grafana to the aggregator instead of the Prometheus replicas.
+1. Directing visualizers like [Grafana] to query the aggregator instead of any Prometheus replica.
## Further readings