
Prometheus

Metrics gathering and alerting tool.

It collects metrics, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.

  1. TL;DR
  2. Components
    1. Extras
  3. Installation
  4. Configuration
    1. Filter metrics
  5. Queries
  6. Storage
    1. Local storage
    2. External storage
    3. Backfilling
  7. Send metrics to other Prometheus servers
  8. Exporters
  9. Management API
    1. Take snapshots of the current data
  10. High availability
  11. Further readings
    1. Sources

TL;DR

Metrics are values that measure something.

Prometheus is designed to store metrics' changes over time.

Prometheus collects metrics by:

  • Actively pulling (scraping) them from configured targets at given intervals.
    Targets shall expose an HTTP endpoint for Prometheus to scrape.
  • Having them pushed to it by clients.
    This is most useful in the event the sources are behind firewalls, or otherwise prohibited from opening ports by security policies.
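
Direct pushes usually go through the Pushgateway, which Prometheus then scrapes like any other target. A minimal sketch, assuming a Pushgateway reachable at the hypothetical host pushgateway.example.org:9091:

# Push a single ad-hoc metric to the (assumed) Pushgateway under job 'some_job'.
echo 'some_metric 3.14' | curl --data-binary @- 'http://pushgateway.example.org:9091/metrics/job/some_job'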

One can leverage exporters to collect metrics from targets that do not natively provide a suitable HTTP endpoint for Prometheus to scrape.
Exporters are small, purpose-built applications that collect their targets' metrics in different ways, then expose them on an HTTP endpoint on their behalf.

Prometheus requires a configuration file for scraping settings.

Setup
docker pull 'prom/prometheus'
docker run -p '9090:9090' -v "$PWD/config/dir:/etc/prometheus" -v 'prometheus-data:/prometheus' 'prom/prometheus'

helm repo add 'prometheus-community' 'https://prometheus-community.github.io/helm-charts' \
&& helm repo update 'prometheus-community'
helm show values 'prometheus-community/prometheus'
helm -n 'prometheus' upgrade -i --create-namespace 'prometheus' 'prometheus-community/prometheus'
kubectl -n 'prometheus' get pods -l 'app.kubernetes.io/name=prometheus,app.kubernetes.io/instance=prometheus' \
  -o jsonpath='{.items[0].metadata.name}' \
| xargs -I '%%' kubectl -n 'prometheus' port-forward "%%" '9090'
helm --namespace 'prometheus' uninstall 'prometheus'
Usage
# Start the process.
prometheus
prometheus --web.enable-admin-api

# Validate the configuration file.
promtool check config /etc/prometheus/prometheus.yml
docker run --rm -v "$PWD/config.yaml:/etc/prometheus/prometheus.yml:ro" --entrypoint 'promtool' 'prom/prometheus' \
  check config /etc/prometheus/prometheus.yml

# Reload the configuration file *without* restarting the process.
kill -s 'SIGHUP' '3969'
pkill --signal 'HUP' 'prometheus'
curl -i -X 'POST' 'localhost:9090/-/reload'  # if the lifecycle API is enabled (--web.enable-lifecycle)

# Shut down the process *gracefully*.
kill -s 'SIGTERM' '3969'
pkill --signal 'TERM' 'prometheus'

# Push test metrics to a remote.
promtool push metrics 'http://mimir.example.org:8080/api/v1/push'
docker run --rm --entrypoint 'promtool' 'prom/prometheus' push metrics 'http://mimir.example.org:8080/api/v1/push'

Components

A Prometheus deployment is composed of the Prometheus server, the Alertmanager, and exporters.

Alerting rules can be created within Prometheus, and configured to send custom alerts to Alertmanager.
Alertmanager then processes and handles the alerts, including sending notifications through different mechanisms or third-party services.
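
A minimal alerting rule file (a sketch; the alert name, threshold and labels are illustrative) that could be loaded via rule_files and would fire when a scraped target stops responding:

groups:
  - name: example-alerts
    rules:
      - alert: InstanceDown        # illustrative alert name
        expr: up == 0              # 'up' is set to 0 when a scrape fails
        for: 5m                    # the target must stay down for 5 minutes before firing
        labels:
          severity: critical
        annotations:
          summary: 'Instance {{ $labels.instance }} is down'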

Exporters can be libraries, processes, devices, or anything else exposing metrics so that they can be scraped by Prometheus.
Such metrics are usually made available at the /metrics endpoint, which allows them to be scraped directly by Prometheus without the need for an agent.
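
For example, assuming a Node exporter running locally on its default port 9100, its metrics can be inspected with any HTTP client:

# Peek at the CPU metrics the node exporter exposes on its default port.
curl -s 'http://localhost:9100/metrics' | grep '^node_cpu_seconds_total' | head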

Extras

As a welcome addition, Grafana can be configured to use Prometheus as one of its data sources, providing data visualization and dashboarding functions on top of the data Prometheus collects.
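
A provisioned Grafana data source pointing at Prometheus could look like the following sketch (file path and URL are assumptions for a local setup):

# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090   # assumed local Prometheus server
    isDefault: true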

Installation

brew install 'prometheus'
docker run -p '9090:9090' -v './prometheus.yml:/etc/prometheus/prometheus.yml' --name prometheus -d 'prom/prometheus'
Kubernetes
helm repo add 'prometheus-community' 'https://prometheus-community.github.io/helm-charts'
helm -n 'monitoring' upgrade -i --create-namespace 'prometheus' 'prometheus-community/prometheus'

helm -n 'monitoring' upgrade -i --create-namespace --repo 'https://prometheus-community.github.io/helm-charts' \
  'prometheus' 'prometheus'

Access components:

Component          From within the cluster
Prometheus server  prometheus-server.monitoring.svc.cluster.local:80
Alertmanager       prometheus-alertmanager.monitoring.svc.cluster.local:80
Push gateway       prometheus-pushgateway.monitoring.svc.cluster.local:80
# Access the prometheus server.
kubectl -n 'monitoring' get pods -l 'app.kubernetes.io/name=prometheus,app.kubernetes.io/instance=prometheus' \
  -o jsonpath='{.items[0].metadata.name}' \
| xargs -I {} kubectl -n 'monitoring' port-forward {} 9090

# Access alertmanager.
kubectl -n 'monitoring' get pods -l 'app.kubernetes.io/name=alertmanager,app.kubernetes.io/instance=prometheus' \
  -o jsonpath='{.items[0].metadata.name}' \
| xargs -I {} kubectl -n 'monitoring' port-forward {} 9093

# Access the push gateway.
kubectl -n 'monitoring' get pods -l 'app=prometheus-pushgateway,component=pushgateway' \
  -o jsonpath='{.items[0].metadata.name}' \
| xargs -I {} kubectl -n 'monitoring' port-forward {} 9091

Configuration

Refer to Configuration.

Prometheus is configured via both command-line flags and a configuration file.

The command-line flags configure immutable system parameters (e.g. storage locations, amount of data to keep on disk and in memory).
The configuration file is a YAML file that defines everything related to:

  • Scraping jobs and their instances.
  • Which rule files to load.

The default configuration file is at /etc/prometheus/prometheus.yml.
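
A typical invocation combining the two (flag values here are illustrative):

prometheus \
  --config.file='/etc/prometheus/prometheus.yml' \
  --storage.tsdb.path='/var/lib/prometheus/data' \
  --storage.tsdb.retention.time='30d' \
  --web.listen-address=':9090'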

Configuration file example
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: [ 'localhost:9090' ]
  - job_name: nodes
    static_configs:
      - targets:
          - fqdn:9100
          - host.local:9100
  - job_name: router
    static_configs:
      - targets: [ 'openwrt.local:9100' ]
    metric_relabel_configs:
      - source_labels: [__name__]
        action: keep
        regex: '(node_cpu)'

Prometheus can reload the configuration file without restarting its process by:

  • Sending the SIGHUP signal to the process:

    kill -s 'SIGHUP' '3969'
    pkill --signal 'HUP' 'prometheus'
    
  • Sending a POST HTTP request to the /-/reload endpoint.
    Requires the process to start with the --web.enable-lifecycle flag enabled.

If the new configuration is not well-formed, changes will not be applied.
This will also reload any configured rule files.
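
For instance, with the lifecycle endpoint enabled (assuming a local server on the default port):

prometheus --config.file='/etc/prometheus/prometheus.yml' --web.enable-lifecycle &
curl -i -X 'POST' 'http://localhost:9090/-/reload'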

Filter metrics

Refer to How relabeling in Prometheus works, Scrape selective metrics in Prometheus and Dropping metrics at scrape time with Prometheus.

Use metric relabeling configurations to select which series to ingest after scraping:

 scrape_configs:
   - job_name: router
     …
+    metric_relabel_configs:
+      - # do *not* record metrics whose name matches the regex
+        # in this case, those whose name starts with 'node_disk_'
+        source_labels: [ __name__ ]
+        action: drop
+        regex: node_disk_.*
   - job_name: hosts
     …
+    metric_relabel_configs:
+      - # *only* record metrics whose name matches the regex
+        # in this case, those whose name starts with 'node_cpu_' with cpu=1 and mode=user
+        source_labels:
+          - __name__
+          - cpu
+          - mode
+        regex: node_cpu_.*1.*user.*
+        action: keep

Queries

Prometheus uses the PromQL query syntax.

All data is stored as time series, each one identified by a metric name (e.g., node_filesystem_avail_bytes for available filesystem space).
Metric names can be used in query expressions to select all the time series with that name, producing an instant vector.

Time series can be filtered using selectors and labels (sets of key-value pairs):

node_filesystem_avail_bytes{fstype="ext4"}
node_filesystem_avail_bytes{fstype!="xfs"}
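
The same expressions can also be evaluated outside the web UI through the HTTP API (a sketch, assuming a server listening on the default port):

curl -s 'http://localhost:9090/api/v1/query' --data-urlencode 'query=node_filesystem_avail_bytes{fstype="ext4"}'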

Square brackets allow selecting a range of samples from the current time backwards:

node_memory_MemAvailable_bytes[5m]

When using time ranges, the returned vector will be a range vector.

Functions can be used to build advanced queries.

Example
100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{job='node_exporter',mode='idle'}[5m])))

Labels are used to filter the job and the mode.

node_cpu_seconds_total returns a counter.
The irate() function calculates the per-second rate of change based on the last two data points of the range interval given to it as an argument.

To calculate the overall CPU usage, the idle mode of the metric is used.

Since a processor's idle fraction is the complement of its busy fraction, the average irate value is subtracted from 1.

To make it all a percentage, the computed value is multiplied by 100.
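
Breaking the example down into its intermediate expressions (same metric, job and labels as above):

# Per-second idle rate for every CPU/mode series, over the last 5 minutes.
irate(node_cpu_seconds_total{job='node_exporter',mode='idle'}[5m])

# Average idle fraction per host (0 to 1).
avg by(instance) (irate(node_cpu_seconds_total{job='node_exporter',mode='idle'}[5m]))

# Busy fraction, as a percentage.
100 * (1 - avg by(instance) (irate(node_cpu_seconds_total{job='node_exporter',mode='idle'}[5m])))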

Query examples
# Get all allocatable CPU cores where the 'node' attribute matches regex ".*-runners-.*" grouped by node
sum(kube_node_status_allocatable_cpu_cores{node=~".*-runners-.*"}) BY (node)

# FIXME: CPU used by GitLab runner 'build' containers as a fraction of their CPU quota, per pod.
sum(rate(container_cpu_usage_seconds_total{namespace="gitlab-runners",container="build",pod_name=~"runner.*"}[30s])) by (pod_name,container) /
sum(container_spec_cpu_quota{namespace="gitlab-runners",pod_name=~"runner.*"}/container_spec_cpu_period{namespace="gitlab-runners",pod_name=~"runner.*"}) by (pod_name,container)

Storage

Refer to Storage.

Prometheus uses a local, on-disk time series database by default.
It can optionally integrate with remote storage systems.

Local storage

Local storage is not clustered nor replicated. This makes it not arbitrarily scalable or durable in the face of outages.
The use of RAID disks is suggested for storage availability, and snapshots are recommended for backups.

The local storage is not intended to be durable long-term storage and external solutions should be used to achieve extended retention and data durability.

External storage may be used via the remote read/write APIs.
These storage systems vary greatly in durability, performance, and efficiency.

Ingested samples are grouped into blocks of two hours.
Each two-hour block consists of a uniquely named directory. This directory contains:

  • A chunks subdirectory, hosting all the time series samples for that window of time.
    Samples are grouped into one or more segment files of up to 512 MB each by default.
  • A metadata file.
  • An index file.
    This indexes metric names and labels to time series in the chunks directory.

When series are deleted via the API, deletion records are stored in separate tombstone files; the data is not deleted immediately from the chunk segments.

The current block for incoming samples is kept in memory and is not fully persisted.
This is secured against crashes by a write-ahead log (WAL) that can be replayed when the Prometheus server restarts.

Write-ahead log files are stored in the wal directory in segments of 128 MB in size.
These files contain raw data that has not yet been compacted.
Prometheus will retain a minimum of three write-ahead log files. Servers may retain more than these three WAL files in order to keep at least two hours of raw data stored.

The server's data directory looks something like this:

./data
├── 01BKGV7JBM69T2G1BGBGM6KB12
│   └── meta.json
├── 01BKGTZQ1SYQJTR4PB43C8PD98
│   ├── chunks
│   │   └── 000001
│   ├── tombstones
│   ├── index
│   └── meta.json
├── 01BKGTZQ1HHWHV8FBJXW1Y3W0K
│   └── meta.json
├── 01BKGV7JC0RY8A6MACW02A2PJD
│   ├── chunks
│   │   └── 000001
│   ├── tombstones
│   ├── index
│   └── meta.json
├── chunks_head
│   └── 000001
└── wal
    ├── 000000002
    └── checkpoint.00000001
        └── 00000000

The initial two-hour blocks are eventually compacted into longer blocks in the background.
Each block will contain data spanning up to 10% of the retention time or 31 days, whichever is smaller.

The retention time defaults to 15 days.
Expired block cleanup happens in the background. It may take up to two hours to remove expired blocks. Blocks must be fully expired before they are removed.

Prometheus stores an average of 1 to 2 bytes per sample.
To plan the capacity of a Prometheus server, one can use the following rough formula:

needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample
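
As a rough worked example (the figures are assumptions): with the default 15-day retention, 100 000 ingested samples per second and 2 bytes per sample, roughly 260 GB of disk is needed:

# 15 days of retention * 100k samples/s * 2 bytes/sample
echo $(( 15 * 24 * 3600 * 100000 * 2 ))   # 259200000000 bytes ≈ 259 GB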

To lower the rate of ingested samples, one can:

  • Reduce the number of scraped time series (fewer targets or fewer series per target).
  • Increase the scrape interval.

Reducing the number of series is likely more effective, due to compression of samples within a series.

Should the local storage become corrupted for whatever reason, the best strategy is to shut down the Prometheus server process, and then remove the entire storage directory. This does mean losing all the stored data.
One can alternatively try removing individual block directories or the wal directory to resolve the problem. Doing so means losing approximately two hours of data per block directory.
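
A sketch of the full-wipe approach (the service name and data directory are assumptions and depend on how Prometheus was installed):

systemctl stop 'prometheus'            # assumed systemd unit name
rm -rf '/var/lib/prometheus/data'      # assumed --storage.tsdb.path; all stored data is lost
systemctl start 'prometheus'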

Prometheus does not support non-POSIX-compliant filesystems as local storage.
Unrecoverable corruptions may happen.
NFS filesystems (including AWS's EFS) are not supported: even though NFS could be POSIX-compliant, most of its implementations are not.
It is strongly recommended to use a local filesystem for reliability.

If both time and size retention policies are specified, whichever triggers first will take precedence.
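
Both policies are set via command-line flags; the values here are illustrative:

prometheus \
  --storage.tsdb.retention.time='30d' \
  --storage.tsdb.retention.size='50GB'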

External storage

TODO

Backfilling

TODO

Send metrics to other Prometheus servers

Also see How to set up and experiment with Prometheus remote-write.

The remote server must accept incoming metrics.
One way is to have it start with the --web.enable-remote-write-receiver option.
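
For example, on the receiving Prometheus server (the config path is illustrative):

prometheus --config.file='/etc/prometheus/prometheus.yml' --web.enable-remote-write-receiver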

Use the remote_write setting to configure the sender to forward metrics to the receiver:

remote_write:
  - url: http://prometheus.receiver.fqdn:9090/api/v1/write
  - url: https://aps-workspaces.eu-east-1.amazonaws.com/workspaces/ws-01234567-abcd-1234-abcd-01234567890a/api/v1/remote_write
    queue_config:
      max_samples_per_send: 1000
      max_shards: 100
      capacity: 1500
    sigv4:
      region: eu-east-1

Exporters

Refer to Exporters and integrations.

Exporters are libraries and web servers that gather metrics from third-party systems, then either send them to Prometheus servers or expose them as Prometheus metrics.

They are used in cases where it is not feasible to instrument systems to send or expose Prometheus metrics directly.

Exporters of interest:

Exporter        Summary
BOINC exporter  Metrics for BOINC client
Node exporter   OS-related metrics
SNMP exporter   Basically SNMP in Prometheus format
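
As a quick smoke test, the Node exporter can be run as a container and then added to the scrape configuration like the hosts in the configuration example above (without host mounts it only reports the container's own view of the system, so this is only a sketch, not a production setup):

docker run -d --name 'node-exporter' -p '9100:9100' 'prom/node-exporter'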

Management API

Take snapshots of the current data

Requires the TSDB APIs to be enabled (--web.enable-admin-api).

Use the snapshot API endpoint to create a snapshot of all current data under snapshots/<datetime>-<rand> in the TSDB's data directory; the snapshot's directory name is returned in the response.

It will optionally skip including data that is only present in the head block, and which has not yet been compacted to disk.

POST /api/v1/admin/tsdb/snapshot
PUT /api/v1/admin/tsdb/snapshot

URL query parameters:

  • skip_head=<bool>: skip data present in the head block. Optional.

Examples:

$ curl -X 'POST' 'http://localhost:9090/api/v1/admin/tsdb/snapshot'
{
  "status": "success",
  "data": {
    "name": "20171210T211224Z-2be650b6d019eb54"
  }
}

The snapshot now exists at <data-dir>/snapshots/20171210T211224Z-2be650b6d019eb54
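
To leave out the not-yet-compacted head block, pass the skip_head parameter (assuming the same local server):

curl -X 'POST' 'http://localhost:9090/api/v1/admin/tsdb/snapshot?skip_head=true'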

High availability

Typically achieved by:

  1. Running multiple Prometheus replicas.
    Replicas could each focus on a subset of the whole data, or just scrape the targets multiple times and leave the deduplication to other tools.
  2. Running a separate Alertmanager instance.
    This would handle alerts from all the Prometheus instances, automatically deduplicating any repeated alerts.
  3. Using tools like Thanos, Cortex, or Grafana's Mimir to aggregate and deduplicate data.
  4. Directing visualizers like Grafana to query the aggregator instead of any Prometheus replica.
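
When deduplication is delegated to an aggregator (point 3), each replica is usually distinguished through external labels. A sketch follows; note that the expected label names depend on the aggregator (e.g. Mimir's HA tracker looks at cluster and __replica__ by default, while Thanos uses a configurable replica label):

global:
  external_labels:
    cluster: example-cluster   # same value on every replica
    __replica__: replica-1     # unique value per replica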

Further readings

Sources