
Karpenter

Open-source, just-in-time cloud node provisioner for Kubernetes.

  1. TL;DR
  2. Setup
  3. Further readings
    1. Sources

TL;DR

Karpenter works by:

  1. Watching for unschedulable pods.
  2. Evaluating unschedulable pods' scheduling constraints (resource requests, node selectors, affinities, tolerations, and topology spread constraints).
  3. Provisioning cloud-based nodes meeting the requirements of unschedulable pods.
  4. Deleting nodes when no longer needed.
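The scheduling constraints evaluated in step 2 are the standard Kubernetes pod spec fields. A minimal illustration (all names and values are made up for the example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: example
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:        # resource requests Karpenter sizes nodes against
          cpu: "2"
          memory: 4Gi
  nodeSelector:          # node selectors
    karpenter.sh/capacity-type: on-demand
  tolerations:           # tolerations
    - key: dedicated
      value: batch
      effect: NoSchedule
  topologySpreadConstraints:   # topology spread constraints
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: example
```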

Karpenter runs as a workload on the cluster.

Should one manually delete a Karpenter-provisioned node, Karpenter gracefully cordons, drains, and shuts down the corresponding instance.
Under the hood, Karpenter adds a finalizer to each node object it provisions. The finalizer blocks deletion until all pods are drained and the instance is terminated. This only works for nodes provisioned by Karpenter.
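The finalizer is visible on the Node object itself (node name below is an example):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: ip-10-0-1-23.ec2.internal   # example name
  finalizers:
    # blocks deletion until pods are drained and the instance is terminated
    - karpenter.sh/termination
```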

Setup
# Managed NodeGroups
helm --namespace 'kube-system' upgrade --create-namespace \
  --install 'karpenter' 'oci://public.ecr.aws/karpenter/karpenter' --version '1.1.1' \
  --set 'settings.clusterName=myCluster' \
  --set 'settings.interruptionQueue=myCluster' \
  --set 'controller.resources.requests.cpu=1' \
  --set 'controller.resources.requests.memory=1Gi' \
  --set 'controller.resources.limits.cpu=1' \
  --set 'controller.resources.limits.memory=1Gi' \
  --wait

# Fargate
# As for managed NodeGroups, but with a serviceAccount annotation for IAM Roles for Service Accounts (IRSA)
helm … \
  --set 'serviceAccount.annotations."eks.amazonaws.com/role-arn"=arn:aws:iam::012345678901:role/myCluster-karpenter'
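The chart only installs the controller. Karpenter provisions nodes according to NodePool resources, which must be created separately. A minimal sketch for the AWS provider (names, limits, and the referenced EC2NodeClass are illustrative):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:            # provider-specific node configuration
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default          # assumes an EC2NodeClass named 'default' exists
  limits:
    cpu: 1000                  # cap on total provisioned CPU
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
```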


Karpenter's controller and webhook deployments are designed to run as workloads on the cluster.

As of 2024-12-24, Karpenter supports only AWS and Azure nodes.
As part of the installation process, one will need credentials from the underlying cloud provider that allow Karpenter to start nodes and add them to the cluster as needed.

Further readings

Sources