
Karpenter

Open-source, just-in-time cloud node provisioner for Kubernetes.

  1. TL;DR
  2. Setup
    1. AWS
  3. Further readings
    1. Sources

TL;DR

Runs as a workload on the cluster.

Works by:

  1. Watching for unschedulable pods.
  2. Evaluating unschedulable pods' scheduling constraints (resource requests, node selectors, affinities, tolerations, and topology spread constraints).
  3. Provisioning cloud-based nodes meeting the resource requirements and scheduling constraints of unschedulable pods.
  4. Deleting nodes when no longer needed.
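As an illustration of the constraints evaluated in step 2, a pending pod such as the following (all names and values are made up) would drive Karpenter to provision an arm64 node with at least 2 CPUs and 4 GiB of memory available:

```shell
# Hypothetical pod whose scheduling constraints no existing node satisfies.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: constraint-demo
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
  tolerations:
    - key: example.com/dedicated    # made-up taint key
      operator: Exists
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: '2'
          memory: 4Gi
EOF
```

Should the pod stay Pending because no node matches, Karpenter launches one that does; once that node is Ready, the scheduler places the pod on it.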

Under the hood, Karpenter adds a finalizer to the Kubernetes node object it provisions.
The finalizer blocks node deletion until all pods on it are drained and the instance is terminated.
This only works for nodes provisioned by Karpenter.

Should one manually delete a Karpenter-provisioned Kubernetes node object, Karpenter will gracefully cordon, drain, and shut down the corresponding cloud instance.
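This behaviour can be observed on a live cluster. The node name below is hypothetical, and the finalizer key may vary by version (karpenter.sh/termination at the time of writing):

```shell
# Show the finalizer Karpenter added to one of its nodes.
kubectl get node 'ip-10-0-1-23.eu-west-1.compute.internal' \
  -o jsonpath='{.metadata.finalizers}'

# Deleting the node object triggers cordon, drain, and instance termination.
# The finalizer keeps the object around until cleanup completes.
kubectl delete node 'ip-10-0-1-23.eu-west-1.compute.internal'
```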

```shell
# Managed NodeGroups
helm --namespace 'kube-system' upgrade --create-namespace \
  --install 'karpenter' 'oci://public.ecr.aws/karpenter/karpenter' --version '1.1.1' \
  --set 'settings.clusterName=myCluster' \
  --set 'settings.interruptionQueue=myCluster' \
  --set 'controller.resources.requests.cpu=1' \
  --set 'controller.resources.requests.memory=1Gi' \
  --set 'controller.resources.limits.cpu=1' \
  --set 'controller.resources.limits.memory=1Gi' \
  --wait

# Fargate
# As per the managed NodeGroups, but with a serviceAccount annotation
helm … \
  --set 'serviceAccount.annotations."eks.amazonaws.com/role-arn"=arn:aws:iam::012345678901:role/myCluster-karpenter'
```
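Once the chart is installed, one can check that the controller came up; the label value below assumes the chart's default labelling:

```shell
kubectl --namespace 'kube-system' get pods -l 'app.kubernetes.io/name=karpenter'
kubectl --namespace 'kube-system' logs -l 'app.kubernetes.io/name=karpenter' --tail '20'
```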

Setup

Karpenter's controller and webhook are designed to run as workloads on the cluster.

As of 2025-06-08, it only supports AWS and Azure nodes.
As part of the installation, one will need credentials for the underlying cloud provider so that Karpenter can start nodes and add them to the cluster as needed.

Karpenter configuration comes in the form of:

  • A NodePool Custom Resource.
  • A NodeClass Custom Resource.
    Its specifics are defined by the cloud provider's implementation (e.g., EC2NodeClass on AWS).

A single Karpenter NodePool is capable of handling many different pod shapes.
A cluster may have more than one NodePool.
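A minimal NodePool could look like the following sketch; the name, limits, disruption settings, and EC2NodeClass reference are assumptions based on the karpenter.sh/v1 API:

```shell
kubectl apply -f - <<'EOF'
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ['amd64', 'arm64']
        - key: karpenter.sh/capacity-type
          operator: In
          values: ['spot', 'on-demand']
      nodeClassRef:
        group: karpenter.k8s.aws    # AWS provider implementation
        kind: EC2NodeClass
        name: default
  limits:
    cpu: '100'    # cap on the pool's total provisioned CPUs
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
EOF
```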

AWS

Leverages the Karpenter provider for AWS.

Requirements:

  • An IAM Role for Karpenter.
    Required to allow Karpenter to call AWS APIs.
  • An IAM Role and an instance profile for the EC2 instances Karpenter creates.
  • An EKS cluster access entry for the nodes' IAM role.
    Required by the nodes to be able to join the EKS cluster.
  • An SQS queue for Karpenter.
    Required to receive Spot interruption, instance rebalance recommendation, and other lifecycle events.
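On AWS, the NodeClass is an EC2NodeClass. A sketch tying together the node role and cluster resources listed above (the role name, cluster name, and discovery tags are assumptions):

```shell
kubectl apply -f - <<'EOF'
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiSelectorTerms:
    - alias: al2023@latest    # latest Amazon Linux 2023 AMI
  role: myCluster-karpenter-node    # the nodes' IAM role from above
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: myCluster
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: myCluster
EOF
```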

Further readings

Sources