oam/knowledge base/awx.md
2025-09-29 12:03:07 +02:00

AWX

Web-based UI, REST API, and task engine built on top of Ansible.
One of the upstream projects for the Red Hat Ansible Automation Platform.

  1. TL;DR
  2. Gotchas
  3. Setup
    1. Deployment
    2. Update
    3. Removal
    4. Testing
    5. Executing jobs
  4. Attribute inheritance and overriding
    1. Variables inheritance and overriding
  5. Elevating privileges in tasks
  6. Workflow automation
    1. Pass data between workflow Nodes
  7. API
  8. Further readings
    1. Sources

TL;DR

Tip

When in doubt about AWX's inner workings, consider asking Devin.

Gotchas

  • When values are not defined in a resource at creation time, the resource defaults to the settings of the same name defined by the resource it depends on (if any).
    E.g.: leaving the job_type parameter unset in a schedule (or setting it to null) makes the jobs it starts use the job_type setting defined in the job template that the schedule references.
    Refer to Attribute inheritance and overriding.

  • Extra variables configured in job templates will take precedence over the ones defined in the playbook and its tasks, as if they were given to the ansible-playbook command for the job using its -e, --extra-vars option.

    These variables have the highest precedence of all variables, and as such their values are used throughout the whole execution. They are not overridden by any other definition for similarly named variables (not at play, host, block, nor task level; not even the set_fact module will override them).
    Refer to Variables inheritance and overriding.

  • Once a variable is defined in a job template, it will be passed to the ansible command for the job, even if its value is set to null (it will be an empty string).

    When launching a job that allows variable editing, the edited variables are merged on top of the initial settings.
    As such, values configured in the job template can at most be overridden, but never deleted. They also cannot be set to null, since null values in the override are not considered in the merge, resulting in the job template's predefined value being picked.

  • Consider using only AMD64 nodes to host the containers for AWX instances.

    As of 2024-04-11, AWX does not appear to provide ARM64 images for all its containers.
    One will need to build the missing ARM64 images and specify those during deployment. Good luck with that!

  • K8S tolerations set in AWX custom resources only affect K8S-based AWX instances' deployments.
    They are not applied to other resources like automation Jobs.

    Job-specific K8S settings need to be configured in the pod_spec_override attribute of Instance Groups of type Container Group.
    Refer to Executing jobs.

  • Playbooks that use the vars_prompt key, but do not receive the corresponding values through job templates' extra_vars, cause AWX runs to hang waiting for user input on an unreachable TTY.
    Avoid using vars_prompt in playbooks that need to be run by AWX, or ensure those variables are provided ahead of time.
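The merge behavior described in the gotchas above can be sketched in Python (a hypothetical helper for illustration, not actual AWX code):

```python
def merge_extra_vars(template_vars: dict, launch_vars: dict) -> dict:
    """Sketch of how AWX merges launch-time extra variables on top of the
    values predefined in a job template:
    - keys whose launch-time value is None (null) are ignored in the merge,
      so the template's predefined value wins;
    - keys can at most be overridden, never deleted.
    """
    merged = dict(template_vars)
    merged.update({key: value for key, value in launch_vars.items() if value is not None})
    return merged


template = {"env": "staging", "region": "eu-west-1"}
launch = {"env": "production", "region": None, "dry_run": True}
print(merge_extra_vars(template, launch))
# {'env': 'production', 'region': 'eu-west-1', 'dry_run': True}
```

Note how `region: None` in the launch-time override is silently dropped, so the template's `eu-west-1` survives.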

Setup

Deployment

Starting from version 18.0, the AWX Kubernetes Operator is the preferred way to deploy AWX instances.
It is meant to provide a Kubernetes-native installation method for AWX via the AWX Custom Resource Definition (CRD).

Deploying AWX instances is just a matter of:

  1. Installing the operator on the K8S cluster.
    Make sure to include Ansible's CRDs.
  2. Creating a resource of kind AWX.

Whenever a resource of the AWX kind is created, the Kubernetes operator executes an Ansible role that creates all the other resources an AWX instance requires to start in the cluster.
See Iterating on the installer without deploying the operator.

The operator can be configured to automatically deploy a default AWX instance once running, but its input options are limited. This prevents changing specific settings one might need for the AWX instance.
Creating resources of the AWX kind instead allows including their specific configuration, and hence customizing more of their settings. It should™ also be less prone to deployment errors.

Requirements:

  • An existing K8S cluster with AMD64 nodes (see Gotchas).
  • A DB instance, either in the cluster or external to it.
    If internal, one must be able to create PersistentVolumeClaims and PersistentVolumes in the cluster for it (unless data persistence is not wanted).
  • The ability for the cluster to create load balancers (if setting the service type to LoadBalancer).
Deploy the operator with kustomize
$ mkdir -p '/tmp/awx'
$ cd '/tmp/awx'

# Specify the version tag to use
/tmp/awx$ cat <<EOF > 'kustomization.yaml'
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.14.0
    # https://github.com/ansible/awx-operator/releases
EOF

# Start the operator
/tmp/awx$ kubectl apply -k '.'
namespace/awx created
…
deployment.apps/awx-operator-controller-manager created
/tmp/awx$ kubectl -n 'awx' get pods
NAME                                              READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-8b7dfcb58-k7jt8   2/2     Running   0          10m
Deploy the operator with helm
# Add the operator's repository.
$ helm repo add 'awx-operator' 'https://ansible.github.io/awx-operator/'
"awx-operator" has been added to your repositories
$ helm repo update 'awx-operator'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "awx-operator" chart repository
Update Complete. ⎈Happy Helming!⎈

$ helm search repo 'awx-operator'
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
awx-operator/awx-operator       2.14.0          2.14.0          A Helm chart for the AWX Operator

# Install the operator.
$ helm -n 'awx' upgrade -i --create-namespace 'my-awx-operator' 'awx-operator/awx-operator' --version '2.14.0'
Release "my-awx-operator" does not exist. Installing it now.
NAME: my-awx-operator
LAST DEPLOYED: Mon Apr  8 15:34:00 2024
NAMESPACE: awx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 2.14.0
$ kubectl -n 'awx' get pods
NAME                                               READY   STATUS      RESTARTS   AGE
awx-operator-controller-manager-75b667b745-g9g9c   2/2     Running     0          17m
Deploy the operator with a kustomized Helm chart
$ mkdir -p '/tmp/awx'
$ cd '/tmp/awx'

/tmp/awx$ cat <<EOF > 'namespace.yaml'
---
apiVersion: v1
kind: Namespace
metadata:
  name: awx
EOF
/tmp/awx$ cat <<EOF > 'kustomization.yaml'
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx
helmCharts:
  - name: awx-operator
    repo: https://ansible.github.io/awx-operator/
    version: 2.19.0
    releaseName: awx-operator
    includeCRDs: true  # Important. Not namespaced. Watch out upon removal.
resources:
  - namespace.yaml
EOF

# Start the operator
/tmp/awx$ helm repo add 'awx-operator' 'https://ansible.github.io/awx-operator/'
/tmp/awx$ kubectl kustomize --enable-helm '.' | kubectl apply -f -
namespace/awx created
…
deployment.apps/awx-operator-controller-manager created
/tmp/awx$ kubectl -n 'awx' get pods
NAME                                              READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-8b7dfcb58-k7jt8   2/2     Running   0          10m

Once the operator is installed, AWX instances can be created by leveraging the AWX CRD.

Basic definition for a quick testing instance
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  no_log: false
  service_type: NodePort
  node_selector: |
    kubernetes.io/arch: amd64
Definition for an instance on AWS' EKS
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  no_log: false
  admin_email: infra@example.org
  postgres_configuration_secret: awx-postgres-configuration
  node_selector: |
    kubernetes.io/arch: amd64
  service_type: LoadBalancer
  ingress_type: ingress
  ingress_annotations: |
    kubernetes.io/ingress.class: alb

Since the operator is the one creating the instance's resources, one's control is limited to what can be defined in the AWX resource's spec key.
See the installer role's defaults and any page under the Advanced configuration section in the operator's documentation for details.

Useful specs:

| Spec | Description | Reason |
| ---- | ----------- | ------ |
| no_log: false | See resource creation tasks' output in the operator's logs | Debug |
| node_selector: … | Select nodes to run on | Use only specific nodes (see warning at the beginning) |
Deploy AWX instances with kubectl
$ cd '/tmp/awx'
/tmp/awx$ kubectl apply -f 'awx-demo.yaml'
Deploy AWX instances with kustomize
$ cd '/tmp/awx'

/tmp/awx$ yq -iy '.resources+=["awx-demo.yaml"]' 'kustomization.yaml'
/tmp/awx$ kubectl apply -k '.'
Deploy AWX instances using the operator's helm chart's integrated definition
# Update the operator by telling it to also deploy the AWX instance.
$ helm -n 'awx' upgrade -i --create-namespace 'my-awx-operator' 'awx-operator/awx-operator' --version '2.14.0' \
  --set 'AWX.enabled=true' --set 'AWX.name=awx-demo'
Release "my-awx-operator" has been upgraded. Happy Helming!
NAME: my-awx-operator
LAST DEPLOYED: Mon Apr  8 15:37:47 2024
NAMESPACE: awx
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 2.14.0
$ kubectl -n 'awx' get pods
NAME                                               READY   STATUS      RESTARTS   AGE
awx-demo-migration-24.1.0-qhbq2                    0/1     Completed   0          12m
awx-demo-postgres-15-0                             1/1     Running     0          13m
awx-demo-task-87756dfbc-chx9t                      4/4     Running     0          12m
awx-demo-web-69d6d5d6c-wdxlv                       3/3     Running     0          12m
awx-operator-controller-manager-75b667b745-g9g9c   2/2     Running     0          17m

The default user is admin.
Get the password from the {instance}-admin-password secret:

$ kubectl -n 'awx' get secret 'awx-demo-admin-password' -o jsonpath="{.data.password}" | base64 --decode
L2ZUgNTwtswVW3gtficG1Hd443l3Kicq

Connect to the instance once it is up:

kubectl -n 'awx' port-forward 'service/awx-service' '8080:http'
open 'http://localhost:8080'

Update

The documentation suggests the following:

  1. Temporarily set up the operator to automatically update any AWX instance it manages.
  2. Delete the AWX instance resource.
    This will force the operator to pull fresh, updated images for the new deployment.
  3. Restore the operator's settings to the previous version.
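Assuming the instance was created from the awx-demo.yaml manifest used in the Deployment section, step 2 boils down to deleting and re-creating the AWX resource:

```shell
# Delete the AWX resource; the operator tears the instance's resources down.
kubectl -n 'awx' delete awx 'awx-demo'

# Re-create it; the operator redeploys the instance, pulling fresh images.
kubectl -n 'awx' apply -f 'awx-demo.yaml'
```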

Removal

Remove the AWX resource associated with the instance to delete it:

$ kubectl delete awx 'awx-demo'
awx.awx.ansible.com "awx-demo" deleted

Remove the operator if not needed anymore:

# Using `kustomize`
kubectl delete -k '/tmp/awx'

# Using `helm`
helm -n 'awx' uninstall 'my-awx-operator'

# Using the kustomized helm chart
kubectl kustomize --enable-helm '.' | kubectl delete -f -

If needed, also remove the namespace to clean everything up:

kubectl delete ns 'awx'

Testing

Run: follow the basic installation guide

Guide

1. ARM, Mac OS X, minikube, kustomize: failed: ARM images for AWX not available
$ minikube start --cpus=4 --memory=6g --addons=ingress
…
🌟  Enabled addons: storage-provisioner, default-storageclass, ingress
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ mkdir -p '/tmp/awx'
$ cd '/tmp/awx'

$ # There was no ARM version of the 'kube-rbac-proxy' image upstream, so it was impossible to just use the `make deploy`
$ # command as explained in the basic install.
$ # Defaulting to use 'quay.io' as repository as the ARM version of that image is available there.
$ cat <<EOF > 'kustomization.yaml'
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.14.0
    # https://github.com/ansible/awx-operator/releases
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.14.0   # same as awx-operator in resources
  - name: gcr.io/kubebuilder/kube-rbac-proxy
    # no ARM version upstream, defaulting to quay.io
    newName: quay.io/brancz/kube-rbac-proxy
    newTag: v0.16.0-arm64
EOF
$ kubectl apply -k '.'
namespace/awx created
…
deployment.apps/awx-operator-controller-manager created
$ kubectl -n 'awx' get pods
NAME                                              READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-8b7dfcb58-k7jt8   2/2     Running   0          3m42s

$ cat <<EOF > 'awx-demo.yaml'
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  service_type: nodeport
EOF
$ yq -iy '.resources+=["awx-demo.yaml"]' 'kustomization.yaml'
$ kubectl apply -k '.'  # this failed because awx has no ARM images yet

$ # Fine. I'll do it myself.
$ git clone 'https://github.com/ansible/awx.git'
$ cd 'awx'
$ make awx-kube-build
…
ERROR: failed to solve: process "/bin/sh -c make sdist && /var/lib/awx/venv/awx/bin/pip install dist/awx.tar.gz" did not complete successfully: exit code: 2
make: *** [awx-kube-build] Error 1
$ # (ノಠ益ಠ)ノ彡┻━┻
2. AMD64, OpenSUSE Leap 15.5, minikube, kustomize
$ minikube start --cpus=4 --memory=6g --addons=ingress
😄  minikube v1.29.0 on Opensuse-Leap 15.5
…
🌟  Enabled addons: storage-provisioner, default-storageclass, ingress
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ mkdir -p '/tmp/awx'
$ cd '/tmp/awx'

$ # Simulating the need to use a custom repository for the sake of testing, so I cannot just use the `make deploy`
$ # command as explained in the basic install.
$ # In this case, the repository will be 'quay.io'.
$ cat <<EOF > 'kustomization.yaml'
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.14.0
    # https://github.com/ansible/awx-operator/releases
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.14.0   # same as awx-operator in resources
EOF
$ minikube kubectl -- apply -k '.'
namespace/awx created
…
deployment.apps/awx-operator-controller-manager created
$ minikube kubectl -- -n 'awx' get pods
NAME                                               READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-75b667b745-hjfc7   2/2     Running   0          3m43s

$ cat <<EOF > 'awx-demo.yaml'
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  service_type: nodeport
EOF
$ yq -iy '.resources+=["awx-demo.yaml"]' 'kustomization.yaml'
$ minikube kubectl -- apply -k '.'
serviceaccount/awx-operator-controller-manager unchanged
…
deployment.apps/awx-operator-controller-manager unchanged
awx.awx.ansible.com/awx-demo created
$ minikube kubectl -- -n 'awx' get pods
NAME                                               READY   STATUS      RESTARTS   AGE
awx-demo-migration-24.1.0-kqxcj                    0/1     Completed   0          9s
awx-demo-postgres-15-0                             1/1     Running     0          61s
awx-demo-task-7fcbb46c5d-ckf9d                     4/4     Running     0          48s
awx-demo-web-58668794c8-rfd7d                      3/3     Running     0          49s
awx-operator-controller-manager-75b667b745-hjfc7   2/2     Running     0          93s

$ # Default user is 'admin'.
$ minikube kubectl -- -n 'awx' get secret 'awx-demo-admin-password' -o jsonpath="{.data.password}" | base64 --decode
L2ZUgNTwtswVW3gtficG1Hd443l3Kicq
$ xdg-open $(minikube service -n 'awx' 'awx-demo-service' --url)

$ minikube kubectl -- delete -k '.'

Run: follow the helm installation guide

Guide

1. AMD64, OpenSUSE Leap 15.5, minikube, helm
$ minikube start --cpus=4 --memory=6g --addons=ingress
😄  minikube v1.29.0 on Opensuse-Leap 15.5
…
🌟  Enabled addons: storage-provisioner, default-storageclass, ingress
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ helm repo add 'awx-operator' 'https://ansible.github.io/awx-operator/'
"awx-operator" has been added to your repositories
$ helm repo update 'awx-operator'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "awx-operator" chart repository
Update Complete. ⎈Happy Helming!⎈

$ helm search repo 'awx-operator'
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
awx-operator/awx-operator       2.14.0          2.14.0          A Helm chart for the AWX Operator

$ helm -n 'awx' upgrade -i --create-namespace 'my-awx-operator' 'awx-operator/awx-operator' --version '2.14.0'
Release "my-awx-operator" does not exist. Installing it now.
NAME: my-awx-operator
LAST DEPLOYED: Mon Apr  8 15:34:00 2024
NAMESPACE: awx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 2.14.0
$ minikube kubectl -- -n 'awx' get pods
NAME                                              READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-8b7dfcb58-k7jt8   2/2     Running   0          3m

$ helm -n 'awx' upgrade -i --create-namespace 'my-awx-operator' 'awx-operator/awx-operator' --version '2.14.0' \
  --set 'AWX.enabled=true' --set 'AWX.name=awx-demo'
Release "my-awx-operator" has been upgraded. Happy Helming!
NAME: my-awx-operator
LAST DEPLOYED: Mon Apr  8 15:37:47 2024
NAMESPACE: awx
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 2.14.0
$ minikube kubectl -- -n 'awx' get pods
NAME                                              READY   STATUS      RESTARTS   AGE
awx-demo-migration-24.1.0-qhbq2                   0/1     Completed   0          12m
awx-demo-postgres-15-0                            1/1     Running     0          13m
awx-demo-task-87756dfbc-chx9t                     4/4     Running     0          12m
awx-demo-web-69d6d5d6c-wdxlv                      3/3     Running     0          12m
awx-operator-controller-manager-8b7dfcb58-k7jt8   2/2     Running     0          17m

$ # Default user is 'admin'.
$ minikube kubectl -- -n 'awx' get secret 'awx-demo-admin-password' -o jsonpath="{.data.password}" | base64 --decode
PoU9pFR2J5oFqymgX9I3I8swFgfZVkam
$ xdg-open $(minikube service -n 'awx' 'awx-demo-service' --url)

$ helm -n 'awx' uninstall 'my-awx-operator'
$ minikube kubectl -- delete ns 'awx'

Run: kustomized helm chart

Warning

Remember to include the CRDs from the helm chart.

1. AMD64, OpenSUSE Leap 15.5, minikube
$ minikube start --cpus=4 --memory=6g --addons=ingress
😄  minikube v1.29.0 on Opensuse-Leap 15.5
…
🌟  Enabled addons: storage-provisioner, default-storageclass, ingress
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ mkdir -p '/tmp/awx'
$ cd '/tmp/awx'

$ cat <<EOF > 'namespace.yaml'
---
apiVersion: v1
kind: Namespace
metadata:
  name: awx
EOF
$ cat <<EOF > 'kustomization.yaml'
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx
resources:
  - namespace.yaml
helmCharts:
  - name: awx-operator
    repo: https://ansible.github.io/awx-operator/
    version: 2.14.0
    releaseName: awx-operator
    includeCRDs: true
EOF
$ minikube kubectl -- apply -f <(minikube kubectl -- kustomize --enable-helm)
namespace/awx created
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com created
…
deployment.apps/awx-operator-controller-manager created
$ minikube kubectl -- -n 'awx' get pods
NAME                                               READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-787d4945fb-fdffx   2/2     Running   0          3m36s

$ cat <<EOF > 'awx-demo.yaml'
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  service_type: nodeport
EOF
$ yq -iy '.resources+=["awx-demo.yaml"]' 'kustomization.yaml'
$ minikube kubectl -- apply -f <(minikube kubectl -- kustomize --enable-helm)
namespace/awx unchanged
…
deployment.apps/awx-operator-controller-manager unchanged
awx.awx.ansible.com/awx-demo created
$ minikube kubectl -- -n 'awx' get pods
NAME                                               READY   STATUS      RESTARTS   AGE
awx-demo-migration-24.1.0-zwv8w                    0/1     Completed   0          115s
awx-demo-postgres-15-0                             1/1     Running     0          10m
awx-demo-task-9c4655cb9-cmz87                      4/4     Running     0          8m3s
awx-demo-web-77f65cc65f-qhqrm                      3/3     Running     0          8m4s
awx-operator-controller-manager-787d4945fb-fdffx   2/2     Running     0          14m

$ # Default user is 'admin'.
$ minikube kubectl -- -n 'awx' get secret 'awx-demo-admin-password' -o jsonpath="{.data.password}" | base64 --decode
DgHIaA9onZj106osEmvECigzsBqutHqI
$ xdg-open $(minikube service -n 'awx' 'awx-demo-service' --url)

$ minikube kubectl -- delete -f <(minikube kubectl -- kustomize --enable-helm)
2. AMD64, Mac OS X, EKS
$ mkdir -p '/tmp/awx'
$ cd '/tmp/awx'

$ cat <<EOF > 'namespace.yaml'
---
apiVersion: v1
kind: Namespace
metadata:
  name: awx
EOF
$ cat <<EOF > 'kustomization.yaml'
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx
resources:
  - namespace.yaml
helmCharts:
  - name: awx-operator
    repo: https://ansible.github.io/awx-operator/
    version: 2.19.1
    releaseName: awx-operator
    includeCRDs: true
EOF
$ kubectl kustomize --enable-helm | kubectl apply -f -
namespace/awx created
…
deployment.apps/awx-operator-controller-manager created
$ kubectl get pods -n 'awx'
NAME                                               READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-3361cfab38-tdgt3   2/2     Running   0          13s

$ cat <<EOF > 'awx-demo.yaml'
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  admin_email: me@example.org
  no_log: false
  node_selector: |
    kubernetes.io/arch: amd64
  service_type: LoadBalancer
  ingress_type: ingress
  ingress_annotations: |
    kubernetes.io/ingress.class: alb
EOF
$ yq -iy '.resources+=["awx-demo.yaml"]' 'kustomization.yaml'
$ kubectl kustomize --enable-helm | kubectl apply -f -
namespace/awx unchanged
…
deployment.apps/awx-operator-controller-manager unchanged
awx.awx.ansible.com/awx-demo created
$ kubectl -n 'awx' get pods
NAME                                               READY   STATUS      RESTARTS   AGE
awx-demo-migration-24.1.0-zwv8w                    0/1     Completed   0          115s
awx-demo-postgres-15-0                             1/1     Running     0          10m
awx-demo-task-8e34efc56-w5rc5                      4/4     Running     0          8m3s
awx-demo-web-545gbdgg7b-q2q4m                      3/3     Running     0          8m4s
awx-operator-controller-manager-3361cfab38-tdgt3   2/2     Running     0          14m

$ # Default user is 'admin'.
$ kubectl -n 'awx' get secret 'awx-demo-admin-password' -o jsonpath="{.data.password}" | base64 --decode
IDwYOgL9k2ckaXmqMm6PT4d6TXdJcocd
$ kubectl -n 'awx' get ingress 'awx-demo-ingress' -o jsonpath='{.status.loadBalancer.ingress[*].hostname}' \
  | xargs -I{} open http://{}

$ kubectl kustomize --enable-helm | kubectl delete -f -
namespace "awx" deleted
…
awx.awx.ansible.com "awx-demo" deleted
deployment.apps "awx-operator-controller-manager" deleted

Executing jobs

Unless explicitly defined in Job Templates, Schedules, or other resources that allow specifying the instance_groups key, Jobs using a containerized execution environment will execute in the default container group.

Normally, the default container group limits neither where a Job's pod is executed nor its assigned resources.
By explicitly configuring this container group, one can change the settings for Jobs that do not ask for custom executors.
E.g., one could set affinity and tolerations to assign Jobs to specific nodes by default, and set specific default resource limits.

# ansible playbook
- name: Configure instance group 'default'
  tags: configure_instance_group_default_spot
  awx.awx.instance_group:
    name: default
    is_container_group: true
    pod_spec_override: |-
      apiVersion: v1
      kind: Pod
      metadata:
        namespace: awx
      spec:
        serviceAccountName: default
        automountServiceAccountToken: false
        containers:
          - image: 012345678901.dkr.ecr.eu-west-1.amazonaws.com/infrastructure/awx-ee:latest
            name: worker
            args:
              - ansible-runner
              - worker
              - '--private-data-dir=/runner'
            resources:
              requests:
                cpu: 250m
                memory: 100Mi
              limits:
                cpu: 1830m
                memory: 1425Mi
        tolerations:
          - key: example.org/reservation.app
            operator: Equal
            value: awx
            effect: NoSchedule
          - key: awx.example.org/reservation.component
            operator: Equal
            value: job
            effect: NoSchedule
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: example.org/reservation.app
                      operator: In
                      values:
                        - awx
                    - key: awx.example.org/reservation.component
                      operator: In
                      values:
                        - job
            preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 1
                preference:
                  matchExpressions:
                    - key: eks.amazonaws.com/capacityType
                      operator: In
                      values:
                        - SPOT
- name: Configure instance group 'ondemand'
  tags: configure_instance_group_ondemand
  awx.awx.instance_group:
    name: ondemand
    is_container_group: true
    pod_spec_override: |-
      apiVersion: v1
      kind: Pod
      metadata:
        namespace: awx
      spec:
        serviceAccountName: default
        automountServiceAccountToken: false
        containers:
          - image: 012345678901.dkr.ecr.eu-west-1.amazonaws.com/infrastructure/awx-ee:latest
            name: worker
            args:
              - ansible-runner
              - worker
              - '--private-data-dir=/runner'
            resources:
              requests:
                cpu: 250m
                memory: 100Mi
              limits:
                cpu: 1830m
                memory: 1425Mi
        tolerations:
          - key: example.org/reservation.app
            operator: Equal
            value: awx
            effect: NoSchedule
          - key: awx.example.org/reservation.component
            operator: Equal
            value: job
            effect: NoSchedule
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: example.org/reservation.app
                      operator: In
                      values:
                        - awx
                    - key: awx.example.org/reservation.component
                      operator: In
                      values:
                        - job
                    - key: eks.amazonaws.com/capacityType
                      operator: In
                      values:
                        - ON_DEMAND

Attribute inheritance and overriding

Some AWX-specific resources allow configuring similar attributes.
E.g., Schedules, Workflow Job Template Nodes and Job Templates all define diff_mode, job_type, and other properties.

This is usually true for resource types that can reference another resource type.
It is meant to allow parent resources to override properties of their children through hierarchical inheritance.
E.g.:

  • Schedules and Workflow Job Template Nodes can both reference Job Templates via the unified_job_template key.
  • Schedules and Workflow Job Template Nodes specifying attributes like diff_mode or job_type will override the attribute of the same name specified by the Job Templates they reference.

Scheduled Jobs' attributes can override referenced Job Templates' properties:

- awx.awx.job_template:
    organization: ExampleOrg
    name: Some job
    
    inventory: EC2 instances by Instance ID
    execution_environment: ExampleOrg-EE
    credentials:
      - SSM User         # required to use SSM
      - AWX Central Key  # required to 'become' in tasks
    project: Some project
    playbook: some_playbook.yml
    job_type: check
    verbosity: 3
    diff_mode: true
- awx.awx.schedule:
    organization: ExampleOrg
    unified_job_template: Some job
    enabled: true
    
    job_type: run     # spawned jobs override the job template's "job_type" property
    verbosity: 0      # spawned jobs override the job template's "verbosity" property
    diff_mode: false  # spawned jobs override the job template's "diff_mode" property
flowchart LR
  job_template("Job Template")
  schedule("Schedule")

  schedule --> job_template

This effect is applied recursively to the full reference chain.

Scheduled Jobs' attributes can override referenced Workflow Job Templates' properties, which are propagated to the Job Templates used by their Nodes:

- awx.awx.job_template:
    organization: ExampleOrg
    name: Some job
    
    inventory: EC2 instances by Instance ID
    execution_environment: ExampleOrg-EE
    credentials:
      - SSM User         # required to use SSM
      - AWX Central Key  # required to 'become' in tasks
    project: Some project
    playbook: some_playbook.yml
    job_type: check
    verbosity: 3
    diff_mode: true
- awx.awx.workflow_job_template:
    organization: ExampleOrg
    name: Some workflow
    
- awx.awx.workflow_job_template_node:
    workflow_job_template: Some workflow
    unified_job_template: Some job
    
    job_type: check  # spawned jobs override the job template's "job_type" property
    verbosity: 3     # spawned jobs override the job template's "verbosity" property
    diff_mode: true  # spawned jobs override the job template's "diff_mode" property
- awx.awx.schedule:
    organization: ExampleOrg
    unified_job_template: Some workflow
    enabled: true
    
    job_type: run     # spawned workflows override the node's "job_type" property
    verbosity: 0      # spawned workflows override the node's "verbosity" property
    diff_mode: false  # spawned workflows override the node's "diff_mode" property
flowchart LR
  job_template("Job Template")
  workflow_job_template("Workflow Job Template")
  workflow_node("Workflow Node")
  schedule("Schedule")

  schedule --> workflow_job_template --> workflow_node --> job_template

Variables inheritance and overriding

Variables inheritance works in a similar fashion to Attribute inheritance and overriding, but is specific to the extra_vars key (A.K.A. prompts in the Web UI).

Also see Extra variables.

Variables defined in parent AWX resources recursively override those defined in children AWX resources and, by extension, Ansible resources (playbooks, blocks, tasks, etc).
Variables defined in ancestors cannot be overridden by any of the children in the chain, nor are they affected by any Ansible module or component during playbook execution.
The result is effectively as if they were passed down with the --extra-vars CLI option. Refer to Ansible variables.

Warning

Once a variable is defined in a Job Template or parent resources, it will be passed to the Ansible command during Job execution, even if its value is set to null (it will just be an empty string).
This also means the values configured in children resources can at most be overridden, but never deleted.

This limitation is enforced to try and ensure predictable behavior, with higher-level configurations remaining consistent across the whole execution chain.

Warning

The AWX API enforces a specific restriction: it does not consider nullified values for extra variables in resources that allow their definition.
If a resource is not meant to set a value for a variable, that variable should just not be provided in the payload.
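For instance, in a hypothetical payload creating a Schedule via the API (`some_var` and `other_var` are made-up names):

```yaml
# Rejected by the API: nullified extra variable
# extra_data:
#   some_var: null

# Accepted: the unset variable is simply left out of the payload
extra_data:
  other_var: some value
```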

Elevating privileges in tasks

AWX requires one to configure specific settings throughout its resources in order to be able to successfully use become and privileges-related keys in playbooks.

  1. The playbook must be configured to elevate privileges as per normal Ansible operations.

    - name: Do something by escalating privileges
      hosts: all
      become: true
      tasks: []
    
  2. The Job Template referencing the playbook must have the Privilege Escalation option enabled.

    This corresponds to providing the --become flag when running the playbook.

  3. The Credential used in the Job (either the one set in the Job Template or whatever overrides it) must specify a user that is able to run sudo (or whatever become_method the playbook uses).

    Important

    Should the become_method require a password, one must also supply that password in the Credential.
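Steps 2 and 3 can also be encoded when managing Job Templates as code; a sketch using the awx.awx collection (resource and credential names are examples):

```yaml
- awx.awx.job_template:
    organization: ExampleOrg
    name: Some job
    project: Some project
    playbook: some_playbook.yml
    become_enabled: true  # step 2: equivalent to passing '--become'
    credentials:
      - Sudo-capable machine credential  # step 3: a user allowed to escalate
```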

Workflow automation

Refer How to use workflow job templates in Ansible, Workflow job templates and Workflows.
Also see Passing Ansible variables in Workflows using set_stats.

Workflow Job Templates coordinate the linking and execution of multiple resources by:

  • Synchronizing repositories with code for Projects.
  • Synchronizing Inventories.
  • Having different Jobs run sequentially or in parallel.
  • Running Jobs based on the success or failure of one or more previous Jobs.
  • Requesting an admin's approval to proceed with one or more executions.

Workflow Job Templates define each of their actions as a Node resource.

Creation process
flowchart LR
  job_template("Job Template")
  playbook("Playbook")
  project("Project")
  workflow_job_template("Workflow Job Template")
  workflow_node("Workflow Node")
  schedule("Schedule")

  playbook --> project --> job_template --> workflow_node --> workflow_job_template --> schedule

All the playbooks used in the workflow must be visible to AWX, meaning that one or more projects containing them must already be configured in the instance.

Workflows need Nodes to refer to: each Node references a Job Template, which in turn refers to a Playbook to run.

The AWX UI does not offer a standalone form for creating Nodes; they can only be created via the workflow visualizer.

  1. Open Resources > Templates in the sidebar.
  2. Click on the Add button and choose Add job template to add every job template that is needed.
    Repeat as required.
  3. Click on the Add button and choose Add workflow template.
  4. Fill in the form with the resources all nodes should share, and save.
    The visualizer will open.
  5. In the visualizer, create the needed nodes.

When creating nodes via the awx.awx.workflow_job_template_node module, nodes link by referencing the next step in the workflow.
As such, Nodes must be defined last-to-first in playbooks.

- awx.awx.workflow_job_template_node:
    workflow_job_template: Some workflow
    identifier: Next action
    
- awx.awx.workflow_job_template_node:
    workflow_job_template: Some workflow
    identifier: Previous action
    
    success_nodes:
      - Next action

When needing to use specific variables only in some Nodes within a workflow, consider:

  1. Specifying those variables only in the Nodes that require them, and not at the Workflow Job Template's level.
    This prevents them from being overridden by values set at the Workflow Job Template's level.

    Workflow Job Template:

    -assume_role_arn: 'arn:aws:iam::012345678901:role/PowerfulRole'
    

    Node:

    +assume_role_arn: 'arn:aws:iam::012345678901:role/PowerfulRole'
     rds_db_instance_identifier: some-db-to-create
    
  2. Designing playbooks to handle variables' presence with Ansible's conditionals, defaults, and facts: use play- or task-specific variables, and populate them with the input coming from workflows.

    - hosts: all
      vars:
        playbook__assume_role_arn: "{{ role_arn }}"
      tasks:
        - vars:
            do_something__rds_instance_identifier: "{{ rds_instance_identifier }}"
          amazon.aws.rds_instance: 
    

    Alternatively, enabling Jinja evaluation at any level, then using Jinja expressions to populate Nodes' extra vars for differently named variables.
    This is especially useful when passing data between Nodes.

    Node's extra_vars:

    do_something__rds_instance_identifier: "{{ rds_instance_identifier }}"
    
  3. Using separate Workflow Job Templates altogether, when fundamentally different variable sets are needed.

Pass data between workflow Nodes

Refer Passing Ansible variables in Workflows using set_stats.

Leverage the set_stats builtin module.

Important

The artifact system requires Ansible >= v2.2.1.0-0.3.rc3 and set_stats' default per_host: false parameter to work correctly with AWX.

When using set_stats in a workflow, AWX saves the pairs configured in the module's data parameter as artifacts.
The workflow system implements cumulative artifact inheritance, where artifacts flow down through the workflow graph. Artifacts are available to all the nodes that are descendants of the one that created it, and not only to the node that immediately follows in the flow.

  1. When any Job uses set_stats, AWX stores the pairs from the module's data parameter as artifacts.
    All artifacts become available to all descendant nodes in the workflow.

    E.g.: suppose one has a workflow like the following:

    Node A* → Node B* → Node C
            ↘ Node D
    
    * = creates artifacts
    

    Both Node C and Node D will receive the artifacts created by Node A, but only Node C will also receive any artifacts created by Node B.

  2. Child nodes of the workflow receive the cumulative artifacts from all their ancestor nodes, with the specific rule that artifacts from nearer ancestors overwrite those from more distant ones.

    E.g.: suppose a workflow whose path is Grandparent → Parent → Child, where both Grandparent and Parent generate artifacts.
    Parent's artifacts will overwrite any conflicting keys from Grandparent's when passed to the Child node.
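Sketching the merge with made-up keys:

```yaml
# Grandparent's set_stats data:
#   region: eu-west-1
#   owner: team-a
# Parent's set_stats data:
#   owner: team-b
# Cumulative artifacts received by Child (nearer ancestor wins on conflicts):
#   region: eu-west-1
#   owner: team-b
```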

Warning

Artifacts are passed as extra_vars to subsequent nodes.
This gives them higher precedence than the job template's default variables.

Example

Considering a workflow where Node1 needs to pass data to Node2:

  1. Playbook for Node1:

    ---
    - name: Get an AWS S3 object's information and pass them along
      hosts: [ … ]
      tasks: [ … ]
      post_tasks:
        - name: Pass the S3 object's information along when found
          tags:
            - always  # important if one plans to test workflows by leveraging tags
            - pass_data_along
          when: s3_object_info is defined
          ansible.builtin.set_stats:
            data:
              s3_object_info: "{{ s3_object_info }}"
    
  2. Playbook for Node2:

    ---
    - name: Do something knowing an AWS S3 object exists because it got passed along
      hosts: [ … ]
      pre_tasks:
        - name: Ensure the S3 object exists beforehand and is in the STANDARD storage tier
          tags:
            - always  # important if one plans to test workflows by leveraging tags
            - ensure_s3_object_is_usable
          ansible.builtin.assert:
            that:
              - s3_object_info.object_data.content_length | default(0) > 0
              - s3_object_info.object_data.storage_class | default('') == 'STANDARD'
      tasks: [ … ]
    

API

Refer AWX API Reference and How to use AWX REST API to execute jobs.

AWX offers the awx client CLI tool:

# Install the 'awx' client
# As of 2025-07-28, Python 3.11 is the last Python version for which the AWX CLI works correctly.
pipx install --python '3.11' 'awxkit'
pip3.11 install --user 'awxkit'  # alternative to pipx

Tip

Normally awx would require passing the configuration with every command, like so:

awx --conf.host https://awx.example.org --conf.username 'admin' --conf.password 'password' config
awx --conf.host https://awx.example.org --conf.username 'admin' --conf.password 'password' export --schedules

Export settings to environment variables to avoid having to set them on the command line all the time:

export TOWER_HOST='https://awx.example.org' TOWER_USERNAME='admin' TOWER_PASSWORD='password'
# Show the client's configuration
awx config

# List all available endpoints
curl -fs --user 'admin:password' 'https://awx.example.org/api/v2/' | jq '.' -

# List instance groups
awx instance_groups list

# Show instance groups
awx instance_groups get 'default'

# List jobs
awx jobs list
awx jobs list -f 'yaml'
awx jobs list -f 'human' --filter 'name,created,status'
awx jobs list -f 'jq' --filter '.results[] | .name + " is " + .status'

# Show job templates
awx job_templates list
curl -fs --user 'admin:password' 'https://awx.example.org/api/v2/job_templates/' | jq '.' -
awx job_templates get 'Some Job'

# Show notification templates
awx notification_templates list
curl -fs --user 'admin:password' 'https://awx.example.org/api/v2/notification_templates/' | jq '.' -

# Show schedules
awx schedules list
awx export --schedules 'schedule-1' 'schedule-n'
awx schedules get 'Some Schedule'
curl -fs --user 'admin:password' 'https://awx.example.org/api/v2/schedules/' | jq '.' -

# Export data
awx export
awx export --job_templates 'job-template-1' 'job-template-n' --schedules
curl -fs --user 'admin:password' 'https://awx.example.org/api/v2/export/' | jq '.' -

Refer AWX Command Line Interface for more information.

Further readings

Sources