# AWX

Web-based UI, REST API, and task engine built on top of [Ansible].
Part of the upstream projects for the [Red Hat Ansible Automation Platform].

1. [TL;DR](#tldr)
1. [Gotchas](#gotchas)
1. [Setup](#setup)
1. [Deployment](#deployment)
1. [Update](#update)
1. [Removal](#removal)
1. [Testing](#testing)
1. [Executing jobs](#executing-jobs)
1. [Attribute inheritance and overriding](#attribute-inheritance-and-overriding)
1. [Variables inheritance and overriding](#variables-inheritance-and-overriding)
1. [Elevating privileges in tasks](#elevating-privileges-in-tasks)
1. [Workflow automation](#workflow-automation)
1. [Pass data between workflow Nodes](#pass-data-between-workflow-nodes)
1. [API](#api)
1. [Further readings](#further-readings)
1. [Sources](#sources)

## TL;DR

> [!tip]
> When in doubt about AWX's inner workings, consider [asking Devin][deepwiki ansible/awx].

## Gotchas

- When one does **not** define values in a resource during its creation, the resource will default to the settings of the same name defined by the underlying dependency (if any).
E.g.: not setting the `job_type` parameter in a schedule (or setting it to `null`) makes the job it starts use the `job_type` setting defined in the job template that the schedule references.
  Refer [Attribute inheritance and overriding].

- Extra variables configured in job templates take precedence over the ones defined in the playbook and its tasks, as if they were given to the `ansible-playbook` command for the job using its `-e, --extra-vars` option. These variables have the **highest** precedence of all variables, and as such it is their value that will be used throughout the whole execution. They will **not** be overridden by any other definition for similarly named variables (not at play, host, block, nor task level; not even the `set_fact` module will override them).
  Refer [Variables inheritance and overriding].

- Once a variable is defined in a job template, it **will** be passed to the ansible command for the job, even if its value is set to `null` (it will be an empty string). When launching a job that allows for variables editing, the edited variables will be **merged** on top of the initial setting.
  As such, values configured in the job template can **at most** be overridden, but **never deleted**. They also cannot be set to `null`, since `null` values in the override will **not** be considered in the merge, resulting in the job template's predefined value being picked.

- Consider using only AMD64 nodes to host the containers for AWX instances. As of 2024-04-11, AWX does **not** appear to provide ARM64 images for all its containers.
  One'll need to build their own missing ARM64 images and specify those during deployment. Good luck with that!

- K8S tolerations set in AWX custom resources only affect K8S-based AWX instances' deployments.
  They are **not** applied to other resources like automation Jobs. Job-specific K8S settings need to be configured in the `pod_spec_override` attribute of Instance Groups of type _Container Group_.
  Refer [Executing Jobs].

- Playbooks that use the `vars_prompt` key, but do **not** receive the corresponding values through job templates' `extra_vars`, will cause AWX runs **to hang** waiting for user input in an unreachable TTY.
  Consider avoiding `vars_prompt` in playbooks that need to be run by AWX, or ensuring that those variables are provided ahead of time.

## Setup

### Deployment

Starting from version 18.0, the [AWX Kubernetes Operator][operator's documentation] is the preferred way to deploy AWX instances.
It is meant to provide a Kubernetes-native installation method for AWX via the `AWX` Custom Resource Definition (CRD).

Deploying AWX instances is just a matter of:

1. Installing the operator on the K8S cluster.
   Make sure to include Ansible's CRDs.
1. Creating a resource of kind `AWX`.

Whenever a resource of the `AWX` kind is created, the [kubernetes operator] executes an Ansible role that creates all the other resources an AWX instance requires to start in the cluster.
See [Iterating on the installer without deploying the operator].

The operator _can_ be configured to automatically deploy a default AWX instance once running, but its input options are limited. This prevents changing specific settings for the AWX instance one might need to set.
Creating resources of the `AWX` kind, instead, allows including their specific configuration, and hence customizing more of the instance's settings. It should™ also be less prone to deployment errors.

Requirements:

- An existing K8S cluster with AMD64 nodes (see [Gotchas]).
- A DB instance, either in the cluster or external to it.
  If internal, one shall be able to create PersistentVolumeClaims and PersistentVolumes in the cluster for it (unless data persistence is not a wanted feature).
- The ability for the cluster to create load balancers (if setting the service type to load balancer).
Deploy the operator with `kustomize`:

```sh
$ mkdir -p '/tmp/awx'
$ cd '/tmp/awx'

# Specify the version tag to use
/tmp/awx$ cat <<EOF > 'kustomization.yaml'
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.14.0  # https://github.com/ansible/awx-operator/releases
EOF

# Start the operator
/tmp/awx$ kubectl apply -k '.'
namespace/awx created
…
deployment.apps/awx-operator-controller-manager created

/tmp/awx$ kubectl -n 'awx' get pods
NAME                                              READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-8b7dfcb58-k7jt8   2/2     Running   0          10m
```
Deploy the operator with `helm`:

```sh
# Add the operator's repository.
$ helm repo add 'awx-operator' 'https://ansible.github.io/awx-operator/'
"awx-operator" has been added to your repositories

$ helm repo update 'awx-operator'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "awx-operator" chart repository
Update Complete. ⎈Happy Helming!⎈

$ helm search repo 'awx-operator'
NAME                        CHART VERSION   APP VERSION   DESCRIPTION
awx-operator/awx-operator   2.14.0          2.14.0        A Helm chart for the AWX Operator

# Install the operator.
$ helm -n 'awx' upgrade -i --create-namespace 'my-awx-operator' 'awx-operator/awx-operator' --version '2.14.0'
Release "my-awx-operator" does not exist. Installing it now.
NAME: my-awx-operator
LAST DEPLOYED: Mon Apr  8 15:34:00 2024
NAMESPACE: awx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 2.14.0

$ kubectl -n 'awx' get pods
NAME                                               READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-75b667b745-g9g9c   2/2     Running   0          17m
```
Deploy the operator with a kustomized Helm chart:

```sh
$ mkdir -p '/tmp/awx'
$ cd '/tmp/awx'

/tmp/awx$ cat <<EOF > 'namespace.yaml'
---
apiVersion: v1
kind: Namespace
metadata:
  name: awx
EOF

/tmp/awx$ cat <<EOF > 'kustomization.yaml'
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx
helmCharts:
  - name: awx-operator
    repo: https://ansible.github.io/awx-operator/
    version: 2.19.0
    releaseName: awx-operator
    includeCRDs: true  # Important. Not namespaced. Watch out upon removal.
resources:
  - namespace.yaml
EOF

# Start the operator
/tmp/awx$ helm repo add 'awx-operator' 'https://ansible.github.io/awx-operator/'
/tmp/awx$ kubectl kustomize --enable-helm '.' | kubectl apply -f -
namespace/awx created
…
deployment.apps/awx-operator-controller-manager created

/tmp/awx$ kubectl -n 'awx' get pods
NAME                                              READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-8b7dfcb58-k7jt8   2/2     Running   0          10m
```
Once the operator is installed, AWX instances can be created by leveraging the `AWX` CRD.
Basic definition for a quick testing instance:

```yaml
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  no_log: false
  service_type: NodePort
  node_selector: |
    kubernetes.io/arch: amd64
```
Definition for an instance on AWS' EKS:

```yaml
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  no_log: false
  admin_email: infra@example.org
  postgres_configuration_secret: awx-postgres-configuration
  node_selector: |
    kubernetes.io/arch: amd64
  service_type: LoadBalancer
  ingress_type: ingress
  ingress_annotations: |
    kubernetes.io/ingress.class: alb
```
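The `postgres_configuration_secret` spec points to a Secret that must already exist in the namespace. A minimal sketch of such a Secret, following the keys documented for the operator's external database support (host, credentials, and values below are placeholders):

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: awx-postgres-configuration
  namespace: awx
stringData:
  host: awx-db.example.org  # hypothetical DB endpoint
  port: "5432"
  database: awx
  username: awx
  password: changeme        # placeholder; store securely
  sslmode: prefer
  type: unmanaged           # tells the operator not to manage this DB itself
type: Opaque
```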
Since the operator is the one creating the instance's resources, one's control is limited to what can be defined in the AWX resource's `spec` key.
See the [installer role's defaults] and any page under the _Advanced configuration_ section in the [operator's documentation] for details.

Useful specs:

| Spec               | Description                                                | Reason                                                 |
| ------------------ | ---------------------------------------------------------- | ------------------------------------------------------ |
| `no_log: false`    | See resource creation tasks' output in the operator's logs | Debug                                                  |
| `node_selector: …` | Select nodes to run on                                     | Use only specific nodes (see warning at the beginning) |
Deploy AWX instances with `kubectl`:

```sh
$ cd '/tmp/awx'
/tmp/awx$ kubectl apply -f 'awx-demo.yaml'
```
Deploy AWX instances with `kustomize`:

```sh
$ cd '/tmp/awx'
/tmp/awx$ yq -iy '.resources+=["awx-demo.yaml"]' 'kustomization.yaml'
/tmp/awx$ kubectl apply -k '.'
```
Deploy AWX instances using the operator's helm chart's integrated definition:

```sh
# Update the operator by telling it to also deploy the AWX instance.
$ helm -n 'awx' upgrade -i --create-namespace 'my-awx-operator' 'awx-operator/awx-operator' --version '2.14.0' \
    --set 'AWX.enabled=true' --set 'AWX.name=awx-demo'
Release "my-awx-operator" has been upgraded. Happy Helming!
NAME: my-awx-operator
LAST DEPLOYED: Mon Apr  8 15:37:47 2024
NAMESPACE: awx
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 2.14.0

$ kubectl -n 'awx' get pods
NAME                                               READY   STATUS      RESTARTS   AGE
awx-demo-migration-24.1.0-qhbq2                    0/1     Completed   0          12m
awx-demo-postgres-15-0                             1/1     Running     0          13m
awx-demo-task-87756dfbc-chx9t                      4/4     Running     0          12m
awx-demo-web-69d6d5d6c-wdxlv                       3/3     Running     0          12m
awx-operator-controller-manager-75b667b745-g9g9c   2/2     Running     0          17m
```
The default user is `admin`.
Get the password from the `{instance}-admin-password` secret:

```sh
$ kubectl -n 'awx' get secret 'awx-demo-admin-password' -o jsonpath="{.data.password}" | base64 --decode
L2ZUgNTwtswVW3gtficG1Hd443l3Kicq
```

Connect to the instance once it is up:

```sh
kubectl -n 'awx' port-forward 'service/awx-service' '8080:http'
open 'http://localhost:8080'
```

### Update

The documentation suggests to:

1. Temporarily set up the operator to automatically update any AWX instance it manages.
1. Delete the AWX instance resource.
   This will force the operator to pull fresh, updated images for the new deployment.
1. Restore the operator's settings to the previous version.

### Removal

Remove the `AWX` resource associated to the instance to delete it:

```sh
$ kubectl delete awx 'awx-demo'
awx.awx.ansible.com "awx-demo" deleted
```

Remove the operator if not needed anymore:

```sh
# Using `kustomize`
kubectl delete -k '/tmp/awx'

# Using `helm`
helm -n 'awx' uninstall 'my-awx-operator'

# Using the kustomized helm chart
kubectl kustomize --enable-helm '.' | kubectl delete -f -
```

Eventually, remove the namespace too to clean all things up:

```sh
kubectl delete ns 'awx'
```

### Testing
Run: follow the [basic installation guide][basic install].
1. ARM, Mac OS X, minikube, kustomize: failed: ARM images for AWX not available

   ```sh
   $ minikube start --cpus=4 --memory=6g --addons=ingress
   …
   🌟  Enabled addons: storage-provisioner, default-storageclass, ingress
   🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

   $ mkdir -p '/tmp/awx'
   $ cd '/tmp/awx'

   $ # There was no ARM version of the 'kube-rbac-proxy' image upstream, so it was impossible to just use the
   $ # `make deploy` command as explained in the basic install.
   $ # Defaulting to use 'quay.io' as repository as the ARM version of that image is available there.
   $ cat <<EOF > 'kustomization.yaml'
   ---
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   namespace: awx
   resources:
     - github.com/ansible/awx-operator/config/default?ref=2.14.0  # https://github.com/ansible/awx-operator/releases
   images:
     - name: quay.io/ansible/awx-operator
       newTag: 2.14.0  # same as awx-operator in resources
     - name: gcr.io/kubebuilder/kube-rbac-proxy  # no ARM version upstream, defaulting to quay.io
       newName: quay.io/brancz/kube-rbac-proxy
       newTag: v0.16.0-arm64
   EOF
   $ kubectl apply -k '.'
   namespace/awx created
   …
   deployment.apps/awx-operator-controller-manager created
   $ kubectl -n 'awx' get pods
   NAME                                              READY   STATUS    RESTARTS   AGE
   awx-operator-controller-manager-8b7dfcb58-k7jt8   2/2     Running   0          3m42s

   $ cat <<EOF > 'awx-demo.yaml'
   ---
   apiVersion: awx.ansible.com/v1beta1
   kind: AWX
   metadata:
     name: awx-demo
   spec:
     service_type: nodeport
   EOF
   $ yq -iy '.resources+=["awx-demo.yaml"]' 'kustomization.yaml'
   $ kubectl apply -k '.'  # this failed because awx has no ARM images yet

   $ # Fine. I'll do it myself.
   $ git clone 'https://github.com/ansible/awx.git'
   $ cd 'awx'
   $ make awx-kube-build
   …
   ERROR: failed to solve: process "/bin/sh -c make sdist && /var/lib/awx/venv/awx/bin/pip install dist/awx.tar.gz" did not complete successfully: exit code: 2
   make: *** [awx-kube-build] Error 1
   $ # (ノಠ益ಠ)ノ彡┻━┻
   ```
1. AMD64, OpenSUSE Leap 15.5, minikube, kustomize

   ```sh
   $ minikube start --cpus=4 --memory=6g --addons=ingress
   😄  minikube v1.29.0 on Opensuse-Leap 15.5
   …
   🌟  Enabled addons: storage-provisioner, default-storageclass, ingress
   🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

   $ mkdir -p '/tmp/awx'
   $ cd '/tmp/awx'

   $ # Simulating the need to use a custom repository for the sake of testing, so I cannot just use the
   $ # `make deploy` command as explained in the basic install.
   $ # In this case, the repository will be 'quay.io'.
   $ cat <<EOF > 'kustomization.yaml'
   ---
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   namespace: awx
   resources:
     - github.com/ansible/awx-operator/config/default?ref=2.14.0  # https://github.com/ansible/awx-operator/releases
   images:
     - name: quay.io/ansible/awx-operator
       newTag: 2.14.0  # same as awx-operator in resources
   EOF
   $ minikube kubectl -- apply -k '.'
   namespace/awx created
   …
   deployment.apps/awx-operator-controller-manager created
   $ minikube kubectl -- -n 'awx' get pods
   NAME                                               READY   STATUS    RESTARTS   AGE
   awx-operator-controller-manager-75b667b745-hjfc7   2/2     Running   0          3m43s

   $ cat <<EOF > 'awx-demo.yaml'
   ---
   apiVersion: awx.ansible.com/v1beta1
   kind: AWX
   metadata:
     name: awx-demo
   spec:
     service_type: nodeport
   EOF
   $ yq -iy '.resources+=["awx-demo.yaml"]' 'kustomization.yaml'
   $ minikube kubectl -- apply -k '.'
   serviceaccount/awx-operator-controller-manager unchanged
   …
   deployment.apps/awx-operator-controller-manager unchanged
   awx.awx.ansible.com/awx-demo created
   $ minikube kubectl -- -n 'awx' get pods
   NAME                                               READY   STATUS      RESTARTS   AGE
   awx-demo-migration-24.1.0-kqxcj                    0/1     Completed   0          9s
   awx-demo-postgres-15-0                             1/1     Running     0          61s
   awx-demo-task-7fcbb46c5d-ckf9d                     4/4     Running     0          48s
   awx-demo-web-58668794c8-rfd7d                      3/3     Running     0          49s
   awx-operator-controller-manager-75b667b745-hjfc7   2/2     Running     0          93s

   $ # Default user is 'admin'.
   $ minikube kubectl -- -n 'awx' get secret 'awx-demo-admin-password' -o jsonpath="{.data.password}" | base64 --decode
   L2ZUgNTwtswVW3gtficG1Hd443l3Kicq
   $ xdg-open $(minikube service -n 'awx' 'awx-demo-service' --url)

   $ minikube kubectl -- delete -k '.'
   ```

Run: follow the [helm installation guide][helm install on existing cluster].
1. AMD64, OpenSUSE Leap 15.5, minikube, helm

   ```sh
   $ minikube start --cpus=4 --memory=6g --addons=ingress
   😄  minikube v1.29.0 on Opensuse-Leap 15.5
   …
   🌟  Enabled addons: storage-provisioner, default-storageclass, ingress
   🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

   $ helm repo add 'awx-operator' 'https://ansible.github.io/awx-operator/'
   "awx-operator" has been added to your repositories
   $ helm repo update 'awx-operator'
   Hang tight while we grab the latest from your chart repositories...
   ...Successfully got an update from the "awx-operator" chart repository
   Update Complete. ⎈Happy Helming!⎈
   $ helm search repo 'awx-operator'
   NAME                        CHART VERSION   APP VERSION   DESCRIPTION
   awx-operator/awx-operator   2.14.0          2.14.0        A Helm chart for the AWX Operator

   $ helm -n 'awx' upgrade -i --create-namespace 'my-awx-operator' 'awx-operator/awx-operator' --version '2.14.0'
   Release "my-awx-operator" does not exist. Installing it now.
   NAME: my-awx-operator
   LAST DEPLOYED: Mon Apr  8 15:34:00 2024
   NAMESPACE: awx
   STATUS: deployed
   REVISION: 1
   TEST SUITE: None
   NOTES:
   AWX Operator installed with Helm Chart version 2.14.0
   $ minikube kubectl -- -n 'awx' get pods
   NAME                                              READY   STATUS    RESTARTS   AGE
   awx-operator-controller-manager-8b7dfcb58-k7jt8   2/2     Running   0          3m

   $ helm -n 'awx' upgrade -i --create-namespace 'my-awx-operator' 'awx-operator/awx-operator' --version '2.14.0' \
       --set 'AWX.enabled=true' --set 'AWX.name=awx-demo'
   Release "my-awx-operator" has been upgraded. Happy Helming!
   NAME: my-awx-operator
   LAST DEPLOYED: Mon Apr  8 15:37:47 2024
   NAMESPACE: awx
   STATUS: deployed
   REVISION: 2
   TEST SUITE: None
   NOTES:
   AWX Operator installed with Helm Chart version 2.14.0
   $ minikube kubectl -- -n 'awx' get pods
   NAME                                              READY   STATUS      RESTARTS   AGE
   awx-demo-migration-24.1.0-qhbq2                   0/1     Completed   0          12m
   awx-demo-postgres-15-0                            1/1     Running     0          13m
   awx-demo-task-87756dfbc-chx9t                     4/4     Running     0          12m
   awx-demo-web-69d6d5d6c-wdxlv                      3/3     Running     0          12m
   awx-operator-controller-manager-8b7dfcb58-k7jt8   2/2     Running     0          17m

   $ # Default user is 'admin'.
   $ minikube kubectl -- -n 'awx' get secret 'awx-demo-admin-password' -o jsonpath="{.data.password}" | base64 --decode
   PoU9pFR2J5oFqymgX9I3I8swFgfZVkam
   $ xdg-open $(minikube service -n 'awx' 'awx-demo-service' --url)

   $ helm -n 'awx' uninstall 'my-awx-operator'
   $ minikube kubectl -- delete ns 'awx'
   ```

Run: kustomized helm chart

> [!warning]
> Remember to include the CRDs from the helm chart.
1. AMD64, OpenSUSE Leap 15.5, minikube

   ```sh
   $ minikube start --cpus=4 --memory=6g --addons=ingress
   😄  minikube v1.29.0 on Opensuse-Leap 15.5
   …
   🌟  Enabled addons: storage-provisioner, default-storageclass, ingress
   🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

   $ mkdir -p '/tmp/awx'
   $ cd '/tmp/awx'
   $ cat <<EOF > 'namespace.yaml'
   ---
   apiVersion: v1
   kind: Namespace
   metadata:
     name: awx
   EOF
   $ cat <<EOF > 'kustomization.yaml'
   ---
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   namespace: awx
   resources:
     - namespace.yaml
   helmCharts:
     - name: awx-operator
       repo: https://ansible.github.io/awx-operator/
       version: 2.14.0
       releaseName: awx-operator
       includeCRDs: true
   EOF
   $ minikube kubectl -- apply -f <(minikube kubectl -- kustomize --enable-helm)
   namespace/awx created
   customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com created
   …
   deployment.apps/awx-operator-controller-manager created
   $ minikube kubectl -- -n 'awx' get pods
   NAME                                               READY   STATUS    RESTARTS   AGE
   awx-operator-controller-manager-787d4945fb-fdffx   2/2     Running   0          3m36s

   $ cat <<EOF > 'awx-demo.yaml'
   ---
   apiVersion: awx.ansible.com/v1beta1
   kind: AWX
   metadata:
     name: awx-demo
   spec:
     service_type: nodeport
   EOF
   $ yq -iy '.resources+=["awx-demo.yaml"]' 'kustomization.yaml'
   $ minikube kubectl -- apply -f <(minikube kubectl -- kustomize --enable-helm)
   namespace/awx unchanged
   …
   deployment.apps/awx-operator-controller-manager unchanged
   awx.awx.ansible.com/awx-demo created
   $ minikube kubectl -- -n 'awx' get pods
   NAME                                               READY   STATUS      RESTARTS   AGE
   awx-demo-migration-24.1.0-zwv8w                    0/1     Completed   0          115s
   awx-demo-postgres-15-0                             1/1     Running     0          10m
   awx-demo-task-9c4655cb9-cmz87                      4/4     Running     0          8m3s
   awx-demo-web-77f65cc65f-qhqrm                      3/3     Running     0          8m4s
   awx-operator-controller-manager-787d4945fb-fdffx   2/2     Running     0          14m

   $ # Default user is 'admin'.
   $ minikube kubectl -- -n 'awx' get secret 'awx-demo-admin-password' -o jsonpath="{.data.password}" | base64 --decode
   DgHIaA9onZj106osEmvECigzsBqutHqI
   $ xdg-open $(minikube service -n 'awx' 'awx-demo-service' --url)

   $ minikube kubectl -- delete -f <(minikube kubectl -- kustomize --enable-helm)
   ```
1. AMD64, Mac OS X, EKS

   ```sh
   $ mkdir -p '/tmp/awx'
   $ cd '/tmp/awx'
   $ cat <<EOF > 'namespace.yaml'
   ---
   apiVersion: v1
   kind: Namespace
   metadata:
     name: awx
   EOF
   $ cat <<EOF > 'kustomization.yaml'
   ---
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   namespace: awx
   resources:
     - namespace.yaml
   helmCharts:
     - name: awx-operator
       repo: https://ansible.github.io/awx-operator/
       version: 2.19.1
       releaseName: awx-operator
       includeCRDs: true
   EOF
   $ kubectl kustomize --enable-helm | kubectl apply -f -
   namespace/awx created
   …
   deployment.apps/awx-operator-controller-manager created
   $ kubectl get pods -n 'awx'
   NAME                                               READY   STATUS    RESTARTS   AGE
   awx-operator-controller-manager-3361cfab38-tdgt3   2/2     Running   0          13s

   $ cat <<EOF > 'awx-demo.yaml'
   ---
   apiVersion: awx.ansible.com/v1beta1
   kind: AWX
   metadata:
     name: awx-demo
   spec:
     admin_email: me@example.org
     no_log: false
     node_selector: |
       kubernetes.io/arch: amd64
     service_type: LoadBalancer
     ingress_type: ingress
     ingress_annotations: |
       kubernetes.io/ingress.class: alb
   EOF
   $ yq -iy '.resources+=["awx-demo.yaml"]' 'kustomization.yaml'
   $ kubectl kustomize --enable-helm | kubectl apply -f -
   namespace/awx unchanged
   …
   deployment.apps/awx-operator-controller-manager unchanged
   awx.awx.ansible.com/awx-demo created
   $ kubectl -n 'awx' get pods
   NAME                                               READY   STATUS      RESTARTS   AGE
   awx-demo-migration-24.1.0-zwv8w                    0/1     Completed   0          115s
   awx-demo-postgres-15-0                             1/1     Running     0          10m
   awx-demo-task-8e34efc56-w5rc5                      4/4     Running     0          8m3s
   awx-demo-web-545gbdgg7b-q2q4m                      3/3     Running     0          8m4s
   awx-operator-controller-manager-3361cfab38-tdgt3   2/2     Running     0          14m

   $ # Default user is 'admin'.
   $ kubectl -n 'awx' get secret 'awx-demo-admin-password' -o jsonpath="{.data.password}" | base64 --decode
   IDwYOgL9k2ckaXmqMm6PT4d6TXdJcocd
   $ kubectl -n 'awx' get ingress 'awx-demo-ingress' -o jsonpath='{.status.loadBalancer.ingress[*].hostname}' \
       | xargs -I{} open http://{}

   $ kubectl kustomize --enable-helm | kubectl delete -f -
   namespace "awx" deleted
   …
   awx.awx.ansible.com "awx-demo" deleted
   deployment.apps "awx-operator-controller-manager" deleted
   ```
### Executing jobs

Unless explicitly defined in Job Templates, Schedules, or other resources that allow specifying the `instance_groups` key, Jobs using a containerized execution environment will execute in the _default_ container group.

Normally, the _default_ container group does **not** limit where a Job's pod is executed, **nor** does it limit the pod's assigned resources.
By explicitly configuring this container group, one can change the settings for Jobs that do not ask for custom executors.
E.g., one could set affinity and tolerations to assign Jobs to specific nodes by default, and set specific default resource limits.
```yaml
# ansible playbook
- name: Configure instance group 'default'
  tags: configure_instance_group_default_spot
  awx.awx.instance_group:
    name: default
    is_container_group: true
    pod_spec_override: |-
      apiVersion: v1
      kind: Pod
      metadata:
        namespace: awx
      spec:
        serviceAccountName: default
        automountServiceAccountToken: false
        containers:
          - image: 012345678901.dkr.ecr.eu-west-1.amazonaws.com/infrastructure/awx-ee:latest
            name: worker
            args:
              - ansible-runner
              - worker
              - '--private-data-dir=/runner'
            resources:
              requests:
                cpu: 250m
                memory: 100Mi
              limits:
                cpu: 1830m
                memory: 1425Mi
        tolerations:
          - key: example.org/reservation.app
            operator: Equal
            value: awx
            effect: NoSchedule
          - key: awx.example.org/reservation.component
            operator: Equal
            value: job
            effect: NoSchedule
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: example.org/reservation.app
                      operator: In
                      values:
                        - awx
                    - key: awx.example.org/reservation.component
                      operator: In
                      values:
                        - job
            preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 1
                preference:
                  matchExpressions:
                    - key: eks.amazonaws.com/capacityType
                      operator: In
                      values:
                        - SPOT

- name: Configure instance group 'ondemand'
  tags: configure_instance_group_ondemand
  awx.awx.instance_group:
    name: ondemand
    is_container_group: true
    pod_spec_override: |-
      apiVersion: v1
      kind: Pod
      metadata:
        namespace: awx
      spec:
        serviceAccountName: default
        automountServiceAccountToken: false
        containers:
          - image: 012345678901.dkr.ecr.eu-west-1.amazonaws.com/infrastructure/awx-ee:latest
            name: worker
            args:
              - ansible-runner
              - worker
              - '--private-data-dir=/runner'
            resources:
              requests:
                cpu: 250m
                memory: 100Mi
              limits:
                cpu: 1830m
                memory: 1425Mi
        tolerations:
          - key: example.org/reservation.app
            operator: Equal
            value: awx
            effect: NoSchedule
          - key: awx.example.org/reservation.component
            operator: Equal
            value: job
            effect: NoSchedule
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: example.org/reservation.app
                      operator: In
                      values:
                        - awx
                    - key: awx.example.org/reservation.component
                      operator: In
                      values:
                        - job
                    - key: eks.amazonaws.com/capacityType
                      operator: In
                      values:
                        - ON_DEMAND
```
## Attribute inheritance and overriding

Some AWX-specific resources allow configuring similar attributes.
E.g., Schedules, Workflow Job Template Nodes and Job Templates all define `diff_mode`, `job_type`, and other properties. This is usually true for resource types that can reference another resource type.
It is meant to allow _parent_ resources to override properties of their _children_ through hierarchical inheritance.
E.g.:

- Schedules and Workflow Job Template Nodes can both reference Job Templates via the `unified_job_template` key.
- Schedules and Workflow Job Template Nodes specifying attributes like `diff_mode` or `job_type` will override the attribute of the same name specified by the Job Templates they reference.
Scheduled Jobs' attributes can override referenced Job Templates' properties:

```yml
- awx.awx.job_template:
    organization: ExampleOrg
    name: Some job
    …
    inventory: EC2 instances by Instance ID
    execution_environment: ExampleOrg-EE
    credentials:
      - SSM User         # required to use SSM
      - AWX Central Key  # required to 'become' in tasks
    project: Some project
    playbook: some_playbook.yml
    job_type: check
    verbosity: 3
    diff_mode: true

- awx.awx.schedule:
    organization: ExampleOrg
    unified_job_template: Some Job
    enabled: true
    …
    job_type: run     # spawned jobs override the job template's "job_type" property
    verbosity: 0      # spawned jobs override the job template's "verbosity" property
    diff_mode: false  # spawned jobs override the job template's "diff_mode" property
```

```mermaid
flowchart LR
    job_template("Job Template")
    schedule("Schedule")
    schedule --> job_template
```
This effect is applied recursively to the full reference chain.
Scheduled Jobs' attributes can override referenced Workflow Job Templates' properties, which are propagated to the Job Templates used by their Nodes:

```yml
- awx.awx.job_template:
    organization: ExampleOrg
    name: Some job
    …
    inventory: EC2 instances by Instance ID
    execution_environment: ExampleOrg-EE
    credentials:
      - SSM User         # required to use SSM
      - AWX Central Key  # required to 'become' in tasks
    project: Some project
    playbook: some_playbook.yml
    job_type: check
    verbosity: 3
    diff_mode: true

- awx.awx.workflow_job_template:
    organization: ExampleOrg
    name: Some workflow
    …

- awx.awx.workflow_job_template_node:
    workflow_job_template: Some workflow
    unified_job_template: Some job
    …
    job_type: check  # spawned jobs override the job template's "job_type" property
    verbosity: 3     # spawned jobs override the job template's "verbosity" property
    diff_mode: true  # spawned jobs override the job template's "diff_mode" property

- awx.awx.schedule:
    organization: ExampleOrg
    unified_job_template: Some workflow
    enabled: true
    …
    job_type: run     # spawned workflows override the node's "job_type" property
    verbosity: 0      # spawned workflows override the node's "verbosity" property
    diff_mode: false  # spawned workflows override the node's "diff_mode" property
```

```mermaid
flowchart LR
    job_template("Job Template")
    workflow_job_template("Workflow Job Template")
    workflow_node("Workflow Node")
    schedule("Schedule")
    schedule --> workflow_job_template --> workflow_node --> job_template
```
### Variables inheritance and overriding

Variables inheritance works in a similar fashion to [Attribute inheritance and overriding], but is specific to the `extra_vars` key (A.K.A. _prompts_ in the Web UI). Also see [Extra variables].

Variables defined in parent AWX resources recursively override those defined in children AWX resources **and**, by extension, Ansible resources (playbooks, blocks, tasks, etc).
Variables defined in ancestors **cannot** be overridden by any of the children in the chain, **nor are they affected by any Ansible module or component** during playbook execution.
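A minimal playbook sketch of this behavior (the `greeting` variable is hypothetical): when `greeting` is set in a Job Template's extra variables, both definitions below lose to it and the debug task prints the template's value, exactly as with `ansible-playbook … -e 'greeting=…'`:

```yml
- hosts: localhost
  gather_facts: false
  vars:
    greeting: from the play      # loses to extra vars
  tasks:
    - ansible.builtin.set_fact:
        greeting: from set_fact  # also loses to extra vars
    - ansible.builtin.debug:
        var: greeting            # prints the extra vars' value when one is given
```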
The result is effectively as if they were passed down with the `--extra-vars` CLI option. Refer [Ansible variables].

> [!warning]
> Once a variable is defined in a Job Template or parent resources, it **will** be passed to the Ansible command during
> Job execution, even if its value is set to `null` (it will just be an empty string).
> This also means the values configured in children resources can **at most** be overridden, but **never deleted**.

This limitation is enforced to try and ensure predictable behavior, with higher-level configurations remaining consistent across the whole execution chain.

> [!warning]
> The AWX API has a specific restriction that does **not** consider nullified values for extra variables in resources
> that allow their definition.
> If a resource does not specify a variable with a value, that variable should just **not** be provided in the payload.

## Elevating privileges in tasks

AWX requires one to configure specific settings throughout its resources in order to successfully use `become` and privileges-related keys in playbooks.

1. The playbook must be configured to elevate privileges as per normal Ansible operations.
   ```yml
   - name: Do something by escalating privileges
     hosts: all
     become: true
     tasks: []
   ```
1. The Job Template referencing the playbook must have the _Privilege Escalation_ option **enabled**.
This corresponds to providing the `--become` flag when running the playbook.
1. The Credential used in the Job (either in the Job Template or whatever overrides them) must specify a user that is able to run `sudo` (or whatever `become_method` the playbook uses).

   > [!important]
   > Should the `become_method` require a password, one must also supply that password in the Credential.

## Workflow automation

Refer [How to use workflow job templates in Ansible], [Workflow job templates] and [Workflows].
Also see [Passing Ansible variables in Workflows using set_stats].

_Workflow Job Templates_ coordinate the linking and execution of multiple resources by:

- Synchronizing repositories with code for Projects.
- Synchronizing Inventories.
- Having different Jobs run sequentially or in parallel.
- Running Jobs based on the success or failure of one or more previous Jobs.
- Requesting an admin's approval to proceed with one or more executions.

Workflow Job Templates define their every action as a _Node_ resource.
Creation process:

```mermaid
flowchart LR
    job_template("Job Template")
    playbook("Playbook")
    project("Project")
    workflow_job_template("Workflow Job Template")
    workflow_node("Workflow Node")
    schedule("Schedule")
    playbook --> project --> job_template --> workflow_node --> workflow_job_template --> schedule
```

All the playbooks used in the workflow must be visible to AWX, meaning that one or more Projects containing them must be already configured in the instance.

Workflows need Nodes to reference. Nodes reference a Job Template, which in turn refers to a playbook to run. The AWX UI does not allow creating Nodes directly, but it can be done via the visualizer.

1. Open _Resources_ > _Templates_ in the sidebar.
1. Click on the _Add_ button and choose _Add job template_ to add every job template that is needed.
   Repeat as required.
1. Click on the _Add_ button and choose _Add workflow template_.
1. Fill in the form with the resources all nodes should share, and _save_.
The visualizer will open. 1. In the visualizer, create the needed nodes.
When creating nodes via the `awx.awx.workflow_job_template_node` module, nodes link by referencing the **next** step in the workflow.
As such, Nodes must be defined **last-to-first** in playbooks.
```yml
- awx.awx.workflow_job_template_node:
    workflow_job_template: Some workflow
    identifier: Next action
    …
- awx.awx.workflow_job_template_node:
    workflow_job_template: Some workflow
    identifier: Previous action
    …
    success_nodes:
      - Next action
```
When needing to use specific variables only in some Nodes within a workflow, consider:

1. Specifying those variables **only in the Nodes** that require them, and **not** at the Workflow Job Template's level.
   This prevents them from being overridden by values set at the Workflow Job Template's level.
   Workflow Job Template:

   ```diff
   -assume_role_arn: 'arn:aws:iam::012345678901:role/PowerfulRole'
   ```

   Node:

   ```diff
   +assume_role_arn: 'arn:aws:iam::012345678901:role/PowerfulRole'
    rds_db_instance_identifier: some-db-to-create
   ```
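   Following this approach, a Node carrying its own variables may be declared through the `awx.awx.workflow_job_template_node` module's `extra_data` parameter. A minimal sketch, with illustrative workflow and job template identifiers:

   ```yml
   - awx.awx.workflow_job_template_node:
       workflow_job_template: Some workflow          # illustrative
       identifier: Create the DB instance            # illustrative
       unified_job_template: Some job template       # illustrative
       extra_data:
         # Variables set at the Node's level only, not on the Workflow Job Template
         assume_role_arn: 'arn:aws:iam::012345678901:role/PowerfulRole'
         rds_db_instance_identifier: some-db-to-create
   ```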
1. Designing playbooks to handle variables' presence using Ansible's conditionals, defaults, and facts, e.g. by defining play- or task-specific variables and populating them with the input from workflows.
   ```yml
   - hosts: all
     vars:
       playbook__assume_role_arn: "{{ role_arn }}"
     tasks:
       - vars:
           do_something__rds_instance_identifier: "{{ rds_instance_identifier }}"
         amazon.aws.rds_instance: …
   ```

   Alternatively, enabling jinja evaluation at any level, then using jinja expressions to populate Nodes' extra vars for differently named variables.
This is especially useful when [passing data between Nodes][pass data between workflow nodes].
   Node's `extra_vars`:

   ```yml
   do_something__rds_instance_identifier: "{{ rds_instance_identifier }}"
   ```
1. Using separate Workflow Job Templates altogether, when fundamentally different variable sets are needed.

### Pass data between workflow Nodes

Refer [Passing Ansible variables in Workflows using set_stats].

Leverage the [`set_stats` builtin module][ansible.builtin.set_stats module].

> [!important]
> The artifact system requires Ansible >= v2.2.1.0-0.3.rc3 and the default `set_stats` parameter `per_host: false`
> to work correctly with AWX.

When using `set_stats` in a workflow, AWX saves the pairs configured in the module's `data` parameter as artifacts.
The workflow system implements _cumulative_ artifact inheritance, where artifacts flow down through the workflow graph.
Artifacts are available to **all** the Nodes that are descendants of the one that created them, and **not** only to the Node that immediately follows in the flow.
1. When any Job uses `set_stats`, AWX stores the key-value pairs given in the module's `data` parameter as artifacts.
   **All** artifacts become available to **all descendant** Nodes in the workflow.

   E.g., suppose having a workflow like follows:

   ```plaintext
   Node A* → Node B* → Node C
          ↘ Node D

   * = creates artifacts
   ```

   Both `Node C` and `Node D` will receive the artifacts created by `Node A`, but only `Node C` will **also** receive any artifacts created by `Node B`.

1. Child Nodes of the workflow receive the **cumulative** artifacts from **all** their ancestor Nodes, with the specific rule that **children's artifacts overwrite their parents'**.

   E.g., suppose having a workflow whose path is _Grandparent_ → _Parent_ → _Child_, where both Grandparent and Parent generate artifacts.
Parent's artifacts **will overwrite any conflicting keys** from Grandparent when passed to the Child node.
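   As an illustration of the overwrite rule, with hypothetical artifact keys:

   ```yml
   # Grandparent's artifacts (hypothetical keys)
   environment: staging
   owner: team-a

   # Parent's artifacts
   environment: production

   # Cumulative artifacts received by Child: Parent's value wins on conflicting keys
   environment: production
   owner: team-a
   ```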
> [!warning]
> Artifacts are passed as `extra_vars` to subsequent nodes.
> This gives them higher precedence than the job template's default variables.
Example

Considering a workflow where Node1 needs to pass data to Node2:

1. Playbook for Node1:

   ```yml
   ---
   - name: Get an AWS S3 object's information and pass them along
     hosts: [ … ]
     tasks: [ … ]
     post_tasks:
       - name: Pass the S3 object's information along when found
         tags:
           - always  # important if one plans to test workflows by leveraging tags
           - pass_data_along
         when: s3_object_info is defined
         ansible.builtin.set_stats:
           data:
             s3_object_info: "{{ s3_object_info }}"
   ```

1. Playbook for Node2:

   ```yml
   ---
   - name: Do something knowing an AWS S3 object exists because it got passed along
     hosts: [ … ]
     pre_tasks:
       - name: Ensure the S3 object exists beforehand and is in the STANDARD storage tier
         tags:
           - always  # important if one plans to test workflows by leveraging tags
           - ensure_s3_object_is_usable
         ansible.builtin.assert:
           that:
             - s3_object_info.object_data.content_length | default(0) > 0
             - s3_object_info.object_data.storage_class | default('') == 'STANDARD'
     tasks: [ … ]
   ```
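Following the last-to-first convention described earlier, the two Nodes in this example might then be linked so that Node2 only runs when Node1 succeeds. A sketch, with illustrative workflow and job template identifiers:

```yml
- awx.awx.workflow_job_template_node:
    workflow_job_template: Some workflow                      # illustrative
    identifier: Node2
    unified_job_template: Job template for Node2's playbook   # illustrative
- awx.awx.workflow_job_template_node:
    workflow_job_template: Some workflow
    identifier: Node1
    unified_job_template: Job template for Node1's playbook   # illustrative
    success_nodes:
      - Node2
```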
## API

Refer [AWX API Reference] and [How to use AWX REST API to execute jobs].

AWX offers the `awx` client CLI tool:

```sh
# Install the 'awx' client
# As of 2025-07-28, Python 3.11 is the last Python version for which the AWX CLI works correctly.
pipx install --python '3.11' 'awxkit'
pip3.11 install --user 'awxkit'
```

> [!tip]
> Normally `awx` would require setting the configuration on every command like so:
>
> ```sh
> awx --conf.host https://awx.example.org --conf.username 'admin' --conf.password 'password' config
> awx --conf.host https://awx.example.org --conf.username 'admin' --conf.password 'password' export --schedules
> ```
>
> Export settings to environment variables to avoid having to set them on the command line all the time:
>
> ```sh
> export TOWER_HOST='https://awx.example.org' TOWER_USERNAME='admin' TOWER_PASSWORD='password'
> ```

```sh
# Show the client's configuration
awx config


# List all available endpoints
curl -fs --user 'admin:password' 'https://awx.example.org/api/v2/' | jq '.' -


# List instance groups
awx instance_groups list

# Show instance groups
awx instance_groups get 'default'


# List jobs
awx jobs list
awx jobs list -f 'yaml'
awx jobs list -f 'human' --filter 'name,created,status'
awx jobs list -f 'jq' --filter '.results[] | .name + " is " + .status'


# Show job templates
awx job_templates list
curl -fs --user 'admin:password' 'https://awx.example.org/api/v2/job_templates/' | jq '.' -
awx job_templates get 'Some Job'


# Show notification templates
awx notification_templates list
curl -fs --user 'admin:password' 'https://awx.example.org/api/v2/notification_templates/' | jq '.' -


# Show schedules
awx schedules list
awx schedules get 'Some Schedule'
curl -fs --user 'admin:password' 'https://awx.example.org/api/v2/schedules/' | jq '.' -


# Export data
awx export
awx export --schedules 'schedule-1' 'schedule-n'
awx export --job_templates 'job-template-1' 'job-template-n' --schedules
curl -fs --user 'admin:password' 'https://awx.example.org/api/v2/export/' | jq '.' -
```

Refer [AWX Command Line Interface] for more information.

## Further readings

- [Website]
- [Ansible]
- [Kubernetes]
- [Minikube]
- [Kustomize]
- [Helm]
- [AWX Command Line Interface]
- [DeepWiki][deepwiki ansible/awx]

### Sources

- [AWX's documentation]
- [AWX's repository]
- The [Operator's documentation]
- The [Operator's repository]
- [Basic install]
- [arm64 image pulled shows amd64 as its arch]
- [Helm install on existing cluster]
- [Iterating on the installer without deploying the operator]
- [Installer role's defaults]
- [AWX API Reference]
- [How to use AWX REST API to execute jobs]
- [Automation Job isn't created with tolerations from AWX manifest]

[Attribute inheritance and overriding]: #attribute-inheritance-and-overriding
[Executing Jobs]: #executing-jobs
[Gotchas]: #gotchas
[Pass data between workflow Nodes]: #pass-data-between-workflow-nodes
[Variables inheritance and overriding]: #variables-inheritance-and-overriding

[Ansible]: ansible.md
[Ansible variables]: ansible.md#variables
[Helm]: kubernetes/helm.md
[Kubernetes]: kubernetes/README.md
[Kustomize]: kubernetes/kustomize.md
[Minikube]: kubernetes/minikube.md

[ansible.builtin.set_stats module]: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/set_stats_module.html
[awx api reference]: https://ansible.readthedocs.io/projects/awx/en/latest/rest_api/
[awx command line interface]: https://docs.ansible.com/ansible-tower/latest/html/towercli/
[awx's documentation]: https://ansible.readthedocs.io/projects/awx/en/latest/
[awx's repository]: https://github.com/ansible/awx/
[basic install]: https://ansible.readthedocs.io/projects/awx-operator/en/latest/installation/basic-install.html
[Extra variables]:
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/controller-job-templates#controller-extra-variables
[helm install on existing cluster]: https://ansible.readthedocs.io/projects/awx-operator/en/latest/installation/helm-install-on-existing-cluster.html
[How to use workflow job templates in Ansible]: https://www.redhat.com/en/blog/ansible-workflow-job-templates
[installer role's defaults]: https://github.com/ansible/awx-operator/blob/devel/roles/installer/defaults/main.yml
[iterating on the installer without deploying the operator]: https://ansible.readthedocs.io/projects/awx-operator/en/latest/troubleshooting/debugging.html#iterating-on-the-installer-without-deploying-the-operator
[operator's documentation]: https://ansible.readthedocs.io/projects/awx-operator/en/latest/
[operator's repository]: https://github.com/ansible/awx-operator/
[Red Hat Ansible Automation Platform]: https://www.redhat.com/en/technologies/management/ansible
[website]: https://www.ansible.com/awx/
[Workflow job templates]: https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/controller-workflow-job-templates
[Workflows]: https://docs.ansible.com/automation-controller/4.4/html/userguide/workflows.html
[arm64 image pulled shows amd64 as its arch]: https://github.com/brancz/kube-rbac-proxy/issues/79#issuecomment-826557647
[automation job isn't created with tolerations from awx manifest]: https://github.com/ansible/awx-operator/issues/1099#issuecomment-1298706083
[DeepWiki ansible/awx]: https://deepwiki.com/ansible/awx
[how to use awx rest api to execute jobs]: https://www.dbi-services.com/blog/how-to-use-awx-rest-api-to-execute-jobs/
[Kubernetes operator]: https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
[Passing Ansible variables in Workflows using set_stats]: https://gregsowell.com/?p=7540