Elastic Container Service

  1. TL;DR
  2. How it works
    1. EC2 launch type
    2. Fargate launch type
    3. Standalone tasks
    4. Services
  3. Resource constraints
  4. Storage
    1. EBS volumes
    2. EFS volumes
    3. Docker volumes
    4. Bind mounts
  5. Execute commands in tasks' containers
  6. Allow tasks to communicate with each other
    1. ECS Service Connect
    2. ECS service discovery
    3. VPC Lattice
  7. Scrape metrics using Prometheus
  8. Troubleshooting
    1. Invalid 'cpu' setting for task
  9. Further readings
    1. Sources

TL;DR

Tasks are the basic unit of deployment.
They are instances of the set of containers specified in their own task definition.

Tasks model and run one or more containers, much like Pods in Kubernetes.
Containers cannot run on ECS unless encapsulated in a task.

Standalone tasks start a single task, which is meant to perform some work to completion and then stop (much like batch processes would).
Services run and maintain a defined number of instances of the same task simultaneously, which are meant to stay active and act as replicas of some service (much like web servers would).

Tasks are executed depending on their launch type and capacity providers:

  • On EC2 instances that one owns, manages, and pays for.
  • On Fargate (an AWS-managed serverless environment for container execution).

Unless explicitly restricted or capped, containers in tasks get access to all the CPU and memory capacity available on the host running them.

By default, containers behave like other Linux processes with respect to access to resources like CPU and memory.
Unless explicitly protected and guaranteed, all containers running on the same host share CPU, memory, and other resources much like normal processes running on that host share those very same resources.
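
To protect other workloads on the same host, resource caps can be set per container when registering the task definition. A minimal sketch, assuming a hypothetical 'example' family and 'web' container:

```shell
# Register a task definition whose container is capped at 256 CPU units and
# 512 MiB of memory, so it cannot starve other containers on the host.
cat > 'task-definition.json' <<'EOF'
{
  "family": "example",
  "containerDefinitions": [{
    "name": "web",
    "image": "nginx:stable",
    "cpu": 256,
    "memory": 512,
    "essential": true
  }]
}
EOF
aws ecs register-task-definition --cli-input-json 'file://task-definition.json'
```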

Usage
# List services.
aws ecs list-services --cluster 'clusterName'

# Scale services.
aws ecs update-service --cluster 'clusterName' --service 'serviceName' --desired-count '0'
aws ecs update-service --cluster 'clusterName' --service 'serviceName' --desired-count '10'

# Wait for services to be running.
aws ecs wait services-stable --cluster 'clusterName' --services 'serviceName'

# Delete services.
# Cannot really be deleted if scaled above 0.
aws ecs delete-service --cluster 'clusterName' --service 'serviceName'
aws ecs delete-service --cluster 'clusterName' --service 'serviceName' --force

# List task definitions.
aws ecs list-task-definitions --family-prefix 'familyPrefix'

# Deregister task definitions.
aws ecs deregister-task-definition --task-definition 'taskDefinitionArn'

# Delete task definitions.
# The task definition must be deregistered.
aws ecs delete-task-definitions --task-definitions 'taskDefinitionArn'

# List tasks.
aws ecs list-tasks --cluster 'clusterName'
aws ecs list-tasks --cluster 'clusterName' --service-name 'serviceName'

# Get information about tasks.
aws ecs describe-tasks --cluster 'clusterName' --tasks 'taskIdOrArn'

# Wait for tasks to be running.
aws ecs wait tasks-running --cluster 'clusterName' --tasks 'taskIdOrArn'

# Access shells on containers in ECS.
aws ecs execute-command \
  --cluster 'clusterName' --task 'taskId' --container 'containerName' \
  --interactive --command '/bin/bash'
Real world use cases
# Get the ARNs of tasks for specific services.
aws ecs list-tasks --cluster 'testCluster' --service-name 'testService' --query 'taskArns' --output 'text'

# Get the private DNS name of containers.
aws ecs describe-tasks --output 'text' \
  --cluster 'testCluster' --tasks 'testTask' \
  --query "tasks[].attachments[].details[?(name=='privateDnsName')].value"

# Connect to the private DNS name of containers in ECS.
curl -fs "http://$(\
  aws ecs describe-tasks --cluster 'testCluster' --tasks "$(\
      aws ecs list-tasks --cluster 'testCluster' --service-name 'testService' --query 'taskArns' --output 'text' \
  )" --query "tasks[].attachments[].details[?(name=='privateDnsName')].value" --output 'text' \
):8080"

# Delete services.
aws ecs delete-service --cluster 'testCluster' --service 'testService' --force

# Delete task definitions.
aws ecs list-task-definitions --family-prefix 'testService' --output 'text' --query 'taskDefinitionArns' \
| xargs -n '1' aws ecs deregister-task-definition --task-definition

# Wait for tasks to be running.
aws ecs list-tasks --cluster 'testCluster' --family 'testService' --output 'text' --query 'taskArns' \
| xargs -p aws ecs wait tasks-running --cluster 'testCluster' --tasks
while [[ $(aws ecs list-tasks --query 'taskArns' --output 'text' --cluster 'testCluster' --service-name 'testService') == "" ]]; do sleep 1; done

How it works

Tasks must be registered in task definitions before they can be launched.

Tasks can be executed as Standalone tasks or services.
Whatever the launch type:

  1. On launch, a task is created and moved to the PROVISIONING state.
    While in this state, ECS needs to find compute capacity for the task and neither the task nor its containers exist.

  2. ECS selects the appropriate compute capacity for the task based on its launch type or capacity provider configuration.

    Tasks fail immediately should there not be enough compute capacity for them in the launch type or capacity provider.

    When using a capacity provider with managed scaling enabled, tasks that can't be started due to a lack of compute capacity are kept in the PROVISIONING state while ECS provisions the necessary attachments.

  3. ECS uses the container agent to pull the task's container images.

  4. ECS starts the task's containers.

  5. ECS moves the task into the RUNNING state.
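
The lifecycle above can be observed by querying the task's status. A sketch, using the placeholder identifiers from the examples in this document:

```shell
# Follow a task through the PROVISIONING -> PENDING -> RUNNING states.
aws ecs describe-tasks --cluster 'clusterName' --tasks 'taskIdOrArn' \
  --query 'tasks[0].{lastStatus: lastStatus, desiredStatus: desiredStatus}' \
  --output 'yaml'
```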

EC2 launch type

Starts tasks onto registered EC2 instances.

Instances can be registered:

  • Manually.
  • Automatically, by using the cluster auto scaling feature to dynamically scale the cluster's compute capacity.

Fargate launch type

Starts tasks on dedicated, managed EC2 instances that are not reachable by the users.

Instances are automatically provisioned, configured, and registered to scale one's cluster capacity.
The service takes care itself of all the infrastructure management for the tasks.
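
Fargate tasks require the awsvpc network mode, so launching one needs a network configuration. A sketch, with hypothetical subnet and security group IDs:

```shell
# Launch a standalone task on Fargate in a private subnet.
aws ecs run-task --cluster 'clusterName' --launch-type 'FARGATE' \
  --task-definition 'familyName' \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}'
```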

Standalone tasks

Refer Amazon ECS standalone tasks.

Meant to perform some work, then stop similarly to batch processes.

Can be executed on schedules using the EventBridge Scheduler.
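
A sketch of a scheduled standalone task; all names and ARNs are hypothetical, and the scheduler's execution role must be allowed to call ecs:RunTask:

```shell
# Run a standalone task every night at 02:00 UTC via the EventBridge Scheduler.
aws scheduler create-schedule --name 'nightly-batch' \
  --schedule-expression 'cron(0 2 * * ? *)' \
  --flexible-time-window 'Mode=OFF' \
  --target '{
    "Arn": "arn:aws:ecs:eu-west-1:012345678901:cluster/clusterName",
    "RoleArn": "arn:aws:iam::012345678901:role/schedulerExecutionRole",
    "EcsParameters": {
      "TaskDefinitionArn": "arn:aws:ecs:eu-west-1:012345678901:task-definition/familyName",
      "LaunchType": "EC2"
    }
  }'
```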

Services

Refer Amazon ECS services.

Execute and maintain a defined number of instances of the same task simultaneously in a cluster.

Tasks executed in services are meant to stay active until decommissioned, much like web services.
Should any such task fail or stop, the service scheduler launches another instance of the same task to replace the one that failed.

One can optionally expose services behind a load balancer to distribute traffic across the tasks that the service manages.

The service scheduler will replace unhealthy tasks should a container health check or a load balancer target group health check fail.
This depends on the maximumPercent and desiredCount parameters in the service's definition.

If a task is marked unhealthy, the service scheduler will first start a replacement task. Then:

  • If the replacement task is HEALTHY, the service scheduler stops the unhealthy task.
  • If the replacement task is also UNHEALTHY, the scheduler will stop either the unhealthy replacement task or the existing unhealthy task to get the total task count equal to the desiredCount value.

Should the maximumPercent parameter limit the scheduler from starting a replacement task first, the scheduler will:

  • Stop unhealthy tasks one at a time at random in order to free up capacity.
  • Start a replacement task.

The start and stop process continues until all unhealthy tasks are replaced with healthy tasks.
Should the total task count still exceed desiredCount once all unhealthy tasks have been replaced and only healthy tasks are running, healthy tasks are stopped at random until the total task count equals desiredCount.

The service scheduler includes logic that throttles how often tasks are restarted if they repeatedly fail to start.
If a task is stopped without having entered the RUNNING state, the service scheduler starts to slow down the launch attempts and sends out a service event message.
This prevents unnecessary resources from being used for failed tasks before one can resolve the issue.
On service update, the service scheduler resumes normal scheduling behavior.

Available service scheduler strategies:

  • REPLICA: places and maintains the desired number of tasks across one's cluster.
    By default, tasks are spread across Availability Zones. Use task placement strategies and constraints to customize task placement decisions.

  • DAEMON: deploys exactly one task on each active container instance meeting all of the task placement constraints for the task.
    There is no need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies when using this strategy.

    Fargate does not support the DAEMON scheduling strategy.
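
The strategy is selected at service creation. A sketch with hypothetical cluster, service, and family names:

```shell
# REPLICA (the default): place and maintain a desired number of tasks.
aws ecs create-service --cluster 'clusterName' --service-name 'serviceName' \
  --task-definition 'familyName' --desired-count '3' --scheduling-strategy 'REPLICA'

# DAEMON: exactly one task per active container instance; no desired count.
# Not supported on Fargate.
aws ecs create-service --cluster 'clusterName' --service-name 'daemonService' \
  --task-definition 'familyName' --scheduling-strategy 'DAEMON'
```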

Resource constraints

ECS uses the CPU period and the CPU quota to control the task's CPU hard limits as a whole.
When specifying CPU values in task definitions, ECS translates that value to the CPU period and CPU quota settings that apply to the cgroup running all the containers in the task.

The CPU quota controls the amount of CPU time granted to a cgroup during a given CPU period. Both settings are expressed in terms of microseconds.
When the CPU quota equals the CPU period, a cgroup can execute up to 100% on one vCPU (or any other fraction that totals to 100% for multiple vCPUs). The CPU quota has a maximum of 1000000us, and the CPU period has a minimum of 1ms. Use these values to set the limits for the tasks' CPU count.

When changing the CPU period without changing the CPU quota, the task will have different effective limits than what is specified in the task definition.

The 100ms period allows for vCPUs ranging from 0.125 to 10.
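
The translation can be sketched as arithmetic, assuming the default 100 ms (100000 us) CPU period:

```shell
# quota = cpuUnits / 1024 * period, where 1024 CPU units equal one vCPU.
cpu_units=1024   # 1 vCPU, as specified in the task definition
period_us=100000
quota_us=$(( cpu_units * period_us / 1024 ))
echo "${quota_us}"  # prints 100000: full use of one vCPU per period
```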

Task-level CPU and memory parameters are ignored for Windows containers.

The cpu value must be expressed in CPU units or vCPUs.
vCPUs are converted to CPU units when task definitions are registered.

The memory value can be expressed in MiB or GB.
GB values are converted to MiB when task definitions are registered.

These fields are optional for tasks hosted on EC2.
Such tasks support CPU values between 0.25 and 10 vCPUs.

Task definitions specifying FARGATE as value for the requiresCompatibilities attribute, even if they also specify the EC2 value, are required to set both settings and to set them to one of the pairs listed in the table below.
Fargate task definitions support only those specific values for tasks' CPU and memory.

| CPU units | vCPUs | Memory values                                | Supported OSes | Notes                            |
|-----------|-------|----------------------------------------------|----------------|----------------------------------|
| 256       | 0.25  | 512 MiB, 1 GB, or 2 GB                       | Linux          |                                  |
| 512       | 0.5   | Between 1 GB and 4 GB in 1 GB increments     | Linux          |                                  |
| 1024      | 1     | Between 2 GB and 8 GB in 1 GB increments     | Linux, Windows |                                  |
| 2048      | 2     | Between 4 GB and 16 GB in 1 GB increments    | Linux, Windows |                                  |
| 4096      | 4     | Between 8 GB and 30 GB in 1 GB increments    | Linux, Windows |                                  |
| 8192      | 8     | Between 16 GB and 60 GB in 4 GB increments   | Linux          | Requires Linux platform >= 1.4.0 |
| 16384     | 16    | Between 32 GB and 120 GB in 8 GB increments  | Linux          | Requires Linux platform >= 1.4.0 |

The task's settings are separate from the CPU and memory values that can be defined at the container definition level.
Should both a container-level memory and memoryReservation value be set, the memory value must be higher than the memoryReservation value.
If specifying memoryReservation, that value is guaranteed to the container and subtracted from the available memory resources for the container instance that the container is placed on. Otherwise, the value of memory is used.
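
A sketch of a container definition using both settings (names are hypothetical): the container is guaranteed 256 MiB via the soft limit, and hard-capped at 512 MiB.

```json
{
    "containerDefinitions": [{
        "name": "app",
        "image": "amazonlinux:2",
        "memory": 512,
        "memoryReservation": 256
    }]
}
```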

Storage

Refer Storage options for Amazon ECS tasks.

| Volume type    | Launch type support | OS support     | Persistence                                                                                | Use cases                                                       |
|----------------|---------------------|----------------|--------------------------------------------------------------------------------------------|-----------------------------------------------------------------|
| EBS volumes    | EC2, Fargate        | Linux          | Can be persisted when used by a standalone task; ephemeral when attached to tasks maintained by a service | Transactional workloads                                         |
| EFS volumes    | EC2, Fargate        | Linux          | Persistent                                                                                 | Data analytics, media processing, content management, web serving |
| Docker volumes | EC2                 | Linux, Windows | Persistent                                                                                 | Provide a location for data persistence, sharing data between containers |
| Bind mounts    | EC2, Fargate        | Linux, Windows | Ephemeral                                                                                  | Data analytics, media processing, content management, web serving |

EBS volumes

Refer Use Amazon EBS volumes with Amazon ECS.

One can attach at most one EBS volume to each ECS task, and it must be a new volume.
One cannot attach an existing EBS volume to tasks. However, one can configure a new EBS volume at deployment to use a snapshot of an existing volume as its starting point.

Provisioning volumes from snapshots of EBS volumes that contain partitions is not supported.

EBS volumes can be configured at deployment only for services that use the rolling update deployment type and the Replica scheduling strategy.

Containers in a task will be able to write to the mounted EBS volume only if the container runs as the root user.

ECS automatically adds the AmazonECSCreated and AmazonECSManaged reserved tags to attached volumes.
Should one remove these tags from a volume, ECS won't be able to manage it anymore.

Volumes attached to tasks which are managed by a service are not preserved, and are always deleted upon task's termination.

One cannot configure EBS volumes for attachment to ECS tasks running on AWS Outposts.
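
A sketch of configuring a volume at deployment; the role and names are hypothetical, and the infrastructure role must allow ECS to manage EBS volumes on one's behalf:

```shell
# Attach a new 30 GiB EBS volume to the tasks of a service on its next
# deployment. Requires the rolling update deployment type and the REPLICA
# scheduling strategy.
aws ecs update-service --cluster 'clusterName' --service 'serviceName' \
  --volume-configurations '[{
    "name": "ebs-volume",
    "managedEBSVolume": {
      "roleArn": "arn:aws:iam::012345678901:role/ecsInfrastructureRole",
      "sizeInGiB": 30,
      "volumeType": "gp3",
      "filesystemType": "xfs"
    }
  }]'
```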

EFS volumes

Refer Use Amazon EFS volumes with Amazon ECS.

Allows tasks with access to the same EFS volumes to share persistent storage.

Tasks must:

  • Reference the EFS volumes in the volumes attribute of their definition.
  • Reference the defined volumes in the mountPoints attribute in the containers' specifications.
{
    "volumes": [{
        "name": "myEfsVolume",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-1234",
            "rootDirectory": "/path/to/my/data",
            "transitEncryption": "ENABLED",
            "transitEncryptionPort": integer,
            "authorizationConfig": {
                "accessPointId": "fsap-1234",
                "iam": "ENABLED"
            }
        }
    }],
    "containerDefinitions": [{
        "name": "container-using-efs",
        "image": "amazonlinux:2",
        "entryPoint": [
            "sh",
            "-c"
        ],
        "command": [ "ls -la /mount/efs" ],
        "mountPoints": [{
            "sourceVolume": "myEfsVolume",
            "containerPath": "/mount/efs",
            "readOnly": true
        }]
    }]
}

EFS file systems are supported on:

  • EC2 nodes using ECS-optimized AMI version 20200319 with container agent version 1.38.0.
  • Fargate since platform version 1.4.0 or later (Linux).

Not supported on external instances.

Docker volumes

Refer Use Docker volumes with Amazon ECS.

TODO

Bind mounts

Refer Use bind mounts with Amazon ECS.

TODO

Execute commands in tasks' containers

Refer Using Amazon ECS Exec to access your containers on AWS Fargate and Amazon EC2, A Step-by-Step Guide to Enabling Amazon ECS Exec, aws ecs execute-command results in TargetNotConnectedException The execute command failed due to an internal error and Amazon ECS Exec Checker.

Leverage ECS Exec, which in turn leverages SSM to create a secure channel between one's device and the target container. It does so by bind-mounting the necessary SSM agent binaries into the container while the ECS (or Fargate) agent starts the SSM core agent inside the container.
The agent, when invoked, calls SSM to create the secure channel. In order to do so, the container's ECS task must have the proper IAM privileges for the SSM core agent to call the SSM service.

The SSM agent does not run as a separate container sidecar, but as an additional process inside the application container.
Refer ECS Execute-Command proposal for details.

The whole procedure is transparent and does not require changes to the container's contents.

Requirements:

  • The required SSM components must be available on the EC2 instances hosting the container. Amazon's ECS optimized AMI and Fargate 1.4 include their latest version already.

  • The container's image must have script and cat installed.
    Required in order to have command logs uploaded correctly to S3 and/or CloudWatch.

  • The task's role (not the Task's execution role) must have specific permissions assigned.

    Policy example
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "RequiredSSMPermissions",
                "Effect": "Allow",
                "Action": [
                    "ssmmessages:CreateControlChannel",
                    "ssmmessages:CreateDataChannel",
                    "ssmmessages:OpenControlChannel",
                    "ssmmessages:OpenDataChannel"
                ],
                "Resource": "*"
            },
            {
                "Sid": "RequiredGlobalCloudWatchPermissions",
                "Effect": "Allow",
                "Action": "logs:DescribeLogGroups",
                "Resource": "*"
            },
            {
                "Sid": "RequiredSpecificCloudWatchPermissions",
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogStream",
                    "logs:DescribeLogStreams",
                    "logs:PutLogEvents"
                ],
                "Resource": "arn:aws:logs:eu-west-1:012345678901:log-group:/ecs/log-group-name:*"
            },
            {
                "Sid": "OptionalGlobalS3Permissions",
                "Effect": "Allow",
                "Action": "s3:GetEncryptionConfiguration",
                "Resource": "arn:aws:s3:::ecs-exec-bucket"
            },
            {
                "Sid": "OptionalSpecificS3Permissions",
                "Effect": "Allow",
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::ecs-exec-bucket/*"
            },
            {
                "Sid": "OptionalKMSPermissions",
                "Effect": "Allow",
                "Action": "kms:Decrypt",
                "Resource": "arn:aws:kms:eu-west-1:012345678901:key/abcdef01-2345-6789-abcd-ef0123456789"
            }
        ]
    }
    
  • The service or the run-task command that starts the task must have the enableExecuteCommand setting enabled (the --enable-execute-command flag in the CLI).

    Examples
    aws ecs run-task … --enable-execute-command
    aws ecs update-service --cluster 'stg' --service 'grafana' --enable-execute-command --force-new-deployment
    
  • Users initiating the execution:

    • Must install the Session Manager plugin for the AWS CLI.

    • Must be allowed the ecs:ExecuteCommand action on the ECS cluster.

      Policy example
      {
          "Version": "2012-10-17",
          "Statement": [{
              "Effect": "Allow",
              "Action": "ecs:ExecuteCommand",
              "Resource": "arn:aws:ecs:eu-west-1:012345678901:cluster/staging",
              "Condition": {
                  "StringEquals": {
                      "aws:ResourceTag/application": "appName",
                      "ecs:container-name": "nginx"
                  }
              }
          }]
      }
      

Procedure:

  1. Confirm that the task's ExecuteCommandAgent status is RUNNING and the enableExecuteCommand attribute is set to true.

    Example
    aws ecs describe-tasks --cluster 'staging' --tasks 'ef6260ed8aab49cf926667ab0c52c313' --output 'yaml' \
    --query 'tasks[0] | {
        "managedAgents": containers[].managedAgents[?@.name==`ExecuteCommandAgent`][],
        "enableExecuteCommand": enableExecuteCommand
      }'
    
    enableExecuteCommand: true
    managedAgents:
    - lastStartedAt: '2025-01-28T22:16:59.370000+01:00'
      lastStatus: RUNNING
      name: ExecuteCommandAgent
    
  2. Execute the command.

    Example
    aws ecs execute-command --interactive --command 'df -h' \
      --cluster 'staging' --task 'ef6260ed8aab49cf926667ab0c52c313' --container 'nginx'
    
    The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.
    
    
    Starting session with SessionId: ecs-execute-command-zobkrf3qrif9j962h9pecgnae8
    Filesystem      Size  Used Avail Use% Mounted on
    overlay          31G   12G   18G  40% /
    tmpfs            64M     0   64M   0% /dev
    shm             464M     0  464M   0% /dev/shm
    tmpfs           464M     0  464M   0% /sys/fs/cgroup
    /dev/nvme1n1     31G   12G   18G  40% /etc/hosts
    /dev/nvme0n1p1  4.9G  2.1G  2.8G  43% /managed-agents/execute-command
    tmpfs           464M     0  464M   0% /proc/acpi
    tmpfs           464M     0  464M   0% /sys/firmware
    
    
    Exiting session with sessionId: ecs-execute-command-zobkrf3qrif9j962h9pecgnae8.
    

Should one's command invoke a shell, one will gain interactive access to the container.
In this case, all commands and their outputs inside the shell session will be logged to S3 and/or CloudWatch. The shell invocation command and the user that invoked it will be logged in CloudTrail for auditing purposes as part of the ECS ExecuteCommand API call.

Should one's command invoke a single command, only the output of the command will be logged to S3 and/or CloudWatch. The command itself will still be logged in CloudTrail as part of the ECS ExecuteCommand API call.

Logging options are configured at the ECS cluster level.
The task's role will need to have IAM permissions to log the output to S3 and/or CloudWatch should the cluster be configured for the above options. If the options are not configured, then the permissions are not required.
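
A sketch of configuring those logging options; the cluster, log group, and bucket names are hypothetical:

```shell
# Send ECS Exec session output to CloudWatch Logs and S3 for the whole cluster.
aws ecs update-cluster --cluster 'clusterName' \
  --configuration 'executeCommandConfiguration={logging=OVERRIDE,logConfiguration={cloudWatchLogGroupName=/ecs/log-group-name,s3BucketName=ecs-exec-bucket,s3KeyPrefix=exec-logs}}'
```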

Allow tasks to communicate with each other

Refer How can I allow the tasks in my Amazon ECS services to communicate with each other? and Interconnect Amazon ECS services.

Tasks in a cluster are not normally able to communicate with each other.
Use ECS Service Connect, ECS service discovery, or VPC Lattice to allow that.

ECS Service Connect

ECS Service Connect provides ECS clusters with the configuration they need for service discovery, connectivity, and traffic monitoring.

Applications can use short names and standard ports to connect to services in the same or other clusters.
This includes connecting across VPCs in the same AWS Region.

When using Service Connect, ECS dynamically manages DNS entries for each task as they start and stop.
It does so by running an agent in each task that is configured to discover the names.

One must provide the complete configuration inside each service and task definition.
ECS manages changes to this configuration in each service's deployment and ensures that all tasks in a deployment behave in the same way.

Service Connect is not compatible with ECS' host network mode.
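
A sketch of that configuration; the namespace and names are hypothetical, and the 'http' port name must match a named port mapping in the task definition:

```shell
# Create a service reachable by others in the 'internal' namespace as
# 'api.internal:8080' via Service Connect.
aws ecs create-service --cluster 'clusterName' --service-name 'api' \
  --task-definition 'familyName' --desired-count '2' \
  --service-connect-configuration '{
    "enabled": true,
    "namespace": "internal",
    "services": [{
      "portName": "http",
      "clientAliases": [{ "port": 8080, "dnsName": "api.internal" }]
    }]
  }'
```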

See also Use Service Connect to connect Amazon ECS services with short names.

ECS service discovery

Service discovery helps manage HTTP and DNS namespaces for ECS services.

ECS syncs the list of launched tasks to AWS Cloud Map.
Cloud Map maintains DNS records that resolve to the internal IP addresses of one or more tasks from registered services.
Other services in the same VPC can use such DNS records to send traffic directly to containers using their internal IP addresses.

This approach provides low latency since traffic travels directly between the containers.

ECS service discovery is a good fit when using the awsvpc network mode, where:

  • Each task is assigned its own, unique IP address.
  • That IP address is an A record.
  • Each service can have a unique security group assigned.

When using the bridge network mode, A records are no longer enough for service discovery and one must also use SRV DNS records. This is due to containers sharing the same IP address and having ports mapped randomly.
SRV records can keep track of both IP addresses and port numbers, but require applications to be appropriately configured.

Service discovery supports only the A and SRV DNS record types.
DNS records are automatically added or removed as tasks start or stop for ECS services.
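
A sketch of attaching a service to an existing Cloud Map service registry (the registry ARN is hypothetical):

```shell
# Register the service's tasks in Cloud Map so other services in the VPC can
# resolve them by DNS name.
aws ecs create-service --cluster 'clusterName' --service-name 'serviceName' \
  --task-definition 'familyName' --desired-count '2' \
  --service-registries 'registryArn=arn:aws:servicediscovery:eu-west-1:012345678901:service/srv-0123456789abcdef0'
```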

DNS records have a TTL, and tasks might have died before it expires.
One must implement extra logic in one's applications so that they can retry and deal with connection failures when the records are not yet updated.

See also Use service discovery to connect Amazon ECS services with DNS names.

VPC Lattice

Managed application networking service that customers can use to observe, secure, and monitor applications built across AWS compute services, VPCs, and accounts without having to modify their code.

VPC Lattice technically replaces the need for Application Load Balancers by leveraging target groups directly.
Target groups are collections of compute resources, and can reference EC2 instances, IP addresses, Lambda functions, and Application Load Balancers.
Listeners forward traffic to the specified target groups when their conditions are met.
ECS also automatically replaces unhealthy tasks.

ECS tasks can be enabled as IP targets in VPC Lattice by associating their services with a VPC Lattice target group.
ECS automatically registers tasks to the VPC Lattice target group when they are launched for registered services.

Deployments might take longer when using VPC Lattice due to the extent of changes required.

See also What is Amazon VPC Lattice? and its Amazon VPC Lattice pricing.

Scrape metrics using Prometheus

Refer Prometheus service discovery for AWS ECS and Scraping Prometheus metrics from applications running in AWS ECS.

Prometheus is not currently capable of automatically discovering ECS components like services or tasks.

Solutions:

Troubleshooting

Invalid 'cpu' setting for task

Refer Troubleshoot Amazon ECS task definition invalid CPU or memory errors and Resource constraints.

Cause

One specified an invalid cpu or memory value for the task when registering a task definition using ECS's API or the AWS CLI.

Should the task definition specify FARGATE as value for the requiresCompatibilities attribute, the resource values must be one of the specific pairs supported by Fargate.

Solution

Specify a supported value for the task CPU and memory in your task definition.

Further readings

Sources