Runs and maintains your desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the desiredCount, Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, see the UpdateService action.
In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide .
Tasks for services that don’t use a load balancer are considered healthy if they’re in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they’re in the RUNNING state and the container instance that they’re hosted on is reported as healthy by the load balancer.
There are two service scheduler strategies available:
REPLICA - The replica scheduling strategy places and maintains your desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service Scheduler Concepts in the Amazon Elastic Container Service Developer Guide.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and stops tasks that don’t meet the placement constraints. When using this strategy, you don’t need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Service Scheduler Concepts in the Amazon Elastic Container Service Developer Guide.
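For example, a daemon service might be created with a command like the following sketch, where the cluster, service, and task definition names are placeholders; --desired-count is omitted because the DAEMON strategy manages the task count itself.
aws ecs create-service \
    --cluster MyCluster \
    --service-name MyDaemonService \
    --task-definition my-daemon-task \
    --scheduling-strategy DAEMON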
You can optionally specify a deployment configuration for your service. A deployment is initiated by changing properties of the service, such as its task definition or desired count, with an UpdateService operation. The default value of minimumHealthyPercent is 100% for a replica service and 0% for a daemon service.
If a service uses the ECS deployment controller, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING state during a deployment, as a percentage of your desired number of tasks (rounded up to the nearest integer). This limit also applies while any container instances are in the DRAINING state, if the service contains tasks using the EC2 launch type. Using this parameter, you can deploy without using additional cluster capacity. For example, if you set your service to have a desired number of four tasks and a minimum healthy percent of 50%, the scheduler might stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that don’t use a load balancer are considered healthy if they’re in the RUNNING state. Tasks for services that do use a load balancer are considered healthy if they’re in the RUNNING state and reported as healthy by the load balancer. The default value for minimum healthy percent is 100%.
If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer). This limit also applies while any container instances are in the DRAINING state, if the service contains tasks using the EC2 launch type. Using this parameter, you can define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%.
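As a sketch of the four-task example above (the cluster, service, and task definition names are placeholders), the following command sets a minimum healthy percent of 50% and a maximum percent of 200%, which lets the scheduler stop two existing tasks before starting their replacements:
aws ecs create-service \
    --cluster MyCluster \
    --service-name MyService \
    --task-definition my-task:1 \
    --desired-count 4 \
    --deployment-configuration "maximumPercent=200,minimumHealthyPercent=50"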
If a service uses either the CODE_DEPLOY or EXTERNAL deployment controller types and tasks that use the EC2 launch type, the minimum healthy percent and maximum percent values are used only to define the lower and upper limit on the number of tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. If the tasks in the service use the Fargate launch type, the minimum healthy percent and maximum percent values aren’t used, even though they’re currently visible when describing your service.
When creating a service that uses the EXTERNAL deployment controller, you can specify only parameters that aren’t controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
When the service scheduler launches new tasks, it determines task placement in your cluster using the following logic:
Determine which of the container instances in your cluster can support the task definition of your service. For example, they have the required CPU, memory, ports, and container instance attributes.
By default, the service scheduler attempts to balance tasks across Availability Zones in this manner, although you can choose a different placement strategy with the placementStrategy parameter.
Sort the valid container instances, giving priority to instances that have the fewest number of running tasks for this service in their respective Availability Zone. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone based on the previous steps, favoring container instances with the fewest number of running tasks for this service.
See also: AWS API Documentation
See ‘aws help’ for descriptions of global parameters.
create-service
[--cluster <value>]
--service-name <value>
[--task-definition <value>]
[--load-balancers <value>]
[--service-registries <value>]
[--desired-count <value>]
[--client-token <value>]
[--launch-type <value>]
[--capacity-provider-strategy <value>]
[--platform-version <value>]
[--role <value>]
[--deployment-configuration <value>]
[--placement-constraints <value>]
[--placement-strategy <value>]
[--network-configuration <value>]
[--health-check-grace-period-seconds <value>]
[--scheduling-strategy <value>]
[--deployment-controller <value>]
[--tags <value>]
[--enable-ecs-managed-tags | --no-enable-ecs-managed-tags]
[--propagate-tags <value>]
[--enable-execute-command | --disable-execute-command]
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
--cluster
(string)
The short name or full Amazon Resource Name (ARN) of the cluster that you run your service on. If you do not specify a cluster, the default cluster is assumed.
--service-name
(string)
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.
--task-definition
(string)
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision isn’t specified, the latest ACTIVE revision is used.
A task definition must be specified if the service uses either the ECS or CODE_DEPLOY deployment controllers.
--load-balancers
(list)
A load balancer object representing the load balancers to use with your service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide .
If the service uses the rolling update (ECS) deployment controller and either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If the service uses the CODE_DEPLOY deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating a CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair). During a deployment, CodeDeploy determines which task set in your service has the status PRIMARY, and it associates one target group with it. Then, it also associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that you can use to perform validation tests with Lambda functions before routing production traffic to it.
After you create a service using the ECS deployment controller, the load balancer name or target group ARN, container name, and container port that’s specified in the service definition are immutable. If you use the CODE_DEPLOY deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name, and the container port to access from the load balancer. The container name must be as it appears in a container definition. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group that’s specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name, and the container port to access from the load balancer. The container name must be as it appears in a container definition. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer that’s specified here.
Services with tasks that use the awsvpc network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers aren’t supported. Also, when you create any target groups for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance.
(structure)
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
targetGroupArn -> (string)
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you’re using a Classic Load Balancer, omit the target group ARN.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering Multiple Target Groups with a Service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you’re required to define two target groups for the load balancer. For more information, see Blue/Green Deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
Warning
If your service’s task definition uses the awsvpc network mode, you must choose ip as the target type, not instance. Do this when creating your target groups because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
loadBalancerName -> (string)
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
containerName -> (string)
The name of the container (as it appears in a container definition) to associate with the load balancer.
containerPort -> (integer)
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they’re launched on must allow ingress traffic on the hostPort of the port mapping.
Shorthand Syntax:
targetGroupArn=string,loadBalancerName=string,containerName=string,containerPort=integer ...
JSON Syntax:
[
{
"targetGroupArn": "string",
"loadBalancerName": "string",
"containerName": "string",
"containerPort": integer
}
...
]
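For example, a rolling update (ECS) service behind an Application Load Balancer might attach a single target group as in the following sketch; the target group ARN, container name, and port are placeholders that must match your own target group and task definition:
aws ecs create-service \
    --cluster MyCluster \
    --service-name MyWebService \
    --task-definition my-web-task:1 \
    --desired-count 2 \
    --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/1234567890123456,containerName=web,containerPort=80"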
--service-registries
(list)
The details of the service discovery registry to associate with this service. For more information, see Service discovery .
Note
Each service may be associated with one service registry. Multiple service registries for each service aren’t supported.
(structure)
The details for the service registry.
registryArn -> (string)
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService .
port -> (integer)
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName -> (string)
The container name value to be used for your service discovery service. It’s already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can’t specify both.
containerPort -> (integer)
The port value to be used for your service discovery service. It’s already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can’t specify both.
Shorthand Syntax:
registryArn=string,port=integer,containerName=string,containerPort=integer ...
JSON Syntax:
[
{
"registryArn": "string",
"port": integer,
"containerName": "string",
"containerPort": integer
}
...
]
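For example, to associate a service with an existing Cloud Map service, you might run something like the following sketch; the registry ARN, cluster, service, and task definition names are placeholders:
aws ecs create-service \
    --cluster MyCluster \
    --service-name MyService \
    --task-definition my-task:1 \
    --desired-count 1 \
    --service-registries "registryArn=arn:aws:servicediscovery:us-west-2:123456789012:service/srv-12345678901234567"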
--desired-count
(integer)
The number of instantiations of the specified task definition to place and keep running on your cluster.
This is required if schedulingStrategy is REPLICA or isn’t specified. If schedulingStrategy is DAEMON then this isn’t required.
--client-token
(string)
An identifier that you provide to ensure the idempotency of the request. It must be unique and is case sensitive. Up to 32 ASCII characters are allowed.
--launch-type
(string)
The infrastructure that you run your service on. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide .
The FARGATE launch type runs your tasks on Fargate On-Demand infrastructure.
Note
Fargate Spot infrastructure is available for use but a capacity provider strategy must be used. For more information, see Fargate capacity providers in the Amazon ECS User Guide for Fargate.
The EC2 launch type runs your tasks on Amazon EC2 instances registered to your cluster.
The EXTERNAL launch type runs your tasks on your on-premises server or virtual machine (VM) capacity registered to your cluster.
A service can use either a launch type or a capacity provider strategy. If a launchType is specified, the capacityProviderStrategy parameter must be omitted.
Possible values:
EC2
FARGATE
EXTERNAL
--capacity-provider-strategy
(list)
The capacity provider strategy to use for the service.
If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used.
A capacity provider strategy may contain a maximum of 6 capacity providers.
(structure)
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider -> (string)
The short name of the capacity provider.
weight -> (integer)
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero, and any capacity providers with a weight of 0 can’t be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers, both with a weight of 1. When the base is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that’s run using capacityProviderA, four tasks would use capacityProviderB.
base -> (integer)
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Shorthand Syntax:
capacityProvider=string,weight=integer,base=integer ...
JSON Syntax:
[
{
"capacityProvider": "string",
"weight": integer,
"base": integer
}
...
]
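As a sketch of the weight and base behavior described above, the following command, assuming the FARGATE and FARGATE_SPOT capacity providers are already associated with the cluster, runs at least one task on FARGATE and splits the remaining tasks 1:4 between FARGATE and FARGATE_SPOT. The cluster, service, task definition, subnet, and security group values are placeholders:
aws ecs create-service \
    --cluster MyCluster \
    --service-name MyService \
    --task-definition my-task:1 \
    --desired-count 6 \
    --capacity-provider-strategy capacityProvider=FARGATE,weight=1,base=1 capacityProvider=FARGATE_SPOT,weight=4 \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-12344321],securityGroups=[sg-12344321],assignPublicIp=ENABLED}"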
--platform-version
(string)
The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn’t specified, the LATEST platform version is used. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
--role
(string)
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition doesn’t use the awsvpc network mode. If you specify the role parameter, you must also specify a load balancer object with the loadBalancers parameter.
Warning
If your account has already created the Amazon ECS service-linked role, that role is used for your service unless you specify a role here. The service-linked role is required if your task definition uses the awsvpc network mode or if the service is configured to use service discovery, an external deployment controller, multiple target groups, or Elastic Inference accelerators; in these cases, you don’t specify a role here. For more information, see Using service-linked roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/, then you would specify /foo/bar as the role name. For more information, see Friendly names and paths in the IAM User Guide.
--deployment-configuration
(structure)
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker -> (structure)
Note
The deployment circuit breaker can only be used for services using the rolling update (ECS) deployment type.
The deployment circuit breaker determines whether a service deployment will fail if the service can’t reach a steady state. If deployment circuit breaker is enabled, a service deployment will transition to a failed state and stop launching new tasks. If rollback is enabled, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable -> (boolean)
Determines whether to enable the deployment circuit breaker logic for the service.
rollback -> (boolean)
Determines whether to enable Amazon ECS to roll back the service if a service deployment fails. If rollback is enabled, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent -> (integer)
If a service is using the rolling update (ECS) deployment type, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer), and while any container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. This parameter enables you to define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%.
If a service is using the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and tasks that use the EC2 launch type, the maximum percent value is set to the default value and is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
minimumHealthyPercent -> (integer)
If a service is using the rolling update (ECS) deployment type, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer), and while any container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desired number of four tasks and a minimum healthy percent of 50%, the scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they’re in the RUNNING state; tasks for services that do use a load balancer are considered healthy if they’re in the RUNNING state and they’re reported as healthy by the load balancer. The default value for minimum healthy percent is 100%.
If a service is using the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value and is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. If the tasks in the service use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
Shorthand Syntax:
deploymentCircuitBreaker={enable=boolean,rollback=boolean},maximumPercent=integer,minimumHealthyPercent=integer
JSON Syntax:
{
"deploymentCircuitBreaker": {
"enable": true|false,
"rollback": true|false
},
"maximumPercent": integer,
"minimumHealthyPercent": integer
}
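For example, to enable the deployment circuit breaker with automatic rollback using the shorthand syntax above (the cluster, service, and task definition names are placeholders), you could run:
aws ecs create-service \
    --cluster MyCluster \
    --service-name MyService \
    --task-definition my-task:1 \
    --desired-count 2 \
    --deployment-configuration "deploymentCircuitBreaker={enable=true,rollback=true},maximumPercent=200,minimumHealthyPercent=100"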
--placement-constraints
(list)
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
(structure)
An object representing a constraint on task placement. For more information, see Task Placement Constraints in the Amazon Elastic Container Service Developer Guide .
Note
If you’re using the Fargate launch type, task placement constraints aren’t supported.
type -> (string)
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression -> (string)
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can’t specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
Shorthand Syntax:
type=string,expression=string ...
JSON Syntax:
[
{
"type": "distinctInstance"|"memberOf",
"expression": "string"
}
...
]
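For example, the following sketch ensures that each task in the service lands on a different container instance; the cluster, service, and task definition names are placeholders, and a memberOf constraint would additionally take an expression written in the cluster query language:
aws ecs create-service \
    --cluster MyCluster \
    --service-name MyService \
    --task-definition my-task:1 \
    --desired-count 2 \
    --launch-type EC2 \
    --placement-constraints type=distinctInstance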
--placement-strategy
(list)
The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules for each service.
(structure)
The task placement strategy for a task or service. For more information, see Task Placement Strategies in the Amazon Elastic Container Service Developer Guide .
type -> (string)
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that’s specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field -> (string)
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that’s applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
Shorthand Syntax:
type=string,field=string ...
JSON Syntax:
[
{
"type": "random"|"spread"|"binpack",
"field": "string"
}
...
]
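For example, a common combination is to spread tasks across Availability Zones and then binpack on memory within each zone; the cluster, service, and task definition names below are placeholders:
aws ecs create-service \
    --cluster MyCluster \
    --service-name MyService \
    --task-definition my-task:1 \
    --desired-count 4 \
    --launch-type EC2 \
    --placement-strategy type=spread,field=attribute:ecs.availability-zone type=binpack,field=memory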
--network-configuration
(structure)
The network configuration for the service. This parameter is required for task definitions that use the awsvpc network mode to receive their own elastic network interface, and it isn’t supported for other network modes. For more information, see Task networking in the Amazon Elastic Container Service Developer Guide.
awsvpcConfiguration -> (structure)
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
subnets -> (list)
The IDs of the subnets associated with the task or service. There’s a limit of 16 subnets that can be specified per AwsVpcConfiguration.
Note
All specified subnets must be from the same VPC.
(string)
securityGroups -> (list)
The IDs of the security groups associated with the task or service. If you don’t specify a security group, the default security group for the VPC is used. There’s a limit of 5 security groups that can be specified per AwsVpcConfiguration.
Note
All specified security groups must be from the same VPC.
(string)
assignPublicIp -> (string)
Whether the task’s elastic network interface receives a public IP address. The default value is DISABLED.
Shorthand Syntax:
awsvpcConfiguration={subnets=[string,string],securityGroups=[string,string],assignPublicIp=string}
JSON Syntax:
{
"awsvpcConfiguration": {
"subnets": ["string", ...],
"securityGroups": ["string", ...],
"assignPublicIp": "ENABLED"|"DISABLED"
}
}
--health-check-grace-period-seconds
(integer)
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. This is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don’t specify a health check grace period value, the default value of 0 is used.
If your service’s tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
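For example, to give slow-starting tasks five minutes before their load balancer health checks count against them, you might specify a command like the following sketch; the target group ARN, container name, and other names are placeholders:
aws ecs create-service \
    --cluster MyCluster \
    --service-name MyWebService \
    --task-definition my-web-task:1 \
    --desired-count 2 \
    --health-check-grace-period-seconds 300 \
    --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/1234567890123456,containerName=web,containerPort=80"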
--scheduling-strategy
(string)
The scheduling strategy to use for the service. For more information, see Services .
There are two service scheduler strategies available:
REPLICA - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. This scheduler strategy is required if the service uses the CODE_DEPLOY or EXTERNAL deployment controller types.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that don’t meet the placement constraints. When you’re using this strategy, you don’t need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.
Note
Tasks using the Fargate launch type or the CODE_DEPLOY or EXTERNAL deployment controller types don’t support the DAEMON scheduling strategy.
Possible values:
REPLICA
DAEMON
--deployment-controller
(structure)
The deployment controller to use for the service. If no deployment controller is specified, the default value of ECS is used.
type -> (string)
The deployment controller type to use.
There are three deployment controller types available:
ECS
The rolling update (ECS) deployment type involves replacing the current running version of the container with the latest version. The number of containers Amazon ECS adds or removes from the service during a rolling update is controlled by adjusting the minimum and maximum number of healthy tasks allowed during a service deployment, as specified in the DeploymentConfiguration.
CODE_DEPLOY
The blue/green (CODE_DEPLOY) deployment type uses the blue/green deployment model powered by CodeDeploy, which allows you to verify a new deployment of a service before sending production traffic to it.
EXTERNAL
The external (EXTERNAL) deployment type enables you to use any third-party deployment controller for full control over the deployment process for an Amazon ECS service.
Shorthand Syntax:
type=string
JSON Syntax:
{
"type": "ECS"|"CODE_DEPLOY"|"EXTERNAL"
}
--tags
(list)
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(structure)
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key -> (string)
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value -> (string)
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
Shorthand Syntax:
key=string,value=string ...
JSON Syntax:
[
{
"key": "string",
"value": "string"
}
...
]
--enable-ecs-managed-tags
| --no-enable-ecs-managed-tags
(boolean)
Specifies whether to enable Amazon ECS managed tags for the tasks within the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide .
--propagate-tags
(string)
Specifies whether to propagate the tags from the task definition or the service to the tasks in the service. If no value is specified, the tags aren’t propagated. Tags can only be propagated to the tasks within the service during service creation. To add tags to a task after service creation or task creation, use the TagResource API action.
Possible values:
TASK_DEFINITION
SERVICE
--enable-execute-command
| --disable-execute-command
(boolean)
Determines whether the execute command functionality is enabled for the service. If true, this enables execute command functionality on all containers in the service tasks.
--cli-input-json
| --cli-input-yaml
(string)
Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.
--generate-cli-skeleton
(string)
Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
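For example, you can generate an input skeleton, edit the resulting file to fill in your service settings, and then pass it back with --cli-input-json; the file name here is arbitrary:
aws ecs create-service --generate-cli-skeleton > create-service-input.json
aws ecs create-service --cli-input-json file://create-service-input.json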
See ‘aws help’ for descriptions of global parameters.
Example 1: To create a service with a Fargate task
The following create-service
example shows how to create a service using a Fargate task.
aws ecs create-service \
--cluster MyCluster \
--service-name MyService \
--task-definition sample-fargate:1 \
--desired-count 2 \
--launch-type FARGATE \
--platform-version LATEST \
--network-configuration "awsvpcConfiguration={subnets=[subnet-12344321],securityGroups=[sg-12344321],assignPublicIp=ENABLED}" \
--tags key=key1,value=value1 key=key2,value=value2 key=key3,value=value3
Output:
{
"service": {
"serviceArn": "arn:aws:ecs:us-west-2:123456789012:service/MyCluster/MyService",
"serviceName": "MyService",
"clusterArn": "arn:aws:ecs:us-west-2:123456789012:cluster/MyCluster",
"loadBalancers": [],
"serviceRegistries": [],
"status": "ACTIVE",
"desiredCount": 2,
"runningCount": 0,
"pendingCount": 0,
"launchType": "FARGATE",
"platformVersion": "LATEST",
"taskDefinition": "arn:aws:ecs:us-west-2:123456789012:task-definition/sample-fargate:1",
"deploymentConfiguration": {
"maximumPercent": 200,
"minimumHealthyPercent": 100
},
"deployments": [
{
"id": "ecs-svc/1234567890123456789",
"status": "PRIMARY",
"taskDefinition": "arn:aws:ecs:us-west-2:123456789012:task-definition/sample-fargate:1",
"desiredCount": 2,
"pendingCount": 0,
"runningCount": 0,
"createdAt": 1557119253.821,
"updatedAt": 1557119253.821,
"launchType": "FARGATE",
"platformVersion": "1.3.0",
"networkConfiguration": {
"awsvpcConfiguration": {
"subnets": [
"subnet-12344321"
],
"securityGroups": [
"sg-12344321"
],
"assignPublicIp": "ENABLED"
}
}
}
],
"roleArn": "arn:aws:iam::123456789012:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS",
"events": [],
"createdAt": 1557119253.821,
"placementConstraints": [],
"placementStrategy": [],
"networkConfiguration": {
"awsvpcConfiguration": {
"subnets": [
"subnet-12344321"
],
"securityGroups": [
"sg-12344321"
],
"assignPublicIp": "ENABLED"
}
},
"schedulingStrategy": "REPLICA",
"tags": [
{
"key": "key1",
"value": "value1"
},
{
"key": "key2",
"value": "value2"
},
{
"key": "key3",
"value": "value3"
}
],
"enableECSManagedTags": false,
"propagateTags": "NONE"
}
}
Example 2: To create a service using the EC2 launch type
The following create-service
example shows how to create a service called ecs-simple-service
with a task that uses the EC2 launch type. The service uses the sleep360
task definition and it maintains 1 instantiation of the task.
aws ecs create-service \
--cluster MyCluster \
--service-name ecs-simple-service \
--task-definition sleep360:2 \
--desired-count 1
Output:
{
"service": {
"serviceArn": "arn:aws:ecs:us-west-2:123456789012:service/MyCluster/ecs-simple-service",
"serviceName": "ecs-simple-service",
"clusterArn": "arn:aws:ecs:us-west-2:123456789012:cluster/MyCluster",
"loadBalancers": [],
"serviceRegistries": [],
"status": "ACTIVE",
"desiredCount": 1,
"runningCount": 0,
"pendingCount": 0,
"launchType": "EC2",
"taskDefinition": "arn:aws:ecs:us-west-2:123456789012:task-definition/sleep360:2",
"deploymentConfiguration": {
"maximumPercent": 200,
"minimumHealthyPercent": 100
},
"deployments": [
{
"id": "ecs-svc/1234567890123456789",
"status": "PRIMARY",
"taskDefinition": "arn:aws:ecs:us-west-2:123456789012:task-definition/sleep360:2",
"desiredCount": 1,
"pendingCount": 0,
"runningCount": 0,
"createdAt": 1557206498.798,
"updatedAt": 1557206498.798,
"launchType": "EC2"
}
],
"events": [],
"createdAt": 1557206498.798,
"placementConstraints": [],
"placementStrategy": [],
"schedulingStrategy": "REPLICA",
"enableECSManagedTags": false,
"propagateTags": "NONE"
}
}
Example 3: To create a service that uses an external deployment controller
The following create-service
example creates a service that uses an external deployment controller.
aws ecs create-service \
--cluster MyCluster \
--service-name MyService \
--deployment-controller type=EXTERNAL \
--desired-count 1
Output:
{
"service": {
"serviceArn": "arn:aws:ecs:us-west-2:123456789012:service/MyCluster/MyService",
"serviceName": "MyService",
"clusterArn": "arn:aws:ecs:us-west-2:123456789012:cluster/MyCluster",
"loadBalancers": [],
"serviceRegistries": [],
"status": "ACTIVE",
"desiredCount": 1,
"runningCount": 0,
"pendingCount": 0,
"launchType": "EC2",
"deploymentConfiguration": {
"maximumPercent": 200,
"minimumHealthyPercent": 100
},
"taskSets": [],
"deployments": [],
"roleArn": "arn:aws:iam::123456789012:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS",
"events": [],
"createdAt": 1557128207.101,
"placementConstraints": [],
"placementStrategy": [],
"schedulingStrategy": "REPLICA",
"deploymentController": {
"type": "EXTERNAL"
},
"enableECSManagedTags": false,
"propagateTags": "NONE"
}
}
Example 4: To create a new service behind a load balancer
The following create-service
example shows how to create a service that is behind a load balancer. You must have a load balancer configured in the same Region as your container instance. This example uses the --cli-input-json
option and a JSON input file called ecs-simple-service-elb.json
with the following content:
{
"serviceName": "ecs-simple-service-elb",
"taskDefinition": "ecs-demo",
"loadBalancers": [
{
"loadBalancerName": "EC2Contai-EcsElast-123456789012",
"containerName": "simple-demo",
"containerPort": 80
}
],
"desiredCount": 10,
"role": "ecsServiceRole"
}
Command:
aws ecs create-service \
--cluster MyCluster \
--service-name ecs-simple-service-elb \
--cli-input-json file://ecs-simple-service-elb.json
Output:
{
"service": {
"status": "ACTIVE",
"taskDefinition": "arn:aws:ecs:us-west-2:123456789012:task-definition/ecs-demo:1",
"pendingCount": 0,
"loadBalancers": [
{
"containerName": "ecs-demo",
"containerPort": 80,
"loadBalancerName": "EC2Contai-EcsElast-123456789012"
}
],
"roleArn": "arn:aws:iam::123456789012:role/ecsServiceRole",
"desiredCount": 10,
"serviceName": "ecs-simple-service-elb",
"clusterArn": "arn:aws:ecs:<us-west-2:123456789012:cluster/MyCluster",
"serviceArn": "arn:aws:ecs:us-west-2:123456789012:service/ecs-simple-service-elb",
"deployments": [
{
"status": "PRIMARY",
"pendingCount": 0,
"createdAt": 1428100239.123,
"desiredCount": 10,
"taskDefinition": "arn:aws:ecs:us-west-2:123456789012:task-definition/ecs-demo:1",
"updatedAt": 1428100239.123,
"id": "ecs-svc/1234567890123456789",
"runningCount": 0
}
],
"events": [],
"runningCount": 0
}
}
For more information, see Creating a Service in the Amazon ECS Developer Guide.
service -> (structure)
The full description of your service following the create call.
A service will return either a capacityProviderStrategy or launchType parameter, but not both, depending on where one was specified when it was created.
If a service is using the ECS deployment controller, the deploymentController and taskSets parameters will not be returned.
If the service uses the CODE_DEPLOY deployment controller, the deploymentController, taskSets and deployments parameters will be returned; however, the deployments parameter will be an empty list.
serviceArn -> (string)
The ARN that identifies the service. The ARN contains the arn:aws:ecs namespace, followed by the Region of the service, the Amazon Web Services account ID of the service owner, the service namespace, and then the service name. For example, arn:aws:ecs:region:012345678910:service/my-service.
serviceName -> (string)
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions.
clusterArn -> (string)
The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers -> (list)
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
(structure)
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
targetGroupArn -> (string)
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you’re using a Classic Load Balancer, omit the target group ARN.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering Multiple Target Groups with a Service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you’re required to define two target groups for the load balancer. For more information, see Blue/Green Deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
Warning
If your service’s task definition uses the awsvpc network mode, you must choose ip as the target type, not instance. Do this when creating your target groups because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
loadBalancerName -> (string)
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
containerName -> (string)
The name of the container (as it appears in a container definition) to associate with the load balancer.
containerPort -> (integer)
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they’re launched on must allow ingress traffic on the hostPort of the port mapping.
serviceRegistries -> (list)
The details for the service discovery registries to assign to this service. For more information, see Service Discovery .
(structure)
The details for the service registry.
registryArn -> (string)
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService .
port -> (integer)
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName -> (string)
The container name value to be used for your service discovery service. It’s already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can’t specify both.
containerPort -> (integer)
The port value to be used for your service discovery service. It’s already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can’t specify both.
status -> (string)
The status of the service. The valid values are ACTIVE, DRAINING, or INACTIVE.
desiredCount -> (integer)
The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService , and it can be modified with UpdateService .
runningCount -> (integer)
The number of tasks in the cluster that are in the RUNNING state.
pendingCount -> (integer)
The number of tasks in the cluster that are in the PENDING state.
launchType -> (string)
The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy.
capacityProviderStrategy -> (list)
The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type.
(structure)
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider -> (string)
The short name of the capacity provider.
weight -> (integer)
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero, and any capacity providers with a weight of 0 can’t be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers, both with a weight of 1. When the base is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that’s run using capacityProviderA, four tasks would use capacityProviderB.
base -> (integer)
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
platformVersion -> (string)
The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn’t specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily -> (string)
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
taskDefinition -> (string)
The task definition to use for tasks in the service. This value is specified when the service is created with CreateService , and it can be modified with UpdateService .
deploymentConfiguration -> (structure)
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker -> (structure)
Note
The deployment circuit breaker can only be used for services using the rolling update (ECS) deployment type.
The deployment circuit breaker determines whether a service deployment will fail if the service can’t reach a steady state. If deployment circuit breaker is enabled, a service deployment will transition to a failed state and stop launching new tasks. If rollback is enabled, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable -> (boolean)
Determines whether to enable the deployment circuit breaker logic for the service.
rollback -> (boolean)
Determines whether to enable Amazon ECS to roll back the service if a service deployment fails. If rollback is enabled, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent -> (integer)
If a service is using the rolling update (ECS) deployment type, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer), and while any container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. This parameter enables you to define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%.
If a service is using the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and tasks that use the EC2 launch type, the maximum percent value is set to the default value and is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
minimumHealthyPercent -> (integer)
If a service is using the rolling update (ECS) deployment type, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer), and while any container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desired number of four tasks and a minimum healthy percent of 50%, the scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they’re in the RUNNING state; tasks for services that do use a load balancer are considered healthy if they’re in the RUNNING state and they’re reported as healthy by the load balancer. The default value for minimum healthy percent is 100%.
If a service is using the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value and is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. If the tasks in the service use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
taskSets -> (list)
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
(structure)
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id -> (string)
The ID of the task set.
taskSetArn -> (string)
The Amazon Resource Name (ARN) of the task set.
serviceArn -> (string)
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn -> (string)
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy -> (string)
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn’t used.
externalId -> (string)
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status -> (string)
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn’t serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition -> (string)
The task definition that the task set is using.
computedDesiredCount -> (integer)
The computed desired count for the task set. This is calculated by multiplying the service’s desiredCount by the task set’s scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
pendingCount -> (integer)
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it’s restarted after being in the STOPPED state.
runningCount -> (integer)
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt -> (timestamp)
The Unix timestamp for the time when the task set was created.
updatedAt -> (timestamp)
The Unix timestamp for the time when the task set was last updated.
launchType -> (string)
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide .
capacityProviderStrategy -> (list)
The capacity provider strategy that is associated with the task set.
(structure)
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
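As an illustrative sketch of the capacityProvider, weight, and base fields described below, a strategy similar to the weighted example in this section might be supplied on create-service with the --capacity-provider-strategy option (all names and counts are hypothetical):

    # Illustrative only: run at least one task on FARGATE, then split the
    # remaining tasks 1:4 between FARGATE and FARGATE_SPOT.
    aws ecs create-service \
      --cluster my-cluster \
      --service-name my-service \
      --task-definition my-task:1 \
      --desired-count 10 \
      --capacity-provider-strategy \
          capacityProvider=FARGATE,weight=1,base=1 \
          capacityProvider=FARGATE_SPOT,weight=4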
capacityProvider -> (string)
The short name of the capacity provider.
weight -> (integer)
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero, and any capacity providers with a weight of 0 can’t be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers that both have a weight of 1. When the base is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that’s run using capacityProviderA, four tasks would use capacityProviderB.
base -> (integer)
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
platformVersion -> (string)
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide .
platformFamily -> (string)
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration -> (structure)
The network configuration for the task set.
awsvpcConfiguration -> (structure)
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
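The same awsvpcConfiguration shape is supplied on create-service with the --network-configuration option. A minimal sketch, using placeholder subnet and security group IDs, might look like this (the subnets, securityGroups, and assignPublicIp fields are described below):

    # Illustrative only: two subnets from the same VPC, one security group,
    # and no public IP on the task's elastic network interface.
    aws ecs create-service \
      --cluster my-cluster \
      --service-name my-service \
      --task-definition my-task:1 \
      --desired-count 2 \
      --network-configuration \
          'awsvpcConfiguration={subnets=[subnet-0abc,subnet-0def],securityGroups=[sg-0123],assignPublicIp=DISABLED}'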
subnets -> (list)
The IDs of the subnets associated with the task or service. There’s a limit of 16 subnets that can be specified per AwsVpcConfiguration.
Note
All specified subnets must be from the same VPC.
(string)
securityGroups -> (list)
The IDs of the security groups associated with the task or service. If you don’t specify a security group, the default security group for the VPC is used. There’s a limit of 5 security groups that can be specified per AwsVpcConfiguration.
Note
All specified security groups must be from the same VPC.
(string)
assignPublicIp -> (string)
Whether the task’s elastic network interface receives a public IP address. The default value is DISABLED.
loadBalancers -> (list)
Details on the load balancers that are used with a task set.
(structure)
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
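The same load balancer structure is supplied on create-service with the --load-balancers option. A hedged sketch, using a placeholder target group ARN, container name, and port (the targetGroupArn, containerName, and containerPort fields are described below):

    # Illustrative only: register the "web" container on port 80 with an
    # Application Load Balancer target group.
    aws ecs create-service \
      --cluster my-cluster \
      --service-name my-service \
      --task-definition my-task:1 \
      --desired-count 2 \
      --load-balancers \
          targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-targets/abcdef0123456789,containerName=web,containerPort=80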
targetGroupArn -> (string)
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you’re using a Classic Load Balancer, omit the target group ARN.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering Multiple Target Groups with a Service in the Amazon Elastic Container Service Developer Guide .
For services using the CODE_DEPLOY deployment controller, you’re required to define two target groups for the load balancer. For more information, see Blue/Green Deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide .
Warning
If your service’s task definition uses the awsvpc network mode, you must choose ip as the target type, not instance. Do this when creating your target groups because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
loadBalancerName -> (string)
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer, omit the load balancer name parameter.
containerName -> (string)
The name of the container (as it appears in a container definition) to associate with the load balancer.
containerPort -> (integer)
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they’re launched on must allow ingress traffic on the hostPort of the port mapping.
serviceRegistries -> (list)
The details for the service discovery registries to assign to this task set. For more information, see Service discovery .
(structure)
The details for the service registry.
registryArn -> (string)
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService .
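The same structure is supplied on create-service with the --service-registries option. A hedged sketch, using a placeholder Cloud Map service ARN (the port, containerName, and containerPort fields described below are only needed for SRV records):

    # Illustrative only: associate the service with an existing Cloud Map service.
    aws ecs create-service \
      --cluster my-cluster \
      --service-name my-service \
      --task-definition my-task:1 \
      --desired-count 2 \
      --service-registries \
          registryArn=arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef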
port -> (integer)
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName -> (string)
The container name value to be used for your service discovery service. It’s already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can’t specify both.
containerPort -> (integer)
The port value to be used for your service discovery service. It’s already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can’t specify both.
scale -> (structure)
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value -> (double)
The value, specified as a percent total of a service’s desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit -> (string)
The unit of measure for the scale value.
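For externally managed deployments, the scale of an existing task set might be adjusted with update-task-set, as in this hedged sketch (cluster, service, and task set identifiers are placeholders):

    # Illustrative only: run 50% of the service's desiredCount in this task set.
    aws ecs update-task-set \
      --cluster my-cluster \
      --service my-service \
      --task-set ecs-svc/1234567890123456789 \
      --scale value=50,unit=PERCENT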
stabilityStatus -> (string)
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren’t met, the stability status returns STABILIZING.
stabilityStatusAt -> (timestamp)
The Unix timestamp for the time when the task set stability status was retrieved.
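A hedged sketch of checking these fields for an existing service (names are placeholders; the --query option is a standard JMESPath filter):

    # Illustrative only: list each task set's status and stability.
    aws ecs describe-task-sets \
      --cluster my-cluster \
      --service my-service \
      --query 'taskSets[].{id:id,status:status,stability:stabilityStatus,running:runningCount,desired:computedDesiredCount}'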
tags -> (list)
The metadata that you apply to the task set to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, because it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(structure)
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, because it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key -> (string)
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value -> (string)
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
deployments -> (list)
The current state of deployments for the service.
(structure)
The details of an Amazon ECS service deployment. This is used only when a service uses the ECS deployment controller type.
id -> (string)
The ID of the deployment.
status -> (string)
The status of the deployment. The following describes each state.
PRIMARY
The most recent deployment of a service.
ACTIVE
A service deployment that still has running tasks, but is in the process of being replaced with a new PRIMARY deployment.
INACTIVE
A deployment that has been completely replaced.
taskDefinition -> (string)
The most recent task definition that was specified for the tasks in the service to use.
desiredCount -> (integer)
The most recent desired count of tasks that was specified for the service to deploy or maintain.
pendingCount -> (integer)
The number of tasks in the deployment that are in the PENDING status.
runningCount -> (integer)
The number of tasks in the deployment that are in the RUNNING status.
failedTasks -> (integer)
The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can’t launch the task, the task doesn’t transition to a RUNNING state, or if it fails any of its defined health checks and is stopped.
Note
Once a service deployment has one or more successfully running tasks, the failed task count resets to zero and stops being evaluated.
createdAt -> (timestamp)
The Unix timestamp for the time when the service deployment was created.
updatedAt -> (timestamp)
The Unix timestamp for the time when the service deployment was last updated.
capacityProviderStrategy -> (list)
The capacity provider strategy that the deployment is using.
(structure)
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider -> (string)
The short name of the capacity provider.
weight -> (integer)
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero, and any capacity providers with a weight of 0 can’t be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers that both have a weight of 1. When the base is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that’s run using capacityProviderA, four tasks would use capacityProviderB.
base -> (integer)
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
launchType -> (string)
The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide .
platformVersion -> (string)
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn’t specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide .
platformFamily -> (string)
The operating system that your tasks in the service are running on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
networkConfiguration -> (structure)
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration -> (structure)
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
subnets -> (list)
The IDs of the subnets associated with the task or service. There’s a limit of 16 subnets that can be specified per AwsVpcConfiguration.
Note
All specified subnets must be from the same VPC.
(string)
securityGroups -> (list)
The IDs of the security groups associated with the task or service. If you don’t specify a security group, the default security group for the VPC is used. There’s a limit of 5 security groups that can be specified per AwsVpcConfiguration.
Note
All specified security groups must be from the same VPC.
(string)
assignPublicIp -> (string)
Whether the task’s elastic network interface receives a public IP address. The default value is DISABLED.
rolloutState -> (string)
Note
The rolloutState of a service is only returned for services that use the rolling update (ECS) deployment type that aren’t behind a Classic Load Balancer.
The rollout state of the deployment. When a service deployment is started, it begins in an IN_PROGRESS state. When the service reaches a steady state, the deployment transitions to a COMPLETED state. If the service fails to reach a steady state and the circuit breaker is enabled, the deployment transitions to a FAILED state. A deployment in a FAILED state doesn’t launch any new tasks. For more information, see DeploymentCircuitBreaker .
rolloutStateReason -> (string)
A description of the rollout state of a deployment.
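A sketch of inspecting deployment rollout progress for an existing service (names are placeholders):

    # Illustrative only: show the rollout state of each deployment.
    aws ecs describe-services \
      --cluster my-cluster \
      --services my-service \
      --query 'services[0].deployments[].{id:id,status:status,rolloutState:rolloutState,reason:rolloutStateReason,running:runningCount,pending:pendingCount}'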
roleArn -> (string)
The ARN of the IAM role that’s associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer.
events -> (list)
The event stream for your service. A maximum of 100 of the latest events are displayed.
(structure)
The details for an event that’s associated with a service.
id -> (string)
The ID string for the event.
createdAt -> (timestamp)
The Unix timestamp for the time when the event was triggered.
message -> (string)
The event message.
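The most recent events might be inspected with a query like this sketch (names are placeholders):

    # Illustrative only: print the five most recent service event messages.
    aws ecs describe-services \
      --cluster my-cluster \
      --services my-service \
      --query 'services[0].events[:5].[createdAt,message]' \
      --output table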
createdAt -> (timestamp)
The Unix timestamp for the time when the service was created.
placementConstraints -> (list)
The placement constraints for the tasks in the service.
(structure)
An object representing a constraint on task placement. For more information, see Task Placement Constraints in the Amazon Elastic Container Service Developer Guide .
Note
If you’re using the Fargate launch type, task placement constraints aren’t supported.
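As a sketch of the type and expression fields described below, constraints might be supplied on create-service for a service that uses the EC2 launch type (the cluster query expression is illustrative):

    # Illustrative only: place tasks only on t2 instances and never co-locate
    # two tasks from the group on the same container instance.
    aws ecs create-service \
      --cluster my-cluster \
      --service-name my-service \
      --task-definition my-task:1 \
      --desired-count 2 \
      --placement-constraints '[{"type":"memberOf","expression":"attribute:ecs.instance-type =~ t2.*"},{"type":"distinctInstance"}]'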
type -> (string)
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression -> (string)
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can’t specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide .
placementStrategy -> (list)
The placement strategy that determines how tasks for the service are placed.
(structure)
The task placement strategy for a task or service. For more information, see Task Placement Strategies in the Amazon Elastic Container Service Developer Guide .
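As a sketch of the type and field values described below, a strategy might be supplied on create-service like this (names are illustrative):

    # Illustrative only: spread tasks across Availability Zones first, then
    # binpack on memory within each zone.
    aws ecs create-service \
      --cluster my-cluster \
      --service-name my-service \
      --task-definition my-task:1 \
      --desired-count 4 \
      --placement-strategy \
          type=spread,field=attribute:ecs.availability-zone \
          type=binpack,field=memory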
type -> (string)
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that’s specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field -> (string)
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that’s applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
networkConfiguration -> (structure)
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration -> (structure)
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
subnets -> (list)
The IDs of the subnets associated with the task or service. There’s a limit of 16 subnets that can be specified per AwsVpcConfiguration.
Note
All specified subnets must be from the same VPC.
(string)
securityGroups -> (list)
The IDs of the security groups associated with the task or service. If you don’t specify a security group, the default security group for the VPC is used. There’s a limit of 5 security groups that can be specified per AwsVpcConfiguration.
Note
All specified security groups must be from the same VPC.
(string)
assignPublicIp -> (string)
Whether the task’s elastic network interface receives a public IP address. The default value is DISABLED.
healthCheckGracePeriodSeconds -> (integer)
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started.
schedulingStrategy -> (string)
The scheduling strategy to use for the service. For more information, see Services .
There are two service scheduler strategies available.
REPLICA
- The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON
- The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don’t meet the placement constraints.
Note
Fargate tasks don’t support the DAEMON scheduling strategy.
deploymentController -> (structure)
The deployment controller type the service is using. When using the DescribeServices API, this field is omitted if the service uses the ECS deployment controller type.
type -> (string)
The deployment controller type to use.
There are three deployment controller types available:
ECS
The rolling update (ECS) deployment type involves replacing the current running version of the container with the latest version. The number of containers Amazon ECS adds or removes from the service during a rolling update is controlled by adjusting the minimum and maximum number of healthy tasks allowed during a service deployment, as specified in the DeploymentConfiguration .
CODE_DEPLOY
The blue/green (CODE_DEPLOY) deployment type uses the blue/green deployment model powered by CodeDeploy, which allows you to verify a new deployment of a service before sending production traffic to it.
EXTERNAL
The external (EXTERNAL) deployment type enables you to use any third-party deployment controller for full control over the deployment process for an Amazon ECS service.
tags -> (list)
The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, because it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(structure)
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, because it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key -> (string)
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value -> (string)
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
createdBy -> (string)
The principal that created the service.
enableECSManagedTags -> (boolean)
Determines whether to enable Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide .
propagateTags -> (string)
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren’t propagated.
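A hedged sketch of turning on managed tags and tag propagation when creating a service (the tag key and value are illustrative):

    # Illustrative only: let Amazon ECS add its managed tags to tasks and copy
    # the service's own tags onto each task it launches.
    aws ecs create-service \
      --cluster my-cluster \
      --service-name my-service \
      --task-definition my-task:1 \
      --desired-count 2 \
      --enable-ecs-managed-tags \
      --propagate-tags SERVICE \
      --tags key=environment,value=production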
enableExecuteCommand -> (boolean)
Determines whether the execute command functionality is enabled for the service. If true, the execute command functionality is enabled for all containers in tasks as part of the service.
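If the service was created with the --enable-execute-command flag (and the task role and SSM prerequisites are in place), a shell might be opened in a running container with a sketch like this (the cluster, task ID, and container name are placeholders):

    # Illustrative only: open an interactive shell in the "web" container.
    aws ecs execute-command \
      --cluster my-cluster \
      --task 0123456789abcdef0123456789abcdef \
      --container web \
      --interactive \
      --command "/bin/sh"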