[ aws . application-autoscaling ]

put-scaling-policy

Description

Creates or updates a scaling policy for an Application Auto Scaling scalable target.

Each scalable target is identified by a service namespace, resource ID, and scalable dimension. A scaling policy applies to the scalable target identified by those three attributes. You cannot create a scaling policy until you have registered the resource as a scalable target.
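
If the resource is not yet registered as a scalable target, you can register it first with the register-scalable-target command. The following is a minimal sketch; the ECS service name and capacity limits are illustrative:

aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/default/sample-webapp \
    --min-capacity 1 \
    --max-capacity 10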

Multiple scaling policies can be in force at the same time for the same scalable target. You can have one or more target tracking scaling policies, one or more step scaling policies, or both. However, there is a chance that multiple policies could conflict, instructing the scalable target to scale out or in at the same time. Application Auto Scaling gives precedence to the policy that provides the largest capacity for both scale out and scale in. For example, if one policy increases capacity by 3, another policy increases capacity by 200 percent, and the current capacity is 10, Application Auto Scaling uses the policy with the highest calculated capacity (200% of 10 = 20) and scales out to 30.

We recommend caution, however, when using target tracking scaling policies with step scaling policies because conflicts between these policies can cause undesirable behavior. For example, if the step scaling policy initiates a scale-in activity before the target tracking policy is ready to scale in, the scale-in activity will not be blocked. After the scale-in activity completes, the target tracking policy could instruct the scalable target to scale out again.

For more information, see Target tracking scaling policies and Step scaling policies in the Application Auto Scaling User Guide .

Note

If a scalable target is deregistered, the scalable target is no longer available to execute scaling policies. Any scaling policies that were specified for the scalable target are deleted.

See also: AWS API Documentation

Synopsis

  put-scaling-policy
--policy-name <value>
--service-namespace <value>
--resource-id <value>
--scalable-dimension <value>
[--policy-type <value>]
[--step-scaling-policy-configuration <value>]
[--target-tracking-scaling-policy-configuration <value>]
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
[--debug]
[--endpoint-url <value>]
[--no-verify-ssl]
[--no-paginate]
[--output <value>]
[--query <value>]
[--profile <value>]
[--region <value>]
[--version <value>]
[--color <value>]
[--no-sign-request]
[--ca-bundle <value>]
[--cli-read-timeout <value>]
[--cli-connect-timeout <value>]
[--cli-binary-format <value>]
[--no-cli-pager]
[--cli-auto-prompt]
[--no-cli-auto-prompt]

Options

--policy-name (string)

The name of the scaling policy.

You cannot change the name of a scaling policy, but you can delete the original scaling policy and create a new scaling policy with the same settings and a different name.

--service-namespace (string)

The namespace of the Amazon Web Services service that provides the resource. For a resource provided by your own application or service, use custom-resource instead.

Possible values:

  • ecs

  • elasticmapreduce

  • ec2

  • appstream

  • dynamodb

  • rds

  • sagemaker

  • custom-resource

  • comprehend

  • lambda

  • cassandra

  • kafka

  • elasticache

  • neptune

--resource-id (string)

The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.

  • ECS service - The resource type is service and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp .

  • Spot Fleet - The resource type is spot-fleet-request and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE .

  • EMR cluster - The resource type is instancegroup and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0 .

  • AppStream 2.0 fleet - The resource type is fleet and the unique identifier is the fleet name. Example: fleet/sample-fleet .

  • DynamoDB table - The resource type is table and the unique identifier is the table name. Example: table/my-table .

  • DynamoDB global secondary index - The resource type is index and the unique identifier is the index name. Example: table/my-table/index/my-table-index .

  • Aurora DB cluster - The resource type is cluster and the unique identifier is the cluster name. Example: cluster:my-db-cluster .

  • SageMaker endpoint variant - The resource type is variant and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering .

  • Custom resources are not supported with a resource type. This parameter must specify the OutputValue from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository .

  • Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE .

  • Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE .

  • Lambda provisioned concurrency - The resource type is function and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST . Example: function:my-function:prod or function:my-function:1 .

  • Amazon Keyspaces table - The resource type is table and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable .

  • Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5 .

  • Amazon ElastiCache replication group - The resource type is replication-group and the unique identifier is the replication group name. Example: replication-group/mycluster .

  • Neptune cluster - The resource type is cluster and the unique identifier is the cluster name. Example: cluster:mycluster .

--scalable-dimension (string)

The scalable dimension. This string consists of the service namespace, resource type, and scaling property.

  • ecs:service:DesiredCount - The desired task count of an ECS service.

  • elasticmapreduce:instancegroup:InstanceCount - The instance count of an EMR Instance Group.

  • ec2:spot-fleet-request:TargetCapacity - The target capacity of a Spot Fleet.

  • appstream:fleet:DesiredCapacity - The desired capacity of an AppStream 2.0 fleet.

  • dynamodb:table:ReadCapacityUnits - The provisioned read capacity for a DynamoDB table.

  • dynamodb:table:WriteCapacityUnits - The provisioned write capacity for a DynamoDB table.

  • dynamodb:index:ReadCapacityUnits - The provisioned read capacity for a DynamoDB global secondary index.

  • dynamodb:index:WriteCapacityUnits - The provisioned write capacity for a DynamoDB global secondary index.

  • rds:cluster:ReadReplicaCount - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.

  • sagemaker:variant:DesiredInstanceCount - The number of EC2 instances for a SageMaker model endpoint variant.

  • custom-resource:ResourceType:Property - The scalable dimension for a custom resource provided by your own application or service.

  • comprehend:document-classifier-endpoint:DesiredInferenceUnits - The number of inference units for an Amazon Comprehend document classification endpoint.

  • comprehend:entity-recognizer-endpoint:DesiredInferenceUnits - The number of inference units for an Amazon Comprehend entity recognizer endpoint.

  • lambda:function:ProvisionedConcurrency - The provisioned concurrency for a Lambda function.

  • cassandra:table:ReadCapacityUnits - The provisioned read capacity for an Amazon Keyspaces table.

  • cassandra:table:WriteCapacityUnits - The provisioned write capacity for an Amazon Keyspaces table.

  • kafka:broker-storage:VolumeSize - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.

  • elasticache:replication-group:NodeGroups - The number of node groups for an Amazon ElastiCache replication group.

  • elasticache:replication-group:Replicas - The number of replicas per node group for an Amazon ElastiCache replication group.

  • neptune:cluster:ReadReplicaCount - The count of read replicas in an Amazon Neptune DB cluster.

Possible values:

  • ecs:service:DesiredCount

  • ec2:spot-fleet-request:TargetCapacity

  • elasticmapreduce:instancegroup:InstanceCount

  • appstream:fleet:DesiredCapacity

  • dynamodb:table:ReadCapacityUnits

  • dynamodb:table:WriteCapacityUnits

  • dynamodb:index:ReadCapacityUnits

  • dynamodb:index:WriteCapacityUnits

  • rds:cluster:ReadReplicaCount

  • sagemaker:variant:DesiredInstanceCount

  • custom-resource:ResourceType:Property

  • comprehend:document-classifier-endpoint:DesiredInferenceUnits

  • comprehend:entity-recognizer-endpoint:DesiredInferenceUnits

  • lambda:function:ProvisionedConcurrency

  • cassandra:table:ReadCapacityUnits

  • cassandra:table:WriteCapacityUnits

  • kafka:broker-storage:VolumeSize

  • elasticache:replication-group:NodeGroups

  • elasticache:replication-group:Replicas

  • neptune:cluster:ReadReplicaCount

--policy-type (string)

The scaling policy type. This parameter is required if you are creating a scaling policy.

The following policy types are supported:

  • TargetTrackingScaling - Not supported for Amazon EMR.

  • StepScaling - Not supported for DynamoDB, Amazon Comprehend, Lambda, Amazon Keyspaces, Amazon MSK, Amazon ElastiCache, or Neptune.

For more information, see Target tracking scaling policies and Step scaling policies in the Application Auto Scaling User Guide .

Possible values:

  • StepScaling

  • TargetTrackingScaling

--step-scaling-policy-configuration (structure)

A step scaling policy.

This parameter is required if you are creating a policy and the policy type is StepScaling .

AdjustmentType -> (string)

Specifies how the ScalingAdjustment value in a StepAdjustment is interpreted (for example, an absolute number or a percentage). The valid values are ChangeInCapacity , ExactCapacity , and PercentChangeInCapacity .

AdjustmentType is required if you are adding a new step scaling policy configuration.

StepAdjustments -> (list)

A set of adjustments that enable you to scale based on the size of the alarm breach.

At least one step adjustment is required if you are adding a new step scaling policy configuration.

(structure)

Represents a step adjustment for a StepScalingPolicyConfiguration . Describes an adjustment based on the difference between the value of the aggregated CloudWatch metric and the breach threshold that you’ve defined for the alarm.

For the following examples, suppose that you have an alarm with a breach threshold of 50:

  • To trigger the adjustment when the metric is greater than or equal to 50 and less than 60, specify a lower bound of 0 and an upper bound of 10.

  • To trigger the adjustment when the metric is greater than 40 and less than or equal to 50, specify a lower bound of -10 and an upper bound of 0.

There are a few rules for the step adjustments for your step policy:

  • The ranges of your step adjustments can’t overlap or have a gap.

  • At most one step adjustment can have a null lower bound. If one step adjustment has a negative lower bound, then there must be a step adjustment with a null lower bound.

  • At most one step adjustment can have a null upper bound. If one step adjustment has a positive upper bound, then there must be a step adjustment with a null upper bound.

  • The upper and lower bound can’t be null in the same step adjustment.

MetricIntervalLowerBound -> (double)

The lower bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the lower bound is inclusive (the metric must be greater than or equal to the threshold plus the lower bound). Otherwise, it is exclusive (the metric must be greater than the threshold plus the lower bound). A null value indicates negative infinity.

MetricIntervalUpperBound -> (double)

The upper bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the upper bound is exclusive (the metric must be less than the threshold plus the upper bound). Otherwise, it is inclusive (the metric must be less than or equal to the threshold plus the upper bound). A null value indicates positive infinity.

The upper bound must be greater than the lower bound.

ScalingAdjustment -> (integer)

The amount by which to scale, based on the specified adjustment type. A positive value adds to the current capacity while a negative number removes from the current capacity. For exact capacity, you must specify a positive value.

MinAdjustmentMagnitude -> (integer)

The minimum value to scale by when the adjustment type is PercentChangeInCapacity . For example, suppose that you create a step scaling policy to scale out an Amazon ECS service by 25 percent and you specify a MinAdjustmentMagnitude of 2. If the service has 4 tasks and the scaling policy is performed, 25 percent of 4 is 1. However, because you specified a MinAdjustmentMagnitude of 2, Application Auto Scaling scales out the service by 2 tasks.

Cooldown -> (integer)

The amount of time, in seconds, to wait for a previous scaling activity to take effect.

With scale-out policies, the intention is to continuously (but not excessively) scale out. After Application Auto Scaling successfully scales out using a step scaling policy, it starts to calculate the cooldown time. The scaling policy won’t increase the desired capacity again unless either a larger scale out is triggered or the cooldown period ends. While the cooldown period is in effect, capacity added by the initiating scale-out activity is calculated as part of the desired capacity for the next scale-out activity. For example, when an alarm triggers a step scaling policy to increase the capacity by 2, the scaling activity completes successfully, and a cooldown period starts. If the alarm triggers again during the cooldown period but at a more aggressive step adjustment of 3, the previous increase of 2 is considered part of the current capacity. Therefore, only 1 is added to the capacity.

With scale-in policies, the intention is to scale in conservatively to protect your application’s availability, so scale-in activities are blocked until the cooldown period has expired. However, if another alarm triggers a scale-out activity during the cooldown period after a scale-in activity, Application Auto Scaling scales out the target immediately. In this case, the cooldown period for the scale-in activity stops and doesn’t complete.

Application Auto Scaling provides a default value of 600 for Amazon ElastiCache replication groups and a default value of 300 for the following scalable targets:

  • AppStream 2.0 fleets

  • Aurora DB clusters

  • ECS services

  • EMR clusters

  • Neptune clusters

  • SageMaker endpoint variants

  • Spot Fleets

  • Custom resources

For all other scalable targets, the default value is 0:

  • Amazon Comprehend document classification and entity recognizer endpoints

  • DynamoDB tables and global secondary indexes

  • Amazon Keyspaces tables

  • Lambda provisioned concurrency

  • Amazon MSK broker storage

MetricAggregationType -> (string)

The aggregation type for the CloudWatch metrics. Valid values are Minimum , Maximum , and Average . If the aggregation type is null, the value is treated as Average .

Shorthand Syntax:

AdjustmentType=string,StepAdjustments=[{MetricIntervalLowerBound=double,MetricIntervalUpperBound=double,ScalingAdjustment=integer},{MetricIntervalLowerBound=double,MetricIntervalUpperBound=double,ScalingAdjustment=integer}],MinAdjustmentMagnitude=integer,Cooldown=integer,MetricAggregationType=string

JSON Syntax:

{
  "AdjustmentType": "ChangeInCapacity"|"PercentChangeInCapacity"|"ExactCapacity",
  "StepAdjustments": [
    {
      "MetricIntervalLowerBound": double,
      "MetricIntervalUpperBound": double,
      "ScalingAdjustment": integer
    }
    ...
  ],
  "MinAdjustmentMagnitude": integer,
  "Cooldown": integer,
  "MetricAggregationType": "Average"|"Minimum"|"Maximum"
}
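
As a sketch of how these fields fit together, the following hypothetical command attaches a step scaling policy to the ECS service used in the examples below; the policy name, step adjustments, and cooldown values are illustrative:

aws application-autoscaling put-scaling-policy \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/default/web-app \
    --policy-name cpu-step-scaling-policy \
    --policy-type StepScaling \
    --step-scaling-policy-configuration file://step-config.json

where step-config.json, written for an alarm with a breach threshold of 50, contains:

{
    "AdjustmentType": "ChangeInCapacity",
    "StepAdjustments": [
        {
            "MetricIntervalLowerBound": 0,
            "MetricIntervalUpperBound": 10,
            "ScalingAdjustment": 1
        },
        {
            "MetricIntervalLowerBound": 10,
            "ScalingAdjustment": 2
        }
    ],
    "Cooldown": 60,
    "MetricAggregationType": "Average"
}

This adds 1 to the current capacity when the metric is at or above 50 and below 60, and 2 when it is 60 or higher. The same configuration could also be passed inline (quoted for the shell) using the shorthand syntax shown above.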

--target-tracking-scaling-policy-configuration (structure)

A target tracking scaling policy. Includes support for predefined or customized metrics.

This parameter is required if you are creating a policy and the policy type is TargetTrackingScaling .

TargetValue -> (double)

The target value for the metric. Although this property accepts numbers of type Double, it won’t accept values that are either too small or too large. Values must be in the range of -2^360 to 2^360. The value must be a valid number based on the choice of metric. For example, if the metric is CPU utilization, then the target value is a percent value that represents how much of the CPU can be used before scaling out.

Note

If the scaling policy specifies the ALBRequestCountPerTarget predefined metric, specify the target utilization as the optimal average request count per target during any one-minute interval.

PredefinedMetricSpecification -> (structure)

A predefined metric. You can specify either a predefined metric or a customized metric.

PredefinedMetricType -> (string)

The metric type. The ALBRequestCountPerTarget metric type applies only to Spot Fleets and ECS services.

ResourceLabel -> (string)

Identifies the resource associated with the metric type. You can’t specify a resource label unless the metric type is ALBRequestCountPerTarget and there is a target group attached to the Spot Fleet or ECS service.

You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:

app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff .

Where:

  • app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN

  • targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.

To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
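
For example, you could retrieve the two ARNs with the AWS CLI and take their final portions; the load balancer and target group names here are illustrative:

aws elbv2 describe-load-balancers --names my-alb \
    --query 'LoadBalancers[0].LoadBalancerArn' --output text

aws elbv2 describe-target-groups --names my-alb-target-group \
    --query 'TargetGroups[0].TargetGroupArn' --output text

Join the app/<load-balancer-name>/<load-balancer-id> portion of the first result and the targetgroup/<target-group-name>/<target-group-id> portion of the second with a forward slash (/) to form the resource label.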

CustomizedMetricSpecification -> (structure)

A customized metric. You can specify either a predefined metric or a customized metric.

MetricName -> (string)

The name of the metric. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics .

Namespace -> (string)

The namespace of the metric.

Dimensions -> (list)

The dimensions of the metric.

Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.

(structure)

Describes the dimension names and values associated with a metric.

Name -> (string)

The name of the dimension.

Value -> (string)

The value of the dimension.

Statistic -> (string)

The statistic of the metric.

Unit -> (string)

The unit of the metric. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
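
As noted under MetricName, you can inspect the available metric names, namespaces, and dimensions with the CloudWatch ListMetrics operation before defining a customized metric. A minimal CLI sketch, with an illustrative namespace:

aws cloudwatch list-metrics --namespace "MyNamespace"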

ScaleOutCooldown -> (integer)

The amount of time, in seconds, to wait for a previous scale-out activity to take effect.

With the scale-out cooldown period , the intention is to continuously (but not excessively) scale out. After Application Auto Scaling successfully scales out using a target tracking scaling policy, it starts to calculate the cooldown time. The scaling policy won’t increase the desired capacity again unless either a larger scale out is triggered or the cooldown period ends. While the cooldown period is in effect, the capacity added by the initiating scale-out activity is calculated as part of the desired capacity for the next scale-out activity.

Application Auto Scaling provides a default value of 600 for Amazon ElastiCache replication groups and a default value of 300 for the following scalable targets:

  • AppStream 2.0 fleets

  • Aurora DB clusters

  • ECS services

  • EMR clusters

  • Neptune clusters

  • SageMaker endpoint variants

  • Spot Fleets

  • Custom resources

For all other scalable targets, the default value is 0:

  • Amazon Comprehend document classification and entity recognizer endpoints

  • DynamoDB tables and global secondary indexes

  • Amazon Keyspaces tables

  • Lambda provisioned concurrency

  • Amazon MSK broker storage

ScaleInCooldown -> (integer)

The amount of time, in seconds, after a scale-in activity completes before another scale-in activity can start.

With the scale-in cooldown period , the intention is to scale in conservatively to protect your application’s availability, so scale-in activities are blocked until the cooldown period has expired. However, if another alarm triggers a scale-out activity during the scale-in cooldown period, Application Auto Scaling scales out the target immediately. In this case, the scale-in cooldown period stops and doesn’t complete.

Application Auto Scaling provides a default value of 600 for Amazon ElastiCache replication groups and a default value of 300 for the following scalable targets:

  • AppStream 2.0 fleets

  • Aurora DB clusters

  • ECS services

  • EMR clusters

  • Neptune clusters

  • SageMaker endpoint variants

  • Spot Fleets

  • Custom resources

For all other scalable targets, the default value is 0:

  • Amazon Comprehend document classification and entity recognizer endpoints

  • DynamoDB tables and global secondary indexes

  • Amazon Keyspaces tables

  • Lambda provisioned concurrency

  • Amazon MSK broker storage

DisableScaleIn -> (boolean)

Indicates whether scale in by the target tracking scaling policy is disabled. If the value is true , scale in is disabled and the target tracking scaling policy won’t remove capacity from the scalable target. Otherwise, scale in is enabled and the target tracking scaling policy can remove capacity from the scalable target. The default value is false .

JSON Syntax:

{
  "TargetValue": double,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "DynamoDBReadCapacityUtilization"|"DynamoDBWriteCapacityUtilization"|"ALBRequestCountPerTarget"|"RDSReaderAverageCPUUtilization"|"RDSReaderAverageDatabaseConnections"|"EC2SpotFleetRequestAverageCPUUtilization"|"EC2SpotFleetRequestAverageNetworkIn"|"EC2SpotFleetRequestAverageNetworkOut"|"SageMakerVariantInvocationsPerInstance"|"ECSServiceAverageCPUUtilization"|"ECSServiceAverageMemoryUtilization"|"AppStreamAverageCapacityUtilization"|"ComprehendInferenceUtilization"|"LambdaProvisionedConcurrencyUtilization"|"CassandraReadCapacityUtilization"|"CassandraWriteCapacityUtilization"|"KafkaBrokerStorageUtilization"|"ElastiCachePrimaryEngineCPUUtilization"|"ElastiCacheReplicaEngineCPUUtilization"|"ElastiCacheDatabaseMemoryUsageCountedForEvictPercentage"|"NeptuneReaderAverageCPUUtilization",
    "ResourceLabel": "string"
  },
  "CustomizedMetricSpecification": {
    "MetricName": "string",
    "Namespace": "string",
    "Dimensions": [
      {
        "Name": "string",
        "Value": "string"
      }
      ...
    ],
    "Statistic": "Average"|"Minimum"|"Maximum"|"SampleCount"|"Sum",
    "Unit": "string"
  },
  "ScaleOutCooldown": integer,
  "ScaleInCooldown": integer,
  "DisableScaleIn": true|false
}

--cli-input-json | --cli-input-yaml (string) Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.

--generate-cli-skeleton (string) Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command. The generated JSON skeleton is not stable between versions of the AWS CLI and there are no backwards compatibility guarantees in the JSON skeleton generated.
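
For example, you could generate an input skeleton for this command, edit it, and then supply it back with --cli-input-json; the file name is illustrative:

aws application-autoscaling put-scaling-policy --generate-cli-skeleton input > policy-input.json

aws application-autoscaling put-scaling-policy --cli-input-json file://policy-input.json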

Global Options

--debug (boolean)

Turn on debug logging.

--endpoint-url (string)

Override command’s default URL with the given URL.

--no-verify-ssl (boolean)

By default, the AWS CLI uses SSL when communicating with AWS services. For each SSL connection, the AWS CLI will verify SSL certificates. This option overrides the default behavior of verifying SSL certificates.

--no-paginate (boolean)

Disable automatic pagination.

--output (string)

The formatting style for command output.

  • json

  • text

  • table

  • yaml

  • yaml-stream

--query (string)

A JMESPath query to use in filtering the response data.

--profile (string)

Use a specific profile from your credential file.

--region (string)

The region to use. Overrides config/env settings.

--version (string)

Display the version of this tool.

--color (string)

Turn on/off color output.

  • on

  • off

  • auto

--no-sign-request (boolean)

Do not sign requests. Credentials will not be loaded if this argument is provided.

--ca-bundle (string)

The CA certificate bundle to use when verifying SSL certificates. Overrides config/env settings.

--cli-read-timeout (int)

The maximum socket read time in seconds. If the value is set to 0, the socket read will be blocking and will not time out. The default value is 60 seconds.

--cli-connect-timeout (int)

The maximum socket connect time in seconds. If the value is set to 0, the socket connect will be blocking and will not time out. The default value is 60 seconds.

--cli-binary-format (string)

The formatting style to be used for binary blobs. The default format is base64. The base64 format expects binary blobs to be provided as a base64 encoded string. The raw-in-base64-out format preserves compatibility with AWS CLI V1 behavior, where binary values must be passed literally. When providing contents from a file that map to a binary blob, fileb:// will always be treated as binary and use the file contents directly, regardless of the cli-binary-format setting. When using file://, the file contents will need to be properly formatted for the configured cli-binary-format.

  • base64

  • raw-in-base64-out

--no-cli-pager (boolean)

Disable cli pager for output.

--cli-auto-prompt (boolean)

Automatically prompt for CLI input parameters.

--no-cli-auto-prompt (boolean)

Disable the automatic prompt for CLI input parameters.

Examples

Note

To use the following examples, you must have the AWS CLI installed and configured. See the Getting started guide in the AWS CLI User Guide for more information.

Unless otherwise stated, all examples have unix-like quotation rules. These examples will need to be adapted to your terminal’s quoting rules. See Using quotation marks with strings in the AWS CLI User Guide .

Example 1: To apply a target tracking scaling policy with a predefined metric specification

The following put-scaling-policy example applies a target tracking scaling policy with a predefined metric specification to an Amazon ECS service called web-app in the default cluster. The policy keeps the average CPU utilization of the service at 75 percent, with scale-out and scale-in cooldown periods of 60 seconds. The output contains the ARNs and names of the two CloudWatch alarms created on your behalf.

aws application-autoscaling put-scaling-policy --service-namespace ecs \
--scalable-dimension ecs:service:DesiredCount \
--resource-id service/default/web-app \
--policy-name cpu75-target-tracking-scaling-policy --policy-type TargetTrackingScaling \
--target-tracking-scaling-policy-configuration file://config.json

This example assumes that you have a config.json file in the current directory with the following contents:

{
    "TargetValue": 75.0,
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
    },
    "ScaleOutCooldown": 60,
    "ScaleInCooldown": 60
}

Output:

{
    "PolicyARN": "arn:aws:autoscaling:us-west-2:012345678910:scalingPolicy:6d8972f3-efc8-437c-92d1-6270f29a66e7:resource/ecs/service/default/web-app:policyName/cpu75-target-tracking-scaling-policy",
    "Alarms": [
        {
            "AlarmARN": "arn:aws:cloudwatch:us-west-2:012345678910:alarm:TargetTracking-service/default/web-app-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca",
            "AlarmName": "TargetTracking-service/default/web-app-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca"
        },
        {
            "AlarmARN": "arn:aws:cloudwatch:us-west-2:012345678910:alarm:TargetTracking-service/default/web-app-AlarmLow-1b437334-d19b-4a63-a812-6c67aaf2910d",
            "AlarmName": "TargetTracking-service/default/web-app-AlarmLow-1b437334-d19b-4a63-a812-6c67aaf2910d"
        }
    ]
}

Example 2: To apply a target tracking scaling policy with a customized metric specification

The following put-scaling-policy example applies a target tracking scaling policy with a customized metric specification to an Amazon ECS service called web-app in the default cluster. The policy keeps the average utilization of the service at 75 percent, with scale-out and scale-in cooldown periods of 60 seconds. The output contains the ARNs and names of the two CloudWatch alarms created on your behalf.

aws application-autoscaling put-scaling-policy --service-namespace ecs \
--scalable-dimension ecs:service:DesiredCount \
--resource-id service/default/web-app \
--policy-name cms75-target-tracking-scaling-policy \
--policy-type TargetTrackingScaling \
--target-tracking-scaling-policy-configuration file://config.json

This example assumes that you have a config.json file in the current directory with the following contents:

{
    "TargetValue": 75.0,
    "CustomizedMetricSpecification": {
        "MetricName": "MyUtilizationMetric",
        "Namespace": "MyNamespace",
        "Dimensions": [
            {
                "Name": "MyOptionalMetricDimensionName",
                "Value": "MyOptionalMetricDimensionValue"
            }
        ],
        "Statistic": "Average",
        "Unit": "Percent"
    },
    "ScaleOutCooldown": 60,
    "ScaleInCooldown": 60
}

Output:

{
    "PolicyARN": "arn:aws:autoscaling:us-west-2:012345678910:scalingPolicy: 8784a896-b2ba-47a1-b08c-27301cc499a1:resource/ecs/service/default/web-app:policyName/cms75-target-tracking-scaling-policy",
    "Alarms": [
        {
            "AlarmARN": "arn:aws:cloudwatch:us-west-2:012345678910:alarm:TargetTracking-service/default/web-app-AlarmHigh-9bc77b56-0571-4276-ba0f-d4178882e0a0",
            "AlarmName": "TargetTracking-service/default/web-app-AlarmHigh-9bc77b56-0571-4276-ba0f-d4178882e0a0"
        },
        {
            "AlarmARN": "arn:aws:cloudwatch:us-west-2:012345678910:alarm:TargetTracking-service/default/web-app-AlarmLow-9b6ad934-6d37-438e-9e05-02836ddcbdc4",
            "AlarmName": "TargetTracking-service/default/web-app-AlarmLow-9b6ad934-6d37-438e-9e05-02836ddcbdc4"
        }
    ]
}

Example 3: To apply a target tracking scaling policy for scale out only

The following put-scaling-policy example applies a target tracking scaling policy to an Amazon ECS service called web-app in the default cluster. The policy is used to scale out the ECS service when the RequestCountPerTarget metric from the Application Load Balancer exceeds the threshold. The output contains the ARN and name of the CloudWatch alarm created on your behalf.

aws application-autoscaling put-scaling-policy \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/default/web-app \
    --policy-name alb-scale-out-target-tracking-scaling-policy \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration file://config.json

Contents of config.json:

{
    "TargetValue": 1000.0,
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "ALBRequestCountPerTarget",
        "ResourceLabel": "app/EC2Co-EcsEl-1TKLTMITMM0EO/f37c06a68c1748aa/targetgroup/EC2Co-Defau-LDNM7Q3ZH1ZN/6d4ea56ca2d6a18d"
    },
    "ScaleOutCooldown": 60,
    "ScaleInCooldown": 60,
    "DisableScaleIn": true
}

Output:

{
    "PolicyARN": "arn:aws:autoscaling:us-west-2:123456789012:scalingPolicy:6d8972f3-efc8-437c-92d1-6270f29a66e7:resource/ecs/service/default/web-app:policyName/alb-scale-out-target-tracking-scaling-policy",
    "Alarms": [
        {
            "AlarmName": "TargetTracking-service/default/web-app-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca",
            "AlarmARN": "arn:aws:cloudwatch:us-west-2:123456789012:alarm:TargetTracking-service/default/web-app-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca"
        }
    ]
}

For more information, see Target Tracking Scaling Policies for Application Auto Scaling in the AWS Application Auto Scaling User Guide.
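
To confirm the resulting policy, you could then describe it; the following sketch uses the same ECS service as the examples above:

aws application-autoscaling describe-scaling-policies \
    --service-namespace ecs \
    --resource-id service/default/web-app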

Output

PolicyARN -> (string)

The Amazon Resource Name (ARN) of the resulting scaling policy.

Alarms -> (list)

The CloudWatch alarms created for the target tracking scaling policy.

(structure)

Represents a CloudWatch alarm associated with a scaling policy.

AlarmName -> (string)

The name of the alarm.

AlarmARN -> (string)

The Amazon Resource Name (ARN) of the alarm.