Creates or updates a scaling policy for an Application Auto Scaling scalable target.
Each scalable target is identified by a service namespace, resource ID, and scalable dimension. A scaling policy applies to the scalable target identified by those three attributes. You cannot create a scaling policy until you have registered the resource as a scalable target.
Multiple scaling policies can be in force at the same time for the same scalable target. You can have one or more target tracking scaling policies, one or more step scaling policies, or both. However, there is a chance that multiple policies could conflict, instructing the scalable target to scale out or in at the same time. Application Auto Scaling gives precedence to the policy that provides the largest capacity for both scale out and scale in. For example, if one policy increases capacity by 3, another policy increases capacity by 200 percent, and the current capacity is 10, Application Auto Scaling uses the policy with the highest calculated capacity (200% of 10 = 20) and scales out to 30.
We recommend caution, however, when using target tracking scaling policies with step scaling policies because conflicts between these policies can cause undesirable behavior. For example, if the step scaling policy initiates a scale-in activity before the target tracking policy is ready to scale in, the scale-in activity will not be blocked. After the scale-in activity completes, the target tracking policy could instruct the scalable target to scale out again.
For more information, see Target tracking scaling policies and Step scaling policies in the Application Auto Scaling User Guide.
Note
If a scalable target is deregistered, the scalable target is no longer available to execute scaling policies. Any scaling policies that were specified for the scalable target are deleted.
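For reference, a resource can be registered as a scalable target with the register-scalable-target command before you attach a policy to it. The following is a minimal sketch only; the cluster name, service name, and capacity limits are placeholder values:
aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/default/sample-webapp \
    --min-capacity 1 \
    --max-capacity 10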
See also: AWS API Documentation
See ‘aws help’ for descriptions of global parameters.
put-scaling-policy
--policy-name <value>
--service-namespace <value>
--resource-id <value>
--scalable-dimension <value>
[--policy-type <value>]
[--step-scaling-policy-configuration <value>]
[--target-tracking-scaling-policy-configuration <value>]
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
--policy-name
(string)
The name of the scaling policy.
--service-namespace
(string)
The namespace of the AWS service that provides the resource. For a resource provided by your own application or service, use custom-resource instead.
Possible values:
ecs
elasticmapreduce
ec2
appstream
dynamodb
rds
sagemaker
custom-resource
comprehend
lambda
cassandra
kafka
--resource-id
(string)
The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.
ECS service - The resource type is service and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp.
Spot Fleet request - The resource type is spot-fleet-request and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE.
EMR cluster - The resource type is instancegroup and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0.
AppStream 2.0 fleet - The resource type is fleet and the unique identifier is the fleet name. Example: fleet/sample-fleet.
DynamoDB table - The resource type is table and the unique identifier is the table name. Example: table/my-table.
DynamoDB global secondary index - The resource type is index and the unique identifier is the index name. Example: table/my-table/index/my-table-index.
Aurora DB cluster - The resource type is cluster and the unique identifier is the cluster name. Example: cluster:my-db-cluster.
Amazon SageMaker endpoint variant - The resource type is variant and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering.
Custom resources are not supported with a resource type. This parameter must specify the OutputValue from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.
Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE.
Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE.
Lambda provisioned concurrency - The resource type is function and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST. Example: function:my-function:prod or function:my-function:1.
Amazon Keyspaces table - The resource type is table and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable.
Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5.
--scalable-dimension
(string)
The scalable dimension. This string consists of the service namespace, resource type, and scaling property.
ecs:service:DesiredCount - The desired task count of an ECS service.
ec2:spot-fleet-request:TargetCapacity - The target capacity of a Spot Fleet request.
elasticmapreduce:instancegroup:InstanceCount - The instance count of an EMR Instance Group.
appstream:fleet:DesiredCapacity - The desired capacity of an AppStream 2.0 fleet.
dynamodb:table:ReadCapacityUnits - The provisioned read capacity for a DynamoDB table.
dynamodb:table:WriteCapacityUnits - The provisioned write capacity for a DynamoDB table.
dynamodb:index:ReadCapacityUnits - The provisioned read capacity for a DynamoDB global secondary index.
dynamodb:index:WriteCapacityUnits - The provisioned write capacity for a DynamoDB global secondary index.
rds:cluster:ReadReplicaCount - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.
sagemaker:variant:DesiredInstanceCount - The number of EC2 instances for an Amazon SageMaker model endpoint variant.
custom-resource:ResourceType:Property - The scalable dimension for a custom resource provided by your own application or service.
comprehend:document-classifier-endpoint:DesiredInferenceUnits - The number of inference units for an Amazon Comprehend document classification endpoint.
comprehend:entity-recognizer-endpoint:DesiredInferenceUnits - The number of inference units for an Amazon Comprehend entity recognizer endpoint.
lambda:function:ProvisionedConcurrency - The provisioned concurrency for a Lambda function.
cassandra:table:ReadCapacityUnits - The provisioned read capacity for an Amazon Keyspaces table.
cassandra:table:WriteCapacityUnits - The provisioned write capacity for an Amazon Keyspaces table.
kafka:broker-storage:VolumeSize - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.
Possible values:
ecs:service:DesiredCount
ec2:spot-fleet-request:TargetCapacity
elasticmapreduce:instancegroup:InstanceCount
appstream:fleet:DesiredCapacity
dynamodb:table:ReadCapacityUnits
dynamodb:table:WriteCapacityUnits
dynamodb:index:ReadCapacityUnits
dynamodb:index:WriteCapacityUnits
rds:cluster:ReadReplicaCount
sagemaker:variant:DesiredInstanceCount
custom-resource:ResourceType:Property
comprehend:document-classifier-endpoint:DesiredInferenceUnits
comprehend:entity-recognizer-endpoint:DesiredInferenceUnits
lambda:function:ProvisionedConcurrency
cassandra:table:ReadCapacityUnits
cassandra:table:WriteCapacityUnits
kafka:broker-storage:VolumeSize
--policy-type
(string)
The policy type. This parameter is required if you are creating a scaling policy.
The following policy types are supported:
TargetTrackingScaling - Not supported for Amazon EMR.
StepScaling - Not supported for DynamoDB, Amazon Comprehend, Lambda, Amazon Keyspaces (for Apache Cassandra), or Amazon MSK.
For more information, see Target tracking scaling policies and Step scaling policies in the Application Auto Scaling User Guide.
Possible values:
StepScaling
TargetTrackingScaling
--step-scaling-policy-configuration
(structure)
A step scaling policy.
This parameter is required if you are creating a policy and the policy type is StepScaling.
AdjustmentType -> (string)
Specifies how the ScalingAdjustment value in a StepAdjustment is interpreted (for example, an absolute number or a percentage). The valid values are ChangeInCapacity, ExactCapacity, and PercentChangeInCapacity.
AdjustmentType is required if you are adding a new step scaling policy configuration.
StepAdjustments -> (list)
A set of adjustments that enable you to scale based on the size of the alarm breach.
At least one step adjustment is required if you are adding a new step scaling policy configuration.
(structure)
Represents a step adjustment for a StepScalingPolicyConfiguration. Describes an adjustment based on the difference between the value of the aggregated CloudWatch metric and the breach threshold that you’ve defined for the alarm.
For the following examples, suppose that you have an alarm with a breach threshold of 50:
To trigger the adjustment when the metric is greater than or equal to 50 and less than 60, specify a lower bound of 0 and an upper bound of 10.
To trigger the adjustment when the metric is greater than 40 and less than or equal to 50, specify a lower bound of -10 and an upper bound of 0.
There are a few rules for the step adjustments for your step policy:
The ranges of your step adjustments can’t overlap or have a gap.
At most one step adjustment can have a null lower bound. If one step adjustment has a negative lower bound, then there must be a step adjustment with a null lower bound.
At most one step adjustment can have a null upper bound. If one step adjustment has a positive upper bound, then there must be a step adjustment with a null upper bound.
The upper and lower bound can’t be null in the same step adjustment.
MetricIntervalLowerBound -> (double)
The lower bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the lower bound is inclusive (the metric must be greater than or equal to the threshold plus the lower bound). Otherwise, it is exclusive (the metric must be greater than the threshold plus the lower bound). A null value indicates negative infinity.
MetricIntervalUpperBound -> (double)
The upper bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the upper bound is exclusive (the metric must be less than the threshold plus the upper bound). Otherwise, it is inclusive (the metric must be less than or equal to the threshold plus the upper bound). A null value indicates positive infinity.
The upper bound must be greater than the lower bound.
ScalingAdjustment -> (integer)
The amount by which to scale, based on the specified adjustment type. A positive value adds to the current capacity while a negative number removes from the current capacity. For exact capacity, you must specify a positive value.
MinAdjustmentMagnitude -> (integer)
The minimum value to scale by when the adjustment type is PercentChangeInCapacity. For example, suppose that you create a step scaling policy to scale out an Amazon ECS service by 25 percent and you specify a MinAdjustmentMagnitude of 2. If the service has 4 tasks and the scaling policy is performed, 25 percent of 4 is 1. However, because you specified a MinAdjustmentMagnitude of 2, Application Auto Scaling scales out the service by 2 tasks.
Cooldown -> (integer)
The amount of time, in seconds, to wait for a previous scaling activity to take effect.
With scale-out policies, the intention is to continuously (but not excessively) scale out. After Application Auto Scaling successfully scales out using a step scaling policy, it starts to calculate the cooldown time. The scaling policy won’t increase the desired capacity again unless either a larger scale out is triggered or the cooldown period ends. While the cooldown period is in effect, capacity added by the initiating scale-out activity is calculated as part of the desired capacity for the next scale-out activity. For example, when an alarm triggers a step scaling policy to increase the capacity by 2, the scaling activity completes successfully, and a cooldown period starts. If the alarm triggers again during the cooldown period but at a more aggressive step adjustment of 3, the previous increase of 2 is considered part of the current capacity. Therefore, only 1 is added to the capacity.
With scale-in policies, the intention is to scale in conservatively to protect your application’s availability, so scale-in activities are blocked until the cooldown period has expired. However, if another alarm triggers a scale-out activity during the cooldown period after a scale-in activity, Application Auto Scaling scales out the target immediately. In this case, the cooldown period for the scale-in activity stops and doesn’t complete.
Application Auto Scaling provides a default value of 300 for the following scalable targets:
ECS services
Spot Fleet requests
EMR clusters
AppStream 2.0 fleets
Aurora DB clusters
Amazon SageMaker endpoint variants
Custom resources
For all other scalable targets, the default value is 0:
DynamoDB tables
DynamoDB global secondary indexes
Amazon Comprehend document classification and entity recognizer endpoints
Lambda provisioned concurrency
Amazon Keyspaces tables
Amazon MSK broker storage
MetricAggregationType -> (string)
The aggregation type for the CloudWatch metrics. Valid values are Minimum, Maximum, and Average. If the aggregation type is null, the value is treated as Average.
Shorthand Syntax:
AdjustmentType=string,StepAdjustments=[{MetricIntervalLowerBound=double,MetricIntervalUpperBound=double,ScalingAdjustment=integer},{MetricIntervalLowerBound=double,MetricIntervalUpperBound=double,ScalingAdjustment=integer}],MinAdjustmentMagnitude=integer,Cooldown=integer,MetricAggregationType=string
JSON Syntax:
{
"AdjustmentType": "ChangeInCapacity"|"PercentChangeInCapacity"|"ExactCapacity",
"StepAdjustments": [
{
"MetricIntervalLowerBound": double,
"MetricIntervalUpperBound": double,
"ScalingAdjustment": integer
}
...
],
"MinAdjustmentMagnitude": integer,
"Cooldown": integer,
"MetricAggregationType": "Average"|"Minimum"|"Maximum"
}
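To illustrate the step adjustment rules described above for this parameter, the following is a hedged sketch of a step scaling configuration; the adjustment values and cooldown are placeholders. Assuming an alarm breach threshold of 50, the first step applies while the metric is between 50 and 60 and adds 1 to the capacity, and the second applies at 60 and above and adds 3:
{
    "AdjustmentType": "ChangeInCapacity",
    "StepAdjustments": [
        {
            "MetricIntervalLowerBound": 0,
            "MetricIntervalUpperBound": 10,
            "ScalingAdjustment": 1
        },
        {
            "MetricIntervalLowerBound": 10,
            "ScalingAdjustment": 3
        }
    ],
    "Cooldown": 60,
    "MetricAggregationType": "Average"
}
A configuration like this could be supplied with --policy-type StepScaling and --step-scaling-policy-configuration file://step-config.json (the file name is illustrative).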
--target-tracking-scaling-policy-configuration
(structure)
A target tracking scaling policy. Includes support for predefined or customized metrics.
This parameter is required if you are creating a policy and the policy type is TargetTrackingScaling.
TargetValue -> (double)
The target value for the metric. Although this property accepts numbers of type Double, it won’t accept values that are either too small or too large. Values must be in the range of -2^360 to 2^360. The value must be a valid number based on the choice of metric. For example, if the metric is CPU utilization, then the target value is a percent value that represents how much of the CPU can be used before scaling out.
PredefinedMetricSpecification -> (structure)
A predefined metric. You can specify either a predefined metric or a customized metric.
PredefinedMetricType -> (string)
The metric type. The ALBRequestCountPerTarget metric type applies only to Spot Fleet requests and ECS services.
ResourceLabel -> (string)
Identifies the resource associated with the metric type. You can’t specify a resource label unless the metric type is ALBRequestCountPerTarget and there is a target group attached to the Spot Fleet request or ECS service.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format is app/<load-balancer-name>/<load-balancer-id>/targetgroup/<target-group-name>/<target-group-id>, where:
app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN
targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
This is an example: app/EC2Co-EcsEl-1TKLTMITMM0EO/f37c06a68c1748aa/targetgroup/EC2Co-Defau-LDNM7Q3ZH1ZN/6d4ea56ca2d6a18d.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
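For example, the ARNs can be looked up with the Elastic Load Balancing CLI commands below (a sketch only; the load balancer and target group names are placeholders):
aws elbv2 describe-load-balancers --names my-load-balancer \
    --query 'LoadBalancers[0].LoadBalancerArn' --output text
aws elbv2 describe-target-groups --names my-targets \
    --query 'TargetGroups[0].TargetGroupArn' --output text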
CustomizedMetricSpecification -> (structure)
A customized metric. You can specify either a predefined metric or a customized metric.
MetricName -> (string)
The name of the metric.
Namespace -> (string)
The namespace of the metric.
Dimensions -> (list)
The dimensions of the metric.
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(structure)
Describes the dimension names and values associated with a metric.
Name -> (string)
The name of the dimension.
Value -> (string)
The value of the dimension.
Statistic -> (string)
The statistic of the metric.
Unit -> (string)
The unit of the metric.
ScaleOutCooldown -> (integer)
The amount of time, in seconds, to wait for a previous scale-out activity to take effect.
With the scale-out cooldown period, the intention is to continuously (but not excessively) scale out. After Application Auto Scaling successfully scales out using a target tracking scaling policy, it starts to calculate the cooldown time. The scaling policy won’t increase the desired capacity again unless either a larger scale out is triggered or the cooldown period ends. While the cooldown period is in effect, the capacity added by the initiating scale-out activity is calculated as part of the desired capacity for the next scale-out activity.
Application Auto Scaling provides a default value of 300 for the following scalable targets:
ECS services
Spot Fleet requests
EMR clusters
AppStream 2.0 fleets
Aurora DB clusters
Amazon SageMaker endpoint variants
Custom resources
For all other scalable targets, the default value is 0:
DynamoDB tables
DynamoDB global secondary indexes
Amazon Comprehend document classification and entity recognizer endpoints
Lambda provisioned concurrency
Amazon Keyspaces tables
Amazon MSK broker storage
ScaleInCooldown -> (integer)
The amount of time, in seconds, after a scale-in activity completes before another scale-in activity can start.
With the scale-in cooldown period, the intention is to scale in conservatively to protect your application’s availability, so scale-in activities are blocked until the cooldown period has expired. However, if another alarm triggers a scale-out activity during the scale-in cooldown period, Application Auto Scaling scales out the target immediately. In this case, the scale-in cooldown period stops and doesn’t complete.
Application Auto Scaling provides a default value of 300 for the following scalable targets:
ECS services
Spot Fleet requests
EMR clusters
AppStream 2.0 fleets
Aurora DB clusters
Amazon SageMaker endpoint variants
Custom resources
For all other scalable targets, the default value is 0:
DynamoDB tables
DynamoDB global secondary indexes
Amazon Comprehend document classification and entity recognizer endpoints
Lambda provisioned concurrency
Amazon Keyspaces tables
Amazon MSK broker storage
DisableScaleIn -> (boolean)
Indicates whether scale in by the target tracking scaling policy is disabled. If the value is true, scale in is disabled and the target tracking scaling policy won’t remove capacity from the scalable target. Otherwise, scale in is enabled and the target tracking scaling policy can remove capacity from the scalable target. The default value is false.
JSON Syntax:
{
"TargetValue": double,
"PredefinedMetricSpecification": {
"PredefinedMetricType": "DynamoDBReadCapacityUtilization"|"DynamoDBWriteCapacityUtilization"|"ALBRequestCountPerTarget"|"RDSReaderAverageCPUUtilization"|"RDSReaderAverageDatabaseConnections"|"EC2SpotFleetRequestAverageCPUUtilization"|"EC2SpotFleetRequestAverageNetworkIn"|"EC2SpotFleetRequestAverageNetworkOut"|"SageMakerVariantInvocationsPerInstance"|"ECSServiceAverageCPUUtilization"|"ECSServiceAverageMemoryUtilization"|"AppStreamAverageCapacityUtilization"|"ComprehendInferenceUtilization"|"LambdaProvisionedConcurrencyUtilization"|"CassandraReadCapacityUtilization"|"CassandraWriteCapacityUtilization"|"KafkaBrokerStorageUtilization",
"ResourceLabel": "string"
},
"CustomizedMetricSpecification": {
"MetricName": "string",
"Namespace": "string",
"Dimensions": [
{
"Name": "string",
"Value": "string"
}
...
],
"Statistic": "Average"|"Minimum"|"Maximum"|"SampleCount"|"Sum",
"Unit": "string"
},
"ScaleOutCooldown": integer,
"ScaleInCooldown": integer,
"DisableScaleIn": true|false
}
--cli-input-json | --cli-input-yaml
(string)
Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.
--generate-cli-skeleton
(string)
Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
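For example, a skeleton can be generated, filled in, and passed back to the command (a sketch only; the file name is a placeholder):
aws application-autoscaling put-scaling-policy --generate-cli-skeleton > skeleton.json
# After editing skeleton.json with your values:
aws application-autoscaling put-scaling-policy --cli-input-json file://skeleton.json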
See ‘aws help’ for descriptions of global parameters.
Example 1: To apply a target tracking scaling policy with a predefined metric specification
The following put-scaling-policy example applies a target tracking scaling policy with a predefined metric specification to an Amazon ECS service called web-app in the default cluster. The policy keeps the average CPU utilization of the service at 75 percent, with scale-out and scale-in cooldown periods of 60 seconds. The output contains the ARNs and names of the two CloudWatch alarms created on your behalf.
aws application-autoscaling put-scaling-policy --service-namespace ecs \
--scalable-dimension ecs:service:DesiredCount \
--resource-id service/default/web-app \
--policy-name cpu75-target-tracking-scaling-policy --policy-type TargetTrackingScaling \
--target-tracking-scaling-policy-configuration file://config.json
This example assumes that you have a config.json file in the current directory with the following contents:
{
"TargetValue": 75.0,
"PredefinedMetricSpecification": {
"PredefinedMetricType": "ECSServiceAverageCPUUtilization"
},
"ScaleOutCooldown": 60,
"ScaleInCooldown": 60
}
Output:
{
"PolicyARN": "arn:aws:autoscaling:us-west-2:012345678910:scalingPolicy:6d8972f3-efc8-437c-92d1-6270f29a66e7:resource/ecs/service/default/web-app:policyName/cpu75-target-tracking-scaling-policy",
"Alarms": [
{
"AlarmARN": "arn:aws:cloudwatch:us-west-2:012345678910:alarm:TargetTracking-service/default/web-app-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca",
"AlarmName": "TargetTracking-service/default/web-app-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca"
},
{
"AlarmARN": "arn:aws:cloudwatch:us-west-2:012345678910:alarm:TargetTracking-service/default/web-app-AlarmLow-1b437334-d19b-4a63-a812-6c67aaf2910d",
"AlarmName": "TargetTracking-service/default/web-app-AlarmLow-1b437334-d19b-4a63-a812-6c67aaf2910d"
}
]
}
Example 2: To apply a target tracking scaling policy with a customized metric specification
The following put-scaling-policy example applies a target tracking scaling policy with a customized metric specification to an Amazon ECS service called web-app in the default cluster. The policy keeps the average utilization of the service at 75 percent, with scale-out and scale-in cooldown periods of 60 seconds. The output contains the ARNs and names of the two CloudWatch alarms created on your behalf.
aws application-autoscaling put-scaling-policy --service-namespace ecs \
--scalable-dimension ecs:service:DesiredCount \
--resource-id service/default/web-app \
--policy-name cms75-target-tracking-scaling-policy \
--policy-type TargetTrackingScaling \
--target-tracking-scaling-policy-configuration file://config.json
This example assumes that you have a config.json file in the current directory with the following contents:
{
"TargetValue":75.0,
"CustomizedMetricSpecification":{
"MetricName":"MyUtilizationMetric",
"Namespace":"MyNamespace",
"Dimensions": [
{
"Name":"MyOptionalMetricDimensionName",
"Value":"MyOptionalMetricDimensionValue"
}
],
"Statistic":"Average",
"Unit":"Percent"
},
"ScaleOutCooldown": 60,
"ScaleInCooldown": 60
}
Output:
{
"PolicyARN": "arn:aws:autoscaling:us-west-2:012345678910:scalingPolicy: 8784a896-b2ba-47a1-b08c-27301cc499a1:resource/ecs/service/default/web-app:policyName/cms75-target-tracking-scaling-policy",
"Alarms": [
{
"AlarmARN": "arn:aws:cloudwatch:us-west-2:012345678910:alarm:TargetTracking-service/default/web-app-AlarmHigh-9bc77b56-0571-4276-ba0f-d4178882e0a0",
"AlarmName": "TargetTracking-service/default/web-app-AlarmHigh-9bc77b56-0571-4276-ba0f-d4178882e0a0"
},
{
"AlarmARN": "arn:aws:cloudwatch:us-west-2:012345678910:alarm:TargetTracking-service/default/web-app-AlarmLow-9b6ad934-6d37-438e-9e05-02836ddcbdc4",
"AlarmName": "TargetTracking-service/default/web-app-AlarmLow-9b6ad934-6d37-438e-9e05-02836ddcbdc4"
}
]
}
Example 3: To apply a target tracking scaling policy for scale out only
The following put-scaling-policy example applies a target tracking scaling policy to an Amazon ECS service called web-app in the default cluster. The policy is used to scale out the ECS service when the RequestCountPerTarget metric from the Application Load Balancer exceeds the threshold. The output contains the ARN and name of the CloudWatch alarm created on your behalf.
aws application-autoscaling put-scaling-policy \
--service-namespace ecs \
--scalable-dimension ecs:service:DesiredCount \
--resource-id service/default/web-app \
--policy-name alb-scale-out-target-tracking-scaling-policy \
--policy-type TargetTrackingScaling \
--target-tracking-scaling-policy-configuration file://config.json
Contents of config.json:
{
"TargetValue": 1000.0,
"PredefinedMetricSpecification": {
"PredefinedMetricType": "ALBRequestCountPerTarget",
"ResourceLabel": "app/EC2Co-EcsEl-1TKLTMITMM0EO/f37c06a68c1748aa/targetgroup/EC2Co-Defau-LDNM7Q3ZH1ZN/6d4ea56ca2d6a18d"
},
"ScaleOutCooldown": 60,
"ScaleInCooldown": 60,
"DisableScaleIn": true
}
Output:
{
"PolicyARN": "arn:aws:autoscaling:us-west-2:123456789012:scalingPolicy:6d8972f3-efc8-437c-92d1-6270f29a66e7:resource/ecs/service/default/web-app:policyName/alb-scale-out-target-tracking-scaling-policy",
"Alarms": [
{
"AlarmName": "TargetTracking-service/default/web-app-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca",
"AlarmARN": "arn:aws:cloudwatch:us-west-2:123456789012:alarm:TargetTracking-service/default/web-app-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca"
}
]
}
For more information, see Target Tracking Scaling Policies for Application Auto Scaling in the AWS Application Auto Scaling User Guide.
PolicyARN -> (string)
The Amazon Resource Name (ARN) of the resulting scaling policy.
Alarms -> (list)
The CloudWatch alarms created for the target tracking scaling policy.
(structure)
Represents a CloudWatch alarm associated with a scaling policy.
AlarmName -> (string)
The name of the alarm.
AlarmARN -> (string)
The Amazon Resource Name (ARN) of the alarm.