Returns the description of an endpoint configuration created using the CreateEndpointConfig
API.
See also: AWS API Documentation
See ‘aws help’ for descriptions of global parameters.
describe-endpoint-config
--endpoint-config-name <value>
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
--endpoint-config-name
(string)
The name of the endpoint configuration.
--cli-input-json | --cli-input-yaml
(string)
Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.
--generate-cli-skeleton
(string)
Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command. The generated JSON skeleton is not stable between versions of the AWS CLI and there are no backwards compatibility guarantees in the JSON skeleton generated.
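For example, you can generate an input skeleton, fill it in, and pass it back to the command with --cli-input-json. The file name used below is illustrative:

    aws sagemaker describe-endpoint-config \
        --generate-cli-skeleton input > describe-config.json

    # Edit describe-config.json to set EndpointConfigName, then:
    aws sagemaker describe-endpoint-config \
        --cli-input-json file://describe-config.json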
See ‘aws help’ for descriptions of global parameters.
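The following is a minimal illustrative invocation; the endpoint configuration name is hypothetical:

    aws sagemaker describe-endpoint-config \
        --endpoint-config-name my-endpoint-config

A successful call returns a JSON document containing the fields described below. The values in this abbreviated sample are illustrative only:

    {
        "EndpointConfigName": "my-endpoint-config",
        "EndpointConfigArn": "arn:aws:sagemaker:us-west-2:111122223333:endpoint-config/my-endpoint-config",
        "ProductionVariants": [
            {
                "VariantName": "AllTraffic",
                "ModelName": "my-model",
                "InitialInstanceCount": 1,
                "InstanceType": "ml.m5.large",
                "InitialVariantWeight": 1.0
            }
        ],
        "CreationTime": 1639430011.123
    }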
EndpointConfigName -> (string)
Name of the SageMaker endpoint configuration.
EndpointConfigArn -> (string)
The Amazon Resource Name (ARN) of the endpoint configuration.
ProductionVariants -> (list)
An array of ProductionVariant objects, one for each model that you want to host at this endpoint.
(structure)
Identifies a model that you want to host and the resources chosen to deploy for hosting it. If you are deploying multiple models, tell SageMaker how to distribute traffic among the models by specifying variant weights.
VariantName -> (string)
The name of the production variant.
ModelName -> (string)
The name of the model that you want to host. This is the name that you specified when creating the model.
InitialInstanceCount -> (integer)
Number of instances to launch initially.
InstanceType -> (string)
The ML compute instance type.
InitialVariantWeight -> (float)
Determines initial traffic distribution among all of the models that you specify in the endpoint configuration. The traffic to a production variant is determined by the ratio of the VariantWeight to the sum of all VariantWeight values across all ProductionVariants. If unspecified, it defaults to 1.0.
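For example, if an endpoint configuration defines two variants with hypothetical weights of 1.0 and 3.0, the first variant receives 1.0 / (1.0 + 3.0) = 25% of the traffic and the second receives 75%.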
AcceleratorType -> (string)
The size of the Elastic Inference (EI) instance to use for the production variant. EI instances provide on-demand GPU computing for inference. For more information, see Using Elastic Inference in Amazon SageMaker.
CoreDumpConfig -> (structure)
Specifies configuration for a core dump from the model container when the process crashes.
DestinationS3Uri -> (string)
The Amazon S3 bucket to send the core dump to.
KmsKeyId -> (string)
The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that SageMaker uses to encrypt the core dump data at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats:
// KMS Key ID
"1234abcd-12ab-34cd-56ef-1234567890ab"
// Amazon Resource Name (ARN) of a KMS Key
"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
// KMS Key Alias
"alias/ExampleAlias"
// Amazon Resource Name (ARN) of a KMS Key Alias
"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias"
If you use a KMS key ID or an alias of your KMS key, the SageMaker execution role must include permissions to call kms:Encrypt. If you don’t provide a KMS key ID, SageMaker uses the default KMS key for Amazon S3 for your role’s account. SageMaker uses server-side encryption with KMS-managed keys for OutputDataConfig. If you use a bucket policy with an s3:PutObject permission that only allows objects with server-side encryption, set the condition key of s3:x-amz-server-side-encryption to "aws:kms". For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.
The KMS key policy must grant permission to the IAM role that you specify in your CreateEndpoint and UpdateEndpoint requests. For more information, see Using Key Policies in Amazon Web Services KMS in the Amazon Web Services Key Management Service Developer Guide.
ServerlessConfig -> (structure)
The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.
MemorySizeInMB -> (integer)
The memory size of your serverless endpoint. Valid values are in 1 GB increments: 1024 MB, 2048 MB, 3072 MB, 4096 MB, 5120 MB, or 6144 MB.
MaxConcurrency -> (integer)
The maximum number of concurrent invocations your serverless endpoint can process.
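For a serverless endpoint configuration, this structure might look like the following in the output; the values shown are illustrative:

    "ServerlessConfig": {
        "MemorySizeInMB": 2048,
        "MaxConcurrency": 5
    }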
DataCaptureConfig -> (structure)
Configuration to control how SageMaker captures inference data.
EnableCapture -> (boolean)
Whether data capture should be enabled or disabled (defaults to enabled).
InitialSamplingPercentage -> (integer)
The percentage of requests SageMaker will capture. A lower value is recommended for Endpoints with high traffic.
DestinationS3Uri -> (string)
The Amazon S3 location used to capture the data.
KmsKeyId -> (string)
The Amazon Resource Name (ARN) of an Amazon Web Services Key Management Service key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.
The KmsKeyId can be any of the following formats:
Key ID:
1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN:
arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name:
alias/ExampleAlias
Alias name ARN:
arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
CaptureOptions -> (list)
Specifies data Model Monitor will capture. You can configure whether to collect only input, only output, or both.
(structure)
Specifies data Model Monitor will capture.
CaptureMode -> (string)
Specify the boundary of data to capture.
CaptureContentTypeHeader -> (structure)
Configuration specifying how to treat different headers. If no headers are specified, SageMaker will base64 encode the data when capturing it by default.
CsvContentTypes -> (list)
The list of all content type headers that SageMaker will treat as CSV and capture accordingly.
(string)
JsonContentTypes -> (list)
The list of all content type headers that SageMaker will treat as JSON and capture accordingly.
(string)
KmsKeyId -> (string)
Amazon Web Services KMS key ID Amazon SageMaker uses to encrypt data when storing it on the ML storage volume attached to the instance.
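Taken together, a data capture configuration in the output might resemble the following; the bucket name and values are illustrative:

    "DataCaptureConfig": {
        "EnableCapture": true,
        "InitialSamplingPercentage": 20,
        "DestinationS3Uri": "s3://amzn-s3-demo-bucket/capture",
        "CaptureOptions": [
            { "CaptureMode": "Input" },
            { "CaptureMode": "Output" }
        ],
        "CaptureContentTypeHeader": {
            "CsvContentTypes": ["text/csv"],
            "JsonContentTypes": ["application/json"]
        }
    }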
CreationTime -> (timestamp)
A timestamp that shows when the endpoint configuration was created.
AsyncInferenceConfig -> (structure)
Returns the description of an endpoint configuration created using the CreateEndpointConfig API.
ClientConfig -> (structure)
Configures the behavior of the client used by SageMaker to interact with the model container during asynchronous inference.
MaxConcurrentInvocationsPerInstance -> (integer)
The maximum number of concurrent requests sent by the SageMaker client to the model container. If no value is provided, SageMaker chooses an optimal value.
OutputConfig -> (structure)
Specifies the configuration for asynchronous inference invocation outputs.
KmsKeyId -> (string)
The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that SageMaker uses to encrypt the asynchronous inference output in Amazon S3.
S3OutputPath -> (string)
The Amazon S3 location to upload inference responses to.
NotificationConfig -> (structure)
Specifies the configuration for notifications of inference results for asynchronous inference.
SuccessTopic -> (string)
Amazon SNS topic to post a notification to when inference completes successfully. If no topic is provided, no notification is sent on success.
ErrorTopic -> (string)
Amazon SNS topic to post a notification to when inference fails. If no topic is provided, no notification is sent on failure.
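For an endpoint configuration created with asynchronous inference, this structure might appear in the output as follows; the bucket and topic names are illustrative:

    "AsyncInferenceConfig": {
        "ClientConfig": {
            "MaxConcurrentInvocationsPerInstance": 4
        },
        "OutputConfig": {
            "S3OutputPath": "s3://amzn-s3-demo-bucket/async-output/",
            "NotificationConfig": {
                "SuccessTopic": "arn:aws:sns:us-west-2:111122223333:async-success",
                "ErrorTopic": "arn:aws:sns:us-west-2:111122223333:async-error"
            }
        }
    }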