Returns information about a model compilation job.
To create a model compilation job, use CreateCompilationJob. To get information about multiple model compilation jobs, use ListCompilationJobs.
See also: AWS API Documentation
See ‘aws help’ for descriptions of global parameters.
describe-compilation-job
--compilation-job-name <value>
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
[--cli-auto-prompt <value>]
--compilation-job-name
(string)
The name of the model compilation job that you want information about.
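A minimal sketch of the call this option drives. The job name `my-compilation-job` is hypothetical; the actual `aws` invocation is shown as a comment because it requires configured credentials, while the executable part just checks the one required argument:

```shell
# Hypothetical job name; substitute a compilation job from your account.
JOB_NAME="my-compilation-job"

# The real call (requires the AWS CLI and configured credentials):
#   aws sagemaker describe-compilation-job --compilation-job-name "$JOB_NAME"

# The option value is a plain string; confirm it is non-empty before calling.
test -n "$JOB_NAME" && echo "describing job: $JOB_NAME"
```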
--cli-input-json
| --cli-input-yaml
(string)
Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.
--generate-cli-skeleton
(string)
Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
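A sketch of the skeleton round trip. The skeleton file below is hand-written to match what this command is expected to print (a single required field); the `aws` calls themselves are shown as comments since they need configured credentials:

```shell
# Generate the skeleton (commented; requires the AWS CLI and credentials):
#   aws sagemaker describe-compilation-job --generate-cli-skeleton input > job.json

# Hand-written stand-in for the generated skeleton, assumed to contain
# the one required field for this command:
cat > job.json <<'EOF'
{
    "CompilationJobName": ""
}
EOF

# Fill in the name, then feed the file back:
#   aws sagemaker describe-compilation-job --cli-input-json file://job.json
python3 -m json.tool job.json
```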
--cli-auto-prompt
(boolean)
Automatically prompt for CLI input parameters.
See ‘aws help’ for descriptions of global parameters.
CompilationJobName -> (string)
The name of the model compilation job.
CompilationJobArn -> (string)
The Amazon Resource Name (ARN) of the model compilation job.
CompilationJobStatus -> (string)
The status of the model compilation job.
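The status field is the usual target when polling a job. A hedged sketch: the response below is a hand-written stand-in for real output, and the JMESPath query in the comment shows how you would pull the field directly from the CLI:

```shell
# Real query (commented; needs credentials):
#   aws sagemaker describe-compilation-job \
#       --compilation-job-name my-compilation-job \
#       --query CompilationJobStatus --output text

# Hand-written stand-in for a real response, used to show the extraction:
cat > response.json <<'EOF'
{
    "CompilationJobName": "my-compilation-job",
    "CompilationJobStatus": "COMPLETED"
}
EOF
python3 -c "import json; print(json.load(open('response.json'))['CompilationJobStatus'])"
```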
CompilationStartTime -> (timestamp)
The time when the model compilation job started the CompilationJob instances. You are billed for the time between this timestamp and the timestamp in the DescribeCompilationJobResponse$CompilationEndTime field. In Amazon CloudWatch Logs, the start time might be later than this time. That's because it takes time to download the compilation job, which depends on the size of the compilation job container.
CompilationEndTime -> (timestamp)
The time when the model compilation job on a compilation job instance ended. For a successful or stopped job, this is when the job’s model artifacts have finished uploading. For a failed job, this is when Amazon SageMaker detected that the job failed.
StoppingCondition -> (structure)
Specifies a limit to how long a model compilation job can run. When the job reaches the time limit, Amazon SageMaker ends the compilation job. Use this API to cap model training costs.
MaxRuntimeInSeconds -> (integer)
The maximum length of time, in seconds, that the training or compilation job can run. If the job does not complete during this time, Amazon SageMaker ends the job. If this value is not specified, it defaults to 1 day. The maximum value is 28 days.
MaxWaitTimeInSeconds -> (integer)
The maximum length of time, in seconds, that you are willing to wait for a managed spot training job to complete. It is the amount of time spent waiting for Spot capacity plus the amount of time the training job runs. It must be equal to or greater than MaxRuntimeInSeconds.
CreationTime -> (timestamp)
The time that the model compilation job was created.
LastModifiedTime -> (timestamp)
The time that the status of the model compilation job was last modified.
FailureReason -> (string)
If a model compilation job failed, the reason it failed.
ModelArtifacts -> (structure)
Information about the location in Amazon S3 that has been configured for storing the model artifacts used in the compilation job.
S3ModelArtifacts -> (string)
The path of the S3 object that contains the model artifacts. For example, s3://bucket-name/keynameprefix/model.tar.gz.
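Splitting such a path into its bucket and key is a common follow-up step, e.g. before downloading the artifacts. A small sketch using shell parameter expansion, with the example path from the field above:

```shell
# Example S3 path from the S3ModelArtifacts field; split into bucket and key.
S3_PATH="s3://bucket-name/keynameprefix/model.tar.gz"
REST="${S3_PATH#s3://}"        # strip the scheme prefix
BUCKET="${REST%%/*}"           # text before the first slash
KEY="${REST#*/}"               # text after the first slash
echo "$BUCKET"                 # bucket-name
echo "$KEY"                    # keynameprefix/model.tar.gz
```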
RoleArn -> (string)
The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker assumes to perform the model compilation job.
InputConfig -> (structure)
Information about the location in Amazon S3 of the input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.
S3Uri -> (string)
The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
DataInputConfig -> (string)
Specifies the name and shape of the expected data inputs for your trained model with a JSON dictionary form. The data inputs are InputConfig$Framework specific.
TensorFlow
: You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
Examples for one input:
If using the console,
{"input":[1,1024,1024,3]}
If using the CLI,
{\"input\":[1,1024,1024,3]}
Examples for two inputs:
If using the console,
{"data1": [1,28,28,1], "data2":[1,28,28,1]}
If using the CLI,
{\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}
KERAS
: You must specify the name and shape (NCHW format) of expected data inputs using a dictionary format for your trained model. Note that while Keras model artifacts should be uploaded in NHWC (channel-last) format, DataInputConfig should be specified in NCHW (channel-first) format. The dictionary formats required for the console and CLI are different.
Examples for one input:
If using the console,
{"input_1":[1,3,224,224]}
If using the CLI,
{\"input_1\":[1,3,224,224]}
Examples for two inputs:
If using the console,
{"input_1": [1,3,224,224], "input_2":[1,3,224,224]}
If using the CLI,
{\"input_1\": [1,3,224,224], \"input_2\":[1,3,224,224]}
MXNET/ONNX
: You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
Examples for one input:
If using the console,
{"data":[1,3,1024,1024]}
If using the CLI,
{\"data\":[1,3,1024,1024]}
Examples for two inputs:
If using the console,
{"var1": [1,1,28,28], "var2":[1,1,28,28]}
If using the CLI,
{\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}
PyTorch
: You can either specify the name and shape (NCHW format) of expected data inputs in order using a dictionary format for your trained model or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.
Examples for one input in dictionary format:
If using the console,
{"input0":[1,3,224,224]}
If using the CLI,
{\"input0\":[1,3,224,224]}
Example for one input in list format:
[[1,3,224,224]]
Examples for two inputs in dictionary format:
If using the console,
{"input0":[1,3,224,224], "input1":[1,3,224,224]}
If using the CLI,
{\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}
Example for two inputs in list format:
[[1,3,224,224], [1,3,224,224]]
XGBOOST
: input data name and shape are not needed.
Framework -> (string)
Identifies the framework in which the model was trained. For example: TENSORFLOW.
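The CLI forms of DataInputConfig above double-escape the quotes because the value is itself a JSON string embedded in a larger JSON argument (to create-compilation-job). A sketch of the round trip, using the TensorFlow example value from above: stripping the backslashes yields a plain JSON dictionary.

```shell
# CLI form of the TensorFlow example above, with escaped quotes.
DATA_INPUT_CONFIG='{\"input\":[1,1024,1024,3]}'

# Once the outer JSON layer consumes the backslashes, a plain JSON
# dictionary remains; simulate that here and pretty-print it.
printf '%s\n' "$DATA_INPUT_CONFIG" | sed 's/\\//g' | python3 -m json.tool
```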
OutputConfig -> (structure)
Information about the output location for the compiled model and the target device that the model runs on.
S3OutputLocation -> (string)
Identifies the S3 path where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.
TargetDevice -> (string)
Identifies the device that you want to run your model on after it has been compiled. For example: ml_c5.