[ aws . sagemaker ]

create-model

Description

Creates a model in SageMaker. In the request, you name the model and describe a primary container. For the primary container, you specify the Docker image that contains inference code, artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model for predictions.

Use this API to create a model if you want to use SageMaker hosting services or run a batch transform job.

To host your model, you create an endpoint configuration with the CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint API. SageMaker then deploys all of the containers that you defined for the model in the hosting environment.

For an example that calls this method when deploying a model to SageMaker hosting services, see Create a Model (Amazon Web Services SDK for Python (Boto 3)).

To run a batch transform using your model, you start a job with the CreateTransformJob API. SageMaker uses your model and your dataset to get inferences, which are then saved to a specified S3 location.

In the request, you also provide an IAM role that SageMaker can assume to access model artifacts and the Docker image for deployment on ML compute hosting instances or for batch transform jobs. You also use this IAM role to manage the permissions that the inference code needs. For example, if the inference code accesses any other Amazon Web Services resources, you grant the necessary permissions through this role.
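
For example, a minimal invocation might look like the following; the model name, account ID, image URI, artifact path, and role name are placeholders:

aws sagemaker create-model \
    --model-name my-model \
    --primary-container Image=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest,ModelDataUrl=s3://my-bucket/model/model.tar.gz \
    --execution-role-arn arn:aws:iam::123456789012:role/MySageMakerExecutionRole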

See also: AWS API Documentation

Synopsis

  create-model
--model-name <value>
[--primary-container <value>]
[--containers <value>]
[--inference-execution-config <value>]
--execution-role-arn <value>
[--tags <value>]
[--vpc-config <value>]
[--enable-network-isolation | --no-enable-network-isolation]
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
[--debug]
[--endpoint-url <value>]
[--no-verify-ssl]
[--no-paginate]
[--output <value>]
[--query <value>]
[--profile <value>]
[--region <value>]
[--version <value>]
[--color <value>]
[--no-sign-request]
[--ca-bundle <value>]
[--cli-read-timeout <value>]
[--cli-connect-timeout <value>]
[--cli-binary-format <value>]
[--no-cli-pager]
[--cli-auto-prompt]
[--no-cli-auto-prompt]

Options

--model-name (string)

The name of the new model.

--primary-container (structure)

The location of the primary Docker image containing inference code, associated artifacts, and a custom environment map that the inference code uses when the model is deployed for predictions.

ContainerHostname -> (string)

This parameter is ignored for models that contain only a PrimaryContainer .

When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline . If you don’t specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostname for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostname parameter of every ContainerDefinition in that pipeline.

Image -> (string)

The path where inference code is stored. This can be either in Amazon Elastic Container Registry (Amazon ECR) or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker .

ImageConfig -> (structure)

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers

RepositoryAccessMode -> (string)

Set this to one of the following values:

  • Platform - The model image is hosted in Amazon ECR.

  • Vpc - The model image is hosted in a private Docker registry in your VPC.

RepositoryAuthConfig -> (structure)

(Optional) Specifies an authentication configuration for the private Docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.

RepositoryCredentialsProviderArn -> (string)

The Amazon Resource Name (ARN) of an Amazon Web Services Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an Amazon Web Services Lambda function, see Create a Lambda function with the console in the Amazon Web Services Lambda Developer Guide .

Mode -> (string)

Whether the container hosts a single model or multiple models.

ModelDataUrl -> (string)

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters .

Note

The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.

If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your IAM user account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User Guide .

Warning

If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl .

Environment -> (map)

The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have a length of up to 1024 characters. We support up to 16 entries in the map.

key -> (string)

value -> (string)

ModelPackageName -> (string)

The name or Amazon Resource Name (ARN) of the model package to use to create the model.

InferenceSpecificationName -> (string)

The inference specification name in the model package version.

MultiModelConfig -> (structure)

Specifies additional configuration for multi-model endpoints.

ModelCacheSetting -> (string)

Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled .

Shorthand Syntax:

ContainerHostname=string,Image=string,ImageConfig={RepositoryAccessMode=string,RepositoryAuthConfig={RepositoryCredentialsProviderArn=string}},Mode=string,ModelDataUrl=string,Environment={KeyName1=string,KeyName2=string},ModelPackageName=string,InferenceSpecificationName=string,MultiModelConfig={ModelCacheSetting=string}

JSON Syntax:

{
  "ContainerHostname": "string",
  "Image": "string",
  "ImageConfig": {
    "RepositoryAccessMode": "Platform"|"Vpc",
    "RepositoryAuthConfig": {
      "RepositoryCredentialsProviderArn": "string"
    }
  },
  "Mode": "SingleModel"|"MultiModel",
  "ModelDataUrl": "string",
  "Environment": {"string": "string"
    ...},
  "ModelPackageName": "string",
  "InferenceSpecificationName": "string",
  "MultiModelConfig": {
    "ModelCacheSetting": "Enabled"|"Disabled"
  }
}
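
As with any structure parameter, you can also supply the value as JSON from a file. For example, a hypothetical primary-container.json passed as --primary-container file://primary-container.json might contain (the image URI, bucket, and environment variable are placeholders):

{
  "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",
  "Mode": "SingleModel",
  "ModelDataUrl": "s3://my-bucket/model/model.tar.gz",
  "Environment": {
    "SAGEMAKER_PROGRAM": "inference.py"
  }
}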

--containers (list)

Specifies the containers in the inference pipeline.

(structure)

Describes the container, as part of model definition.

ContainerHostname -> (string)

This parameter is ignored for models that contain only a PrimaryContainer .

When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline . If you don’t specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostname for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostname parameter of every ContainerDefinition in that pipeline.

Image -> (string)

The path where inference code is stored. This can be either in Amazon Elastic Container Registry (Amazon ECR) or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker .

ImageConfig -> (structure)

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers

RepositoryAccessMode -> (string)

Set this to one of the following values:

  • Platform - The model image is hosted in Amazon ECR.

  • Vpc - The model image is hosted in a private Docker registry in your VPC.

RepositoryAuthConfig -> (structure)

(Optional) Specifies an authentication configuration for the private Docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.

RepositoryCredentialsProviderArn -> (string)

The Amazon Resource Name (ARN) of an Amazon Web Services Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an Amazon Web Services Lambda function, see Create a Lambda function with the console in the Amazon Web Services Lambda Developer Guide .

Mode -> (string)

Whether the container hosts a single model or multiple models.

ModelDataUrl -> (string)

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters .

Note

The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.

If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your IAM user account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User Guide .

Warning

If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl .

Environment -> (map)

The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have a length of up to 1024 characters. We support up to 16 entries in the map.

key -> (string)

value -> (string)

ModelPackageName -> (string)

The name or Amazon Resource Name (ARN) of the model package to use to create the model.

InferenceSpecificationName -> (string)

The inference specification name in the model package version.

MultiModelConfig -> (structure)

Specifies additional configuration for multi-model endpoints.

ModelCacheSetting -> (string)

Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled .

Shorthand Syntax:

ContainerHostname=string,Image=string,ImageConfig={RepositoryAccessMode=string,RepositoryAuthConfig={RepositoryCredentialsProviderArn=string}},Mode=string,ModelDataUrl=string,Environment={KeyName1=string,KeyName2=string},ModelPackageName=string,InferenceSpecificationName=string,MultiModelConfig={ModelCacheSetting=string} ...

JSON Syntax:

[
  {
    "ContainerHostname": "string",
    "Image": "string",
    "ImageConfig": {
      "RepositoryAccessMode": "Platform"|"Vpc",
      "RepositoryAuthConfig": {
        "RepositoryCredentialsProviderArn": "string"
      }
    },
    "Mode": "SingleModel"|"MultiModel",
    "ModelDataUrl": "string",
    "Environment": {"string": "string"
      ...},
    "ModelPackageName": "string",
    "InferenceSpecificationName": "string",
    "MultiModelConfig": {
      "ModelCacheSetting": "Enabled"|"Disabled"
    }
  }
  ...
]
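
For example, a two-container inference pipeline could be passed in shorthand as follows; the hostnames, image URIs, and artifact paths are placeholders:

--containers \
    ContainerHostname=preprocess,Image=123456789012.dkr.ecr.us-east-1.amazonaws.com/preprocess:latest,ModelDataUrl=s3://my-bucket/preprocess/model.tar.gz \
    ContainerHostname=predict,Image=123456789012.dkr.ecr.us-east-1.amazonaws.com/predict:latest,ModelDataUrl=s3://my-bucket/predict/model.tar.gz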

--inference-execution-config (structure)

Specifies details of how containers in a multi-container endpoint are called.

Mode -> (string)

How containers in a multi-container endpoint are run. The following values are valid:

  • Serial - Containers run as a serial pipeline.

  • Direct - Only the individual container that you specify is run.

Shorthand Syntax:

Mode=string

JSON Syntax:

{
  "Mode": "Serial"|"Direct"
}
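
For example, to make each container of a multi-container endpoint individually invocable rather than chained into a pipeline:

--inference-execution-config Mode=Direct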

--execution-role-arn (string)

The Amazon Resource Name (ARN) of the IAM role that SageMaker can assume to access model artifacts and the Docker image for deployment on ML compute instances or for batch transform jobs. Deploying on ML compute instances is part of model hosting. For more information, see SageMaker Roles .

Note

To be able to pass this role to SageMaker, the caller of this API must have the iam:PassRole permission.
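
As a sketch, the execution role must trust the SageMaker service principal so that SageMaker can assume it. A minimal trust policy for the role looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "sagemaker.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}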

--tags (list)

An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources .

(structure)

A tag object that consists of a key and an optional value, used to manage metadata for SageMaker Amazon Web Services resources.

You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. For more information on adding tags to SageMaker resources, see AddTags .

For more information on adding metadata to your Amazon Web Services resources with tagging, see Tagging Amazon Web Services resources . For advice on best practices for managing Amazon Web Services resources with tagging, see Tagging Best Practices: Implement an Effective Amazon Web Services Resource Tagging Strategy .

Key -> (string)

The tag key. Tag keys must be unique per resource.

Value -> (string)

The tag value.

Shorthand Syntax:

Key=string,Value=string ...

JSON Syntax:

[
  {
    "Key": "string",
    "Value": "string"
  }
  ...
]
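
For example, to tag the model with hypothetical project and environment keys:

--tags Key=project,Value=churn-prediction Key=environment,Value=staging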

--vpc-config (structure)

A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud .

SecurityGroupIds -> (list)

The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

(string)

Subnets -> (list)

The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones .

(string)

Shorthand Syntax:

SecurityGroupIds=string,string,Subnets=string,string

JSON Syntax:

{
  "SecurityGroupIds": ["string", ...],
  "Subnets": ["string", ...]
}
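
For example, with placeholder security group and subnet IDs:

--vpc-config SecurityGroupIds=sg-0123456789abcdef0,Subnets=subnet-0123456789abcdef0,subnet-0fedcba9876543210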

--enable-network-isolation | --no-enable-network-isolation (boolean)

Isolates the model container. No inbound or outbound network calls can be made to or from the model container.

--cli-input-json | --cli-input-yaml (string)

Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.

--generate-cli-skeleton (string)

Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command. The generated JSON skeleton is not stable between versions of the AWS CLI and there are no backwards compatibility guarantees in the JSON skeleton generated.
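
A common round trip, with an arbitrary file name, is to generate an input skeleton, fill it in, and pass it back:

aws sagemaker create-model --generate-cli-skeleton input > create-model.json
# Edit create-model.json to fill in the model name, containers, and role, then:
aws sagemaker create-model --cli-input-json file://create-model.json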

Global Options

--debug (boolean)

Turn on debug logging.

--endpoint-url (string)

Override command’s default URL with the given URL.

--no-verify-ssl (boolean)

By default, the AWS CLI uses SSL when communicating with AWS services. For each SSL connection, the AWS CLI will verify SSL certificates. This option overrides the default behavior of verifying SSL certificates.

--no-paginate (boolean)

Disable automatic pagination.

--output (string)

The formatting style for command output.

  • json

  • text

  • table

  • yaml

  • yaml-stream

--query (string)

A JMESPath query to use in filtering the response data.

--profile (string)

Use a specific profile from your credential file.

--region (string)

The region to use. Overrides config/env settings.

--version (string)

Display the version of this tool.

--color (string)

Turn on/off color output.

  • on

  • off

  • auto

--no-sign-request (boolean)

Do not sign requests. Credentials will not be loaded if this argument is provided.

--ca-bundle (string)

The CA certificate bundle to use when verifying SSL certificates. Overrides config/env settings.

--cli-read-timeout (int)

The maximum socket read time in seconds. If the value is set to 0, the socket read will be blocking and not timeout. The default value is 60 seconds.

--cli-connect-timeout (int)

The maximum socket connect time in seconds. If the value is set to 0, the socket connect will be blocking and not timeout. The default value is 60 seconds.

--cli-binary-format (string)

The formatting style to be used for binary blobs. The default format is base64. The base64 format expects binary blobs to be provided as a base64 encoded string. The raw-in-base64-out format preserves compatibility with AWS CLI V1 behavior and binary values must be passed literally. When providing contents from a file that map to a binary blob, fileb:// will always be treated as binary and use the file contents directly regardless of the cli-binary-format setting. When using file:// the file contents will need to be properly formatted for the configured cli-binary-format.

  • base64

  • raw-in-base64-out

--no-cli-pager (boolean)

Disable cli pager for output.

--cli-auto-prompt (boolean)

Automatically prompt for CLI input parameters.

--no-cli-auto-prompt (boolean)

Disable the automatic prompt for CLI input parameters.

Output

ModelArn -> (string)

The ARN of the model created in SageMaker.
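
For example, a successful call returns the new model's ARN; the Region, account ID, and model name below are placeholders:

{
    "ModelArn": "arn:aws:sagemaker:us-east-1:123456789012:model/my-model"
}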