
create-realtime-endpoint

Description

Creates a real-time endpoint for the MLModel. The endpoint contains the URI of the MLModel; that is, the location to send real-time prediction requests for the specified MLModel.
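
For example, a single call with the model ID is enough to request endpoint creation (the ID below is illustrative; substitute the ID of an existing MLModel):

   aws machinelearning create-realtime-endpoint \
       --ml-model-id ml-exampleModelId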

See also: AWS API Documentation

See ‘aws help’ for descriptions of global parameters.

Synopsis

  create-realtime-endpoint
  --ml-model-id <value>
  [--cli-input-json | --cli-input-yaml]
  [--generate-cli-skeleton <value>]

Options

--ml-model-id (string)

The ID assigned to the MLModel during creation.

--cli-input-json | --cli-input-yaml (string) Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.
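
As a sketch, the same request can be supplied as a JSON document instead of individual options (the model ID is again illustrative):

   aws machinelearning create-realtime-endpoint \
       --cli-input-json '{"MLModelId": "ml-exampleModelId"}'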

--generate-cli-skeleton (string) Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
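
A typical workflow is to generate the skeleton, fill it in, and pass it back with --cli-input-json; for this command the skeleton is expected to contain only the MLModelId field:

   aws machinelearning create-realtime-endpoint --generate-cli-skeleton input > input.json
   # edit input.json to set MLModelId, then:
   aws machinelearning create-realtime-endpoint --cli-input-json file://input.json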

See ‘aws help’ for descriptions of global parameters.

Output

MLModelId -> (string)

A user-supplied ID that uniquely identifies the MLModel. This value should be identical to the value of the MLModelId in the request.

RealtimeEndpointInfo -> (structure)

The endpoint information of the MLModel.

PeakRequestsPerSecond -> (integer)

The maximum processing rate for the real-time endpoint for MLModel, measured in incoming requests per second.

CreatedAt -> (timestamp)

The time that the request to create the real-time endpoint for the MLModel was received. The time is expressed in epoch time.

EndpointUrl -> (string)

The URI that specifies where to send real-time prediction requests for the MLModel.

Note

The application must wait until the real-time endpoint is ready before using this URI.

EndpointStatus -> (string)

The current status of the real-time endpoint for the MLModel. This element can have one of the following values:

  • NONE - Endpoint does not exist or was previously deleted.

  • READY - Endpoint is ready to be used for real-time predictions.

  • UPDATING - Updating/creating the endpoint.

  • FAILED - Request to update or create the endpoint did not succeed.
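
On success, the command returns a document resembling the following sketch; all values, including the peak request rate, timestamp, and endpoint URL, are illustrative:

   {
       "MLModelId": "ml-exampleModelId",
       "RealtimeEndpointInfo": {
           "PeakRequestsPerSecond": 200,
           "CreatedAt": 1476305678.0,
           "EndpointUrl": "https://realtime.machinelearning.us-east-1.amazonaws.com",
           "EndpointStatus": "UPDATING"
       }
   }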