[ aws . comprehendmedical ]

describe-snomedct-inference-job

Description

Gets the properties associated with an InferSNOMEDCT job. Use this operation to get the status of an inference job.
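
For example, a minimal status check might look like the following; the job identifier shown is only a placeholder:

  # Placeholder job ID; use the identifier returned by the start operation.
  aws comprehendmedical describe-snomedct-inference-job \
      --job-id 1234567890abcdef1234567890abcdef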

See also: AWS API Documentation

See 'aws help' for descriptions of global parameters.

Synopsis

  describe-snomedct-inference-job
  --job-id <value>
  [--cli-input-json | --cli-input-yaml]
  [--generate-cli-skeleton <value>]

Options

--job-id (string)

The identifier that Amazon Comprehend Medical generated for the job. The StartSNOMEDCTInferenceJob operation returns this identifier in its response.
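
For example, the identifier can be captured straight from the start operation and passed to this command. The bucket names, key prefixes, and role ARN below are placeholders, and the start-snomedct-inference-job parameters shown are assumed to mirror the input fields documented under Output:

  # Assumed start parameters; bucket names, prefixes, and role ARN are placeholders.
  JOB_ID=$(aws comprehendmedical start-snomedct-inference-job \
      --input-data-config "S3Bucket=amzn-s3-demo-input-bucket,S3Key=clinical-notes/" \
      --output-data-config "S3Bucket=amzn-s3-demo-output-bucket,S3Key=snomedct-results/" \
      --data-access-role-arn arn:aws:iam::111122223333:role/ComprehendMedicalBatchRole \
      --language-code en \
      --query JobId --output text)

  aws comprehendmedical describe-snomedct-inference-job --job-id "$JOB_ID"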

--cli-input-json | --cli-input-yaml (string) Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.

--generate-cli-skeleton (string) Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
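
As a sketch of that workflow, a skeleton can be written to a file, edited, and fed back to the command; the file name describe-job.json is arbitrary:

  aws comprehendmedical describe-snomedct-inference-job \
      --generate-cli-skeleton input > describe-job.json

  # describe-job.json now holds a template such as {"JobId": ""}; fill in
  # the job identifier, then pass the file back with --cli-input-json:
  aws comprehendmedical describe-snomedct-inference-job \
      --cli-input-json file://describe-job.json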

See 'aws help' for descriptions of global parameters.

Output

ComprehendMedicalAsyncJobProperties -> (structure)

   Provides information about a detection job.

   JobId -> (string)

      The identifier assigned to the detection job.

   JobName -> (string)

      The name that you assigned to the detection job.

   JobStatus -> (string)

      The current status of the detection job. If the status is FAILED, the Message field shows the reason for the failure.

   Message -> (string)

      A description of the status of a job.

   SubmitTime -> (timestamp)

      The time that the detection job was submitted for processing.

   EndTime -> (timestamp)

      The time that the detection job completed.

   ExpirationTime -> (timestamp)

      The date and time that job metadata is deleted from the server. Output files in your S3 bucket will not be deleted. After the metadata is deleted, the job will no longer appear in the results of the ListEntitiesDetectionV2Job or the ListPHIDetectionJobs operation.

   InputDataConfig -> (structure)

      The input data configuration that you supplied when you created the detection job.

      S3Bucket -> (string)

         The URI of the S3 bucket that contains the input data. The bucket must be in the same region as the API endpoint that you are calling.

         Each file in the document collection must be less than 40 KB. You can store a maximum of 30 GB in the bucket.

      S3Key -> (string)

         The path to the input data files in the S3 bucket.

   OutputDataConfig -> (structure)

      The output data configuration that you supplied when you created the detection job.

      S3Bucket -> (string)

         When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output.

      S3Key -> (string)

         The path to the output data files in the S3 bucket. Amazon Comprehend Medical creates an output directory using the job ID so that the output from one job does not overwrite the output of another.

   LanguageCode -> (string)

      The language code of the input documents.

   DataAccessRoleArn -> (string)

      The Amazon Resource Name (ARN) that gives Amazon Comprehend Medical read access to your input data.

   ManifestFilePath -> (string)

      The path to the file that describes the results of a batch job.

   KMSKey -> (string)

      The AWS Key Management Service key, if any, used to encrypt the output files.

   ModelVersion -> (string)

      The version of the model used to analyze the documents. The version number looks like X.X.X. You can use this information to track the model used for a particular batch of documents.
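
A successful response from this command has roughly the following shape; every value below is a placeholder rather than output from a real job, and fields such as Message, ManifestFilePath, and KMSKey appear only when applicable:

  {
      "ComprehendMedicalAsyncJobProperties": {
          "JobId": "1234567890abcdef1234567890abcdef",
          "JobName": "snomedct-inference-example",
          "JobStatus": "COMPLETED",
          "SubmitTime": "2023-05-01T12:00:00.000000+00:00",
          "EndTime": "2023-05-01T12:15:00.000000+00:00",
          "ExpirationTime": "2023-08-01T12:00:00.000000+00:00",
          "InputDataConfig": {
              "S3Bucket": "amzn-s3-demo-input-bucket",
              "S3Key": "clinical-notes/"
          },
          "OutputDataConfig": {
              "S3Bucket": "amzn-s3-demo-output-bucket",
              "S3Key": "snomedct-results/"
          },
          "LanguageCode": "en",
          "DataAccessRoleArn": "arn:aws:iam::111122223333:role/ComprehendMedicalBatchRole",
          "ModelVersion": "2.0.0"
      }
  }

To poll only the status while waiting for a job to finish, the global --query and --output parameters can trim the response to a single value:

  aws comprehendmedical describe-snomedct-inference-job \
      --job-id 1234567890abcdef1234567890abcdef \
      --query ComprehendMedicalAsyncJobProperties.JobStatus --output text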