Gets the text detection results of an Amazon Rekognition Video analysis started by StartTextDetection.
Text detection with Amazon Rekognition Video is an asynchronous operation. You start text detection by calling StartTextDetection, which returns a job identifier (JobId). When the text detection operation finishes, Amazon Rekognition publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartTextDetection. To get the results of the text detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetTextDetection and pass the job identifier (JobId) from the initial call to StartTextDetection.
GetTextDetection returns an array of detected text (TextDetections) sorted by the time the text was detected, up to 50 words per frame of video.
Each element of the array includes the detected text, the percentage confidence in the accuracy of the detected text, the time the text was detected, bounding box information for where the text was located, and unique identifiers for words and their lines.
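The fields above arrive as nested structures in the response. A minimal sketch of reading them, using an illustrative hand-written sample (not real API output) in the boto3-style dict shape:

```python
# Illustrative sample only; field names follow the response structure
# documented below, but the values are made up.
sample_response = {
    "JobStatus": "SUCCEEDED",
    "TextDetections": [
        {"Timestamp": 0,
         "TextDetection": {"DetectedText": "HELLO WORLD", "Type": "LINE",
                           "Id": 0, "Confidence": 99.1}},
        {"Timestamp": 0,
         "TextDetection": {"DetectedText": "HELLO", "Type": "WORD",
                           "Id": 1, "ParentId": 0, "Confidence": 99.5}},
    ],
    "TextModelVersion": "3.0",
}

lines = []
if sample_response["JobStatus"] == "SUCCEEDED":
    # Collect (timestamp, text) pairs for detected lines only.
    lines = [(d["Timestamp"], d["TextDetection"]["DetectedText"])
             for d in sample_response["TextDetections"]
             if d["TextDetection"]["Type"] == "LINE"]
```

Here `lines` ends up holding one entry, the LINE detection seen at 0 ms.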
Use the MaxResults parameter to limit the number of text detections returned. If there are more results than specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetTextDetection and populate the NextToken request parameter with the token value returned from the previous call to GetTextDetection.
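The pagination loop described above can be sketched as follows. This is a sketch, not a definitive implementation: `client` stands in for a boto3-style Rekognition client, and `_FakeRekognition` below is a hypothetical two-page stub used so the example is self-contained.

```python
def get_all_text_detections(client, job_id, page_size=100):
    """Collect every TextDetection across paginated GetTextDetection calls.

    `client` is any object exposing a boto3-style get_text_detection method
    whose responses use the TextDetections / NextToken shape documented here.
    """
    detections = []
    kwargs = {"JobId": job_id, "MaxResults": page_size}
    while True:
        response = client.get_text_detection(**kwargs)
        detections.extend(response.get("TextDetections", []))
        token = response.get("NextToken")
        if not token:            # no token means this was the last page
            return detections
        kwargs["NextToken"] = token


class _FakeRekognition:
    """Hypothetical stand-in client returning two pages, for illustration."""
    def get_text_detection(self, JobId, MaxResults, NextToken=None):
        if NextToken is None:
            return {"JobStatus": "SUCCEEDED",
                    "TextDetections": [{"Timestamp": 0}],
                    "NextToken": "page-2"}
        return {"JobStatus": "SUCCEEDED",
                "TextDetections": [{"Timestamp": 500}]}


all_text = get_all_text_detections(_FakeRekognition(), "job-123")
```

With a real boto3 client the same function works unchanged, since the loop only relies on the documented NextToken contract.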
See also: AWS API Documentation
See ‘aws help’ for descriptions of global parameters.
get-text-detection --job-id <value> [--max-results <value>] [--next-token <value>] [--cli-input-json | --cli-input-yaml] [--generate-cli-skeleton <value>]
Job identifier for the text detection operation for which you want results returned. You get the job identifier from an initial call to StartTextDetection.
Maximum number of results to return per paginated call. The largest value you can specify is 1000.
If the previous response was incomplete (because there are more text detections to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of text.
Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.
Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
See ‘aws help’ for descriptions of global parameters.
JobStatus -> (string)
Current status of the text detection job.
StatusMessage -> (string)
If the job fails, StatusMessage provides a descriptive error message.
VideoMetadata -> (structure)
Information about a video that Amazon Rekognition analyzed.
VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition video operation.
Codec -> (string)
Type of compression used in the analyzed video.
DurationMillis -> (long)
Length of the video in milliseconds.
Format -> (string)
Format of the analyzed video. Possible values are MP4, MOV and AVI.
FrameRate -> (float)
Number of frames per second in the video.
FrameHeight -> (long)
Vertical pixel dimension of the video.
FrameWidth -> (long)
Horizontal pixel dimension of the video.
ColorRange -> (string)
A description of the range of luminance values in a video, either LIMITED (16 to 235) or FULL (0 to 255).
TextDetections -> (list)
An array of text detected in the video. Each element contains the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.
Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.
Timestamp -> (long)
The time, in milliseconds from the start of the video, that the text was detected.
TextDetection -> (structure)
Details about text detected in a video.
DetectedText -> (string)
The word or line of text recognized by Amazon Rekognition.
Type -> (string)
The type of text that was detected. Possible values are LINE and WORD.
Id -> (integer)
The identifier for the detected text. The identifier is only unique for a single call to GetTextDetection.
ParentId -> (integer)
The Parent identifier for the detected text identified by the value of
ID. If the type of detected text is
LINE, the value of
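The Id/ParentId relationship lets you reassemble words under the line they belong to. A minimal sketch, using a hypothetical list of TextDetection structures in the shape documented above:

```python
# Illustrative detections: one LINE and its two child WORDs.
detections = [
    {"DetectedText": "STOP AHEAD", "Type": "LINE", "Id": 0, "Confidence": 99.0},
    {"DetectedText": "STOP", "Type": "WORD", "Id": 1, "ParentId": 0,
     "Confidence": 99.4},
    {"DetectedText": "AHEAD", "Type": "WORD", "Id": 2, "ParentId": 0,
     "Confidence": 98.8},
]

# Group each WORD's text under the Id of its parent LINE.
words_by_line = {}
for d in detections:
    if d["Type"] == "WORD":
        words_by_line.setdefault(d["ParentId"], []).append(d["DetectedText"])
```

Each key in `words_by_line` is a LINE's Id; the values are that line's words in detection order.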
Confidence -> (float)
The confidence that Amazon Rekognition has in the accuracy of the detected text and the accuracy of the geometry points around the detected text.
Geometry -> (structure)
The location of the detected text on the image. Includes an axis-aligned coarse bounding box surrounding the text and a finer-grain polygon for more accurate spatial information.
BoundingBox -> (structure)
An axis-aligned coarse representation of the detected item’s location on the image.
Width -> (float)
Width of the bounding box as a ratio of the overall image width.
Height -> (float)
Height of the bounding box as a ratio of the overall image height.
Left -> (float)
Left coordinate of the bounding box as a ratio of overall image width.
Top -> (float)
Top coordinate of the bounding box as a ratio of overall image height.
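Because all four BoundingBox fields are ratios of the frame dimensions, converting them to pixels is a simple multiplication. A sketch, with the helper name `bbox_to_pixels` chosen for illustration:

```python
def bbox_to_pixels(bbox, frame_width, frame_height):
    """Convert a ratio-based BoundingBox dict to pixel coordinates.

    `bbox` holds Left/Top/Width/Height as ratios of the frame size,
    as documented above. Returns (left, top, width, height) in pixels.
    """
    left = round(bbox["Left"] * frame_width)
    top = round(bbox["Top"] * frame_height)
    width = round(bbox["Width"] * frame_width)
    height = round(bbox["Height"] * frame_height)
    return left, top, width, height
```

For example, on a 700x200 frame, a box with Left=0.5, Top=0.25, Width=0.1, Height=0.05 maps to a 70x10-pixel box at (350, 50).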
Polygon -> (list)
Within the bounding box, a fine-grained polygon around the detected item.
The X and Y coordinates of a point on an image or video frame. The X and Y values are ratios of the overall image size or video resolution. For example, if an input image is 700x200 and the values are X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.
An array of Point objects makes up a Polygon. A Polygon is returned by DetectText and by DetectCustomLabels. Polygon represents a fine-grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide.
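The same ratio-to-pixel arithmetic applies to each Point in a Polygon. A minimal sketch (the helper name is illustrative), using the 700x200 example from the text:

```python
def polygon_to_pixels(polygon, frame_width, frame_height):
    """Convert a list of ratio-based Point dicts to pixel (x, y) tuples."""
    return [(round(p["X"] * frame_width), round(p["Y"] * frame_height))
            for p in polygon]

# On a 700x200 frame, X=0.5 and Y=0.25 give the (350, 50) pixel coordinate.
pts = polygon_to_pixels([{"X": 0.5, "Y": 0.25}], 700, 200)
```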
X -> (float)
The value of the X coordinate for a point on a Polygon.
Y -> (float)
The value of the Y coordinate for a point on a Polygon.
NextToken -> (string)
If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of text.
TextModelVersion -> (string)
Version number of the text detection model that was used to detect text.