[ aws . rekognition ]
Gets face detection results for an Amazon Rekognition Video analysis started by StartFaceDetection.
Face detection with Amazon Rekognition Video is an asynchronous operation. You start face detection by calling StartFaceDetection, which returns a job identifier (JobId). When the face detection operation finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartFaceDetection. To get the results of the face detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetFaceDetection and pass the job identifier (JobId) from the initial call to StartFaceDetection.
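For example, a minimal end-to-end session might look like the following sketch. The bucket name, object key, SNS topic, and IAM role ARN are placeholders you must replace with your own values:

aws rekognition start-face-detection \
    --video "S3Object={Bucket=amzn-s3-demo-bucket,Name=my-video.mp4}" \
    --notification-channel "SNSTopicArn=arn:aws:sns:us-east-1:111122223333:AmazonRekognitionTopic,RoleArn=arn:aws:iam::111122223333:role/RekognitionServiceRole"

# After the SNS topic receives a SUCCEEDED status:
aws rekognition get-face-detection --job-id <JobId-from-start-face-detection>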
GetFaceDetection returns an array of detected faces (Faces) sorted by the time the faces were detected.
Use the MaxResults parameter to limit the number of faces returned. If there are more results than specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetFaceDetection and populate the NextToken request parameter with the token value returned from the previous call to GetFaceDetection.
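For example, the following pair of calls pages through the results ten faces at a time; the JobId and NextToken values are reused from the example output below purely for illustration:

aws rekognition get-face-detection \
    --job-id 1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef \
    --max-results 10

# If the response includes a NextToken, pass it back to fetch the next page:
aws rekognition get-face-detection \
    --job-id 1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef \
    --max-results 10 \
    --next-token "OzL223pDKy9116O/02KXRqFIEAwxjy4PkgYcm3hSo0rdysbXg5Ex0eFgTGEj0ADEac6S037U"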
See also: AWS API Documentation
See ‘aws help’ for descriptions of global parameters.
get-face-detection
--job-id <value>
[--max-results <value>]
[--next-token <value>]
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
--job-id
(string)
Unique identifier for the face detection job. The JobId is returned from StartFaceDetection.
--max-results
(integer)
Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
--next-token
(string)
If the previous response was incomplete (because there are more faces to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of faces.
--cli-input-json | --cli-input-yaml
(string)
Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value, as the string will be taken literally. This may not be specified along with --cli-input-yaml.
--generate-cli-skeleton
(string)
Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
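For example, a common workflow (the file name input.json is illustrative) is to generate an input skeleton, fill in the JobId, and pass the file back to the command:

aws rekognition get-face-detection --generate-cli-skeleton input > input.json
# Edit input.json to set JobId, then:
aws rekognition get-face-detection --cli-input-json file://input.json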
See ‘aws help’ for descriptions of global parameters.
To get the results of a face detection operation
The following get-face-detection command displays the results of a face detection operation that you started previously by calling start-face-detection.
aws rekognition get-face-detection \
    --job-id 1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
Output:
{
    "Faces": [
        {
            "Timestamp": 467,
            "Face": {
                "BoundingBox": {
                    "Width": 0.1560753583908081,
                    "Top": 0.13555361330509186,
                    "Left": -0.0952017530798912,
                    "Height": 0.6934483051300049
                },
                "Landmarks": [
                    {
                        "Y": 0.4013825058937073,
                        "X": -0.041750285774469376,
                        "Type": "eyeLeft"
                    },
                    {
                        "Y": 0.41695496439933777,
                        "X": 0.027979329228401184,
                        "Type": "eyeRight"
                    },
                    {
                        "Y": 0.6375303268432617,
                        "X": -0.04034662991762161,
                        "Type": "mouthLeft"
                    },
                    {
                        "Y": 0.6497718691825867,
                        "X": 0.013960429467260838,
                        "Type": "mouthRight"
                    },
                    {
                        "Y": 0.5238034129142761,
                        "X": 0.008022055961191654,
                        "Type": "nose"
                    }
                ],
                "Pose": {
                    "Yaw": -58.07863998413086,
                    "Roll": 1.9384294748306274,
                    "Pitch": -24.66305160522461
                },
                "Quality": {
                    "Sharpness": 83.14741516113281,
                    "Brightness": 25.75942611694336
                },
                "Confidence": 87.7622299194336
            }
        },
        {
            "Timestamp": 967,
            "Face": {
                "BoundingBox": {
                    "Width": 0.28559377789497375,
                    "Top": 0.19436298310756683,
                    "Left": 0.024553587660193443,
                    "Height": 0.7216082215309143
                },
                "Landmarks": [
                    {
                        "Y": 0.4650231599807739,
                        "X": 0.16269078850746155,
                        "Type": "eyeLeft"
                    },
                    {
                        "Y": 0.4843238294124603,
                        "X": 0.2782580852508545,
                        "Type": "eyeRight"
                    },
                    {
                        "Y": 0.71530681848526,
                        "X": 0.1741468608379364,
                        "Type": "mouthLeft"
                    },
                    {
                        "Y": 0.7310671210289001,
                        "X": 0.26857468485832214,
                        "Type": "mouthRight"
                    },
                    {
                        "Y": 0.582602322101593,
                        "X": 0.2566150426864624,
                        "Type": "nose"
                    }
                ],
                "Pose": {
                    "Yaw": 11.487052917480469,
                    "Roll": 5.074230670928955,
                    "Pitch": 15.396159172058105
                },
                "Quality": {
                    "Sharpness": 73.32209777832031,
                    "Brightness": 54.96497344970703
                },
                "Confidence": 99.99998474121094
            }
        }
    ],
    "NextToken": "OzL223pDKy9116O/02KXRqFIEAwxjy4PkgYcm3hSo0rdysbXg5Ex0eFgTGEj0ADEac6S037U",
    "JobStatus": "SUCCEEDED",
    "VideoMetadata": {
        "Format": "QuickTime / MOV",
        "FrameRate": 29.970617294311523,
        "Codec": "h264",
        "DurationMillis": 6806,
        "FrameHeight": 1080,
        "FrameWidth": 1920
    }
}
For more information, see Detecting Faces in a Stored Video in the Amazon Rekognition Developer Guide.
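If you need only part of the response, the global --query option accepts a JMESPath expression that filters the output client-side. For example, this variation of the command above (illustrative only) lists just each detection's timestamp and confidence:

aws rekognition get-face-detection \
    --job-id 1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef \
    --query 'Faces[].{Timestamp:Timestamp,Confidence:Face.Confidence}'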
JobStatus -> (string)
The current status of the face detection job.
StatusMessage -> (string)
If the job fails, StatusMessage provides a descriptive error message.
VideoMetadata -> (structure)
Information about a video that Amazon Rekognition Video analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.
Codec -> (string)
Type of compression used in the analyzed video.
DurationMillis -> (long)
Length of the video in milliseconds.
Format -> (string)
Format of the analyzed video. Possible values are MP4, MOV, and AVI.
FrameRate -> (float)
Number of frames per second in the video.
FrameHeight -> (long)
Vertical pixel dimension of the video.
FrameWidth -> (long)
Horizontal pixel dimension of the video.
ColorRange -> (string)
A description of the range of luminance values in a video, either LIMITED (16 to 235) or FULL (0 to 255).
NextToken -> (string)
If the response is truncated, Amazon Rekognition returns this token that you can use in the subsequent request to retrieve the next set of faces.
Faces -> (list)
An array of faces detected in the video. Each element contains a detected face’s details and the time, in milliseconds from the start of the video, the face was detected.
(structure)
Information about a face detected in a video analysis request and the time the face was detected in the video.
Timestamp -> (long)
Time, in milliseconds from the start of the video, that the face was detected.
Face -> (structure)
The face properties for the detected face.
BoundingBox -> (structure)
Bounding box of the face. Default attribute.
Width -> (float)
Width of the bounding box as a ratio of the overall image width.
Height -> (float)
Height of the bounding box as a ratio of the overall image height.
Left -> (float)
Left coordinate of the bounding box as a ratio of overall image width.
Top -> (float)
Top coordinate of the bounding box as a ratio of overall image height.
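Because these values are ratios, multiply them by the frame dimensions in VideoMetadata to get pixel coordinates. For the first face in the example output above (FrameWidth 1920, FrameHeight 1080): Left ≈ -0.0952 × 1920 ≈ -183 pixels, Top ≈ 0.1356 × 1080 ≈ 146 pixels, Width ≈ 0.1561 × 1920 ≈ 300 pixels, and Height ≈ 0.6934 × 1080 ≈ 749 pixels. A negative Left value means the bounding box extends beyond the left edge of the frame.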
AgeRange -> (structure)
The estimated age range, in years, for the face. Low represents the lowest estimated age and High represents the highest estimated age.
Low -> (integer)
The lowest estimated age.
High -> (integer)
The highest estimated age.
Smile -> (structure)
Indicates whether or not the face is smiling, and the confidence level in the determination.
Value -> (boolean)
Boolean value that indicates whether the face is smiling or not.
Confidence -> (float)
Level of confidence in the determination.
Eyeglasses -> (structure)
Indicates whether or not the face is wearing eyeglasses, and the confidence level in the determination.
Value -> (boolean)
Boolean value that indicates whether the face is wearing eyeglasses or not.
Confidence -> (float)
Level of confidence in the determination.
Sunglasses -> (structure)
Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.
Value -> (boolean)
Boolean value that indicates whether the face is wearing sunglasses or not.
Confidence -> (float)
Level of confidence in the determination.
Gender -> (structure)
The predicted gender of a detected face.
Value -> (string)
The predicted gender of the face.
Confidence -> (float)
Level of confidence in the prediction.
Beard -> (structure)
Indicates whether or not the face has a beard, and the confidence level in the determination.
Value -> (boolean)
Boolean value that indicates whether the face has a beard or not.
Confidence -> (float)
Level of confidence in the determination.
Mustache -> (structure)
Indicates whether or not the face has a mustache, and the confidence level in the determination.
Value -> (boolean)
Boolean value that indicates whether the face has a mustache or not.
Confidence -> (float)
Level of confidence in the determination.
EyesOpen -> (structure)
Indicates whether or not the eyes on the face are open, and the confidence level in the determination.
Value -> (boolean)
Boolean value that indicates whether the eyes on the face are open.
Confidence -> (float)
Level of confidence in the determination.
MouthOpen -> (structure)
Indicates whether or not the mouth on the face is open, and the confidence level in the determination.
Value -> (boolean)
Boolean value that indicates whether the mouth on the face is open or not.
Confidence -> (float)
Level of confidence in the determination.
Emotions -> (list)
The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person’s face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.
(structure)
The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person’s face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.
Type -> (string)
Type of emotion detected.
Confidence -> (float)
Level of confidence in the determination.
Landmarks -> (list)
Indicates the location of landmarks on the face. Default attribute.
(structure)
Indicates the location of the landmark on the face.
Type -> (string)
Type of landmark.
X -> (float)
The x-coordinate of the landmark expressed as a ratio of the width of the image. The x-coordinate is measured from the left-side of the image. For example, if the image is 700 pixels wide and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.
Y -> (float)
The y-coordinate of the landmark expressed as a ratio of the height of the image. The y-coordinate is measured from the top of the image. For example, if the image height is 200 pixels and the y-coordinate of the landmark is at 50 pixels, this value is 0.25.
Pose -> (structure)
Indicates the pose of the face as determined by its pitch, roll, and yaw. Default attribute.
Roll -> (float)
Value representing the face rotation on the roll axis.
Yaw -> (float)
Value representing the face rotation on the yaw axis.
Pitch -> (float)
Value representing the face rotation on the pitch axis.
Quality -> (structure)
Identifies image brightness and sharpness. Default attribute.
Brightness -> (float)
Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image.
Sharpness -> (float)
Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image.
Confidence -> (float)
Confidence level that the bounding box contains a face (and not a different object such as a tree). Default attribute.