analyze-document

Description

Analyzes an input document for relationships between detected items.

The types of information returned are as follows:

  • Form data (key-value pairs). The related information is returned in two Block objects, each of type KEY_VALUE_SET : a KEY Block object and a VALUE Block object. For example, Name: Ana Silva Carolina contains a key and value. Name: is the key. Ana Silva Carolina is the value.

  • Table and table cell data. A TABLE Block object contains information about a detected table. A CELL Block object is returned for each cell in a table.

  • Lines and words of text. A LINE Block object contains one or more WORD Block objects. All lines and words that are detected in the document are returned (including text that doesn’t have a relationship with the value of FeatureTypes ).

Selection elements such as check boxes and option buttons (radio buttons) can be detected in form data and in tables. A SELECTION_ELEMENT Block object contains information about a selection element, including the selection status.

You can choose which type of analysis to perform by specifying the FeatureTypes list.

The output is returned in a list of Block objects.

AnalyzeDocument is a synchronous operation. To analyze documents asynchronously, use StartDocumentAnalysis .
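
To run the same kind of analysis asynchronously, a typical flow is to start an analysis job and then fetch the results by job ID. A minimal sketch (the bucket name, object key, and job ID below are placeholders):

aws textract start-document-analysis --document-location '{"S3Object":{"Bucket":"bucket","Name":"document.pdf"}}' --feature-types '["TABLES","FORMS"]'

The JobId returned by the first call is then passed to the second:

aws textract get-document-analysis --job-id "example-job-id"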

For more information, see Document Text Analysis .

See also: AWS API Documentation

See ‘aws help’ for descriptions of global parameters.

Synopsis

  analyze-document
--document <value>
--feature-types <value>
[--human-loop-config <value>]
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]

Options

--document (structure)

The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Textract operations, you can’t pass image bytes. The document must be an image in JPEG or PNG format.

If you’re using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes that are passed using the Bytes field.

Bytes -> (blob)

A blob of base64-encoded document bytes. The maximum size of a document that’s provided in a blob of bytes is 5 MB. The document bytes must be in PNG or JPEG format.

If you’re using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes passed using the Bytes field.

S3Object -> (structure)

Identifies an S3 object as the document source. The maximum size of a document that’s stored in an S3 bucket is 5 MB.

Bucket -> (string)

The name of the S3 bucket.

Name -> (string)

The file name of the input document. Synchronous operations can use image files that are in JPEG or PNG format. Asynchronous operations also support PDF format files.

Version -> (string)

If the bucket has versioning enabled, you can specify the object version.

Shorthand Syntax:

Bytes=blob,S3Object={Bucket=string,Name=string,Version=string}

JSON Syntax:

{
  "Bytes": blob,
  "S3Object": {
    "Bucket": "string",
    "Name": "string",
    "Version": "string"
  }
}
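
For example, to reference a document stored in Amazon S3 with the shorthand syntax (the bucket and object names below are placeholders):

--document 'S3Object={Bucket=bucket,Name=document.png}'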

--feature-types (list)

A list of the types of analysis to perform. Add TABLES to the list to return information about the tables that are detected in the input document. Add FORMS to return detected form data. To perform both types of analysis, add TABLES and FORMS to FeatureTypes . All lines and words detected in the document are included in the response (including text that isn’t related to the value of FeatureTypes ).

(string)

Syntax:

"string" "string" ...

Where valid values are:
  TABLES
  FORMS
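
For example, to request both table and form analysis, the values can be passed as space-separated shorthand:

--feature-types TABLES FORMS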

--human-loop-config (structure)

Sets the configuration for the human-in-the-loop workflow for analyzing documents.

HumanLoopName -> (string)

The name of the human workflow used for this image. The name should be unique within an AWS Region.

FlowDefinitionArn -> (string)

The Amazon Resource Name (ARN) of the flow definition.

DataAttributes -> (structure)

Sets attributes of the input data.

ContentClassifiers -> (list)

Sets whether the input image is free of personally identifiable information or adult content.

(string)

Shorthand Syntax:

HumanLoopName=string,FlowDefinitionArn=string,DataAttributes={ContentClassifiers=[string,string]}

JSON Syntax:

{
  "HumanLoopName": "string",
  "FlowDefinitionArn": "string",
  "DataAttributes": {
    "ContentClassifiers": ["FreeOfPersonallyIdentifiableInformation"|"FreeOfAdultContent", ...]
  }
}
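
For example, a sketch of the shorthand syntax (the human loop name, Region, account ID, and flow definition name below are placeholders):

--human-loop-config 'HumanLoopName=my-textract-review-loop,FlowDefinitionArn=arn:aws:sagemaker:us-east-1:111122223333:flow-definition/my-flow-definition,DataAttributes={ContentClassifiers=[FreeOfPersonallyIdentifiableInformation]}'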

--cli-input-json | --cli-input-yaml (string) Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.

--generate-cli-skeleton (string) Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
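
For example, a common pattern is to write an input skeleton to a file, fill in its values, and then pass the file back to the command (the file name below is arbitrary):

aws textract analyze-document --generate-cli-skeleton input > analyze-document-input.json

aws textract analyze-document --cli-input-json file://analyze-document-input.json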

See ‘aws help’ for descriptions of global parameters.

Examples

To analyze text in a document

The following analyze-document example shows how to analyze text in a document.

aws textract analyze-document --document '{"S3Object":{"Bucket":"bucket","Name":"document"}}' --feature-types '["TABLES","FORMS"]'

Output

{
    "Blocks": [
        {
            "Geometry": {
                "BoundingBox": {
                    "Width": 1.0,
                    "Top": 0.0,
                    "Left": 0.0,
                    "Height": 1.0
                },
                "Polygon": [
                    {
                        "Y": 0.0,
                        "X": 0.0
                    },
                    {
                        "Y": 0.0,
                        "X": 1.0
                    },
                    {
                        "Y": 1.0,
                        "X": 1.0
                    },
                    {
                        "Y": 1.0,
                        "X": 0.0
                    }
                ]
            },
            "Relationships": [
                {
                    "Type": "CHILD",
                    "Ids": [
                        "87586964-d50d-43e2-ace5-8a890657b9a0",
                        "a1e72126-21d9-44f4-a8d6-5c385f9002ba",
                        "e889d012-8a6b-4d2e-b7cd-7a8b327d876a"
                    ]
                }
            ],
            "BlockType": "PAGE",
            "Id": "c2227f12-b25d-4e1f-baea-1ee180d926b2"
        }
    ],
    "DocumentMetadata": {
        "Pages": 1
    }
}
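
The response can also be filtered on the client side with the global --query option. For example, the following sketch prints only the text of the detected WORD blocks (the bucket and object names are placeholders):

aws textract analyze-document --document '{"S3Object":{"Bucket":"bucket","Name":"document"}}' --feature-types '["TABLES","FORMS"]' --query 'Blocks[?BlockType==`WORD`].Text' --output text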

For more information, see Analyzing Document Text with Amazon Textract in the Amazon Textract Developer Guide.

Output

DocumentMetadata -> (structure)

Metadata about the analyzed document. An example is the number of pages.

Pages -> (integer)

The number of pages that are detected in the document.

Blocks -> (list)

The items that are detected and analyzed by AnalyzeDocument .

(structure)

A Block represents items that are recognized in a document within a group of pixels close to each other. The information returned in a Block object depends on the type of operation. In text detection for documents (for example DetectDocumentText ), you get information about the detected words and lines of text. In text analysis (for example AnalyzeDocument ), you can also get information about the fields, tables, and selection elements that are detected in the document.

An array of Block objects is returned by both synchronous and asynchronous operations. In synchronous operations, such as DetectDocumentText , the array of Block objects is the entire set of results. In asynchronous operations, such as GetDocumentAnalysis , the array is returned over one or more responses.

For more information, see How Amazon Textract Works .

BlockType -> (string)

The type of text item that’s recognized. In operations for text detection, the following types are returned:

  • PAGE - Contains a list of the LINE Block objects that are detected on a document page.

  • WORD - A word detected on a document page. A word is one or more ISO basic Latin script characters that aren’t separated by spaces.

  • LINE - A string of tab-delimited, contiguous words that are detected on a document page.

In text analysis operations, the following types are returned:

  • PAGE - Contains a list of child Block objects that are detected on a document page.

  • KEY_VALUE_SET - Stores the KEY and VALUE Block objects for linked text that’s detected on a document page. Use the EntityType field to determine if a KEY_VALUE_SET object is a KEY Block object or a VALUE Block object.

  • WORD - A word that’s detected on a document page. A word is one or more ISO basic Latin script characters that aren’t separated by spaces.

  • LINE - A string of tab-delimited, contiguous words that are detected on a document page.

  • TABLE - A table that’s detected on a document page. A table is grid-based information with two or more rows or columns, with a cell span of one row and one column each.

  • CELL - A cell within a detected table. The cell is the parent of the block that contains the text in the cell.

  • SELECTION_ELEMENT - A selection element such as an option button (radio button) or a check box that’s detected on a document page. Use the value of SelectionStatus to determine the status of the selection element.

Confidence -> (float)

The confidence score that Amazon Textract has in the accuracy of the recognized text and the accuracy of the geometry points around the recognized text.

Text -> (string)

The word or line of text that’s recognized by Amazon Textract.

TextType -> (string)

The kind of text that Amazon Textract has detected. Indicates whether the detected text is handwritten or printed.

RowIndex -> (integer)

The row in which a table cell is located. The first row position is 1. RowIndex isn’t returned by DetectDocumentText and GetDocumentTextDetection .

ColumnIndex -> (integer)

The column in which a table cell appears. The first column position is 1. ColumnIndex isn’t returned by DetectDocumentText and GetDocumentTextDetection .

RowSpan -> (integer)

The number of rows that a table cell spans. Currently this value is always 1, even if the number of rows spanned is greater than 1. RowSpan isn’t returned by DetectDocumentText and GetDocumentTextDetection .

ColumnSpan -> (integer)

The number of columns that a table cell spans. Currently this value is always 1, even if the number of columns spanned is greater than 1. ColumnSpan isn’t returned by DetectDocumentText and GetDocumentTextDetection .

Geometry -> (structure)

The location of the recognized text on the image. It includes an axis-aligned, coarse bounding box that surrounds the text, and a finer-grain polygon for more accurate spatial information.

BoundingBox -> (structure)

An axis-aligned coarse representation of the location of the recognized item on the document page.

Width -> (float)

The width of the bounding box as a ratio of the overall document page width.

Height -> (float)

The height of the bounding box as a ratio of the overall document page height.

Left -> (float)

The left coordinate of the bounding box as a ratio of overall document page width.

Top -> (float)

The top coordinate of the bounding box as a ratio of overall document page height.

Polygon -> (list)

Within the bounding box, a fine-grained polygon around the recognized item.

(structure)

The X and Y coordinates of a point on a document page. The X and Y values that are returned are ratios of the overall document page size. For example, if the input document is 700 x 200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the document page.

An array of Point objects, Polygon , is returned by DetectDocumentText . Polygon represents a fine-grained polygon around detected text. For more information, see Geometry in the Amazon Textract Developer Guide.

X -> (float)

The value of the X coordinate for a point on a Polygon .

Y -> (float)

The value of the Y coordinate for a point on a Polygon .

Id -> (string)

The identifier for the recognized text. The identifier is only unique for a single operation.

Relationships -> (list)

A list of child blocks of the current block. For example, a LINE object has child blocks for each WORD block that’s part of the line of text. There aren’t Relationship objects in the list for relationships that don’t exist, such as when the current block has no child blocks. The list size can be the following:

  • 0 - The block has no child blocks.

  • 1 - The block has child blocks.

(structure)

Information about how blocks are related to each other. A Block object contains zero or more Relationship objects in a list, Relationships. For more information, see Block.

The Type element provides the type of the relationship for all blocks in the IDs array.

Type -> (string)

The type of relationship that the blocks in the IDs array have with the current block. The relationship can be VALUE or CHILD. A relationship of type VALUE is a list that contains the ID of the VALUE block that's associated with the KEY of a key-value pair. A relationship of type CHILD is a list of IDs that identify WORD blocks in the case of lines, CELL blocks in the case of tables, and WORD blocks in the case of selection elements.
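
For example, the following sketch uses the global --query option to list each KEY_VALUE_SET block with its entity types and relationships; the returned IDs can then be joined against other blocks to pair keys with values (the bucket and object names are placeholders):

aws textract analyze-document --document '{"S3Object":{"Bucket":"bucket","Name":"document"}}' --feature-types '["FORMS"]' --query 'Blocks[?BlockType==`KEY_VALUE_SET`].{EntityTypes:EntityTypes,Relationships:Relationships}'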

Ids -> (list)

An array of IDs for related blocks. You can get the type of the relationship from the Type element.

(string)

EntityTypes -> (list)

The type of entity. The following can be returned:

  • KEY - An identifier for a field on the document.

  • VALUE - The field text.

EntityTypes isn’t returned by DetectDocumentText and GetDocumentTextDetection .

(string)

SelectionStatus -> (string)

The selection status of a selection element, such as an option button or check box.

Page -> (integer)

The page on which a block was detected. Page is returned by asynchronous operations. Page values greater than 1 are only returned for multipage documents that are in PDF format. A scanned image (JPEG/PNG), even if it contains multiple document pages, is considered to be a single-page document. The value of Page is always 1. Synchronous operations don’t return Page because every input document is considered to be a single-page document.

HumanLoopActivationOutput -> (structure)

Shows the results of the human-in-the-loop evaluation.

HumanLoopArn -> (string)

The Amazon Resource Name (ARN) of the HumanLoop created.

HumanLoopActivationReasons -> (list)

Shows if and why human review was needed.

(string)

HumanLoopActivationConditionsEvaluationResults -> (string)

Shows the result of condition evaluations, including those conditions which activated a human review.

AnalyzeDocumentModelVersion -> (string)

The version of the model used to analyze the document.