[ aws . transcribe ]
Starts an asynchronous analytics job that not only transcribes the audio recording of a caller and agent, but also returns additional insights. These insights include how quickly or loudly the caller or agent was speaking. To retrieve additional insights with your analytics jobs, create categories. A category is a way to classify analytics jobs based on attributes, such as a customer’s sentiment or a particular phrase being used during the call. For more information, see the CreateCallAnalyticsCategory operation.
See also: AWS API Documentation
See ‘aws help’ for descriptions of global parameters.
start-call-analytics-job
--call-analytics-job-name <value>
--media <value>
[--output-location <value>]
[--output-encryption-kms-key-id <value>]
--data-access-role-arn <value>
[--settings <value>]
[--channel-definitions <value>]
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
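For example, the following is a minimal sketch of a complete invocation for a two-channel recording. The job name, bucket, object key, and role ARN are placeholder values; replace them with your own:
# all names below are placeholders
aws transcribe start-call-analytics-job \
    --call-analytics-job-name example-call-1 \
    --media MediaFileUri=s3://DOC-EXAMPLE-BUCKET1/example-call.wav \
    --data-access-role-arn arn:aws:iam::111122223333:role/ExampleTranscribeRole \
    --channel-definitions ChannelId=0,ParticipantRole=AGENT ChannelId=1,ParticipantRole=CUSTOMER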
--call-analytics-job-name
(string)
The name of the call analytics job. You can’t use the string “.” or “..” by themselves as the job name. The name must also be unique within an Amazon Web Services account. If you try to create a call analytics job with the same name as a previous call analytics job, you get a ConflictException error.
--media
(structure)
Describes the input media file in a transcription request.
MediaFileUri -> (string)
The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
s3://<AWSDOC-EXAMPLE-BUCKET>/<keyprefix>/<objectkey>
For example:
s3://AWSDOC-EXAMPLE-BUCKET/example.mp4
s3://AWSDOC-EXAMPLE-BUCKET/mediadocs/example.mp4
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide .
RedactedMediaFileUri -> (string)
The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Shorthand Syntax:
MediaFileUri=string,RedactedMediaFileUri=string
JSON Syntax:
{
"MediaFileUri": "string",
"RedactedMediaFileUri": "string"
}
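For example, in shorthand syntax, for a recording stored in S3 (the bucket and key below are placeholder values):
--media MediaFileUri=s3://DOC-EXAMPLE-BUCKET1/calls/example-call.wav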
--output-location
(string)
The Amazon S3 location where the output of the call analytics job is stored. You can provide the following location types to store the output of a call analytics job:
s3://DOC-EXAMPLE-BUCKET1 If you specify a bucket, Amazon Transcribe saves the output of the analytics job as a JSON file at the root level of the bucket.
s3://DOC-EXAMPLE-BUCKET1/folder/ If you specify a path, Amazon Transcribe saves the output of the analytics job as s3://DOC-EXAMPLE-BUCKET1/folder/your-transcription-job-name.json. If you specify a folder, you must provide a trailing slash.
s3://DOC-EXAMPLE-BUCKET1/folder/filename.json If you provide a path that has the filename specified, Amazon Transcribe saves the output of the analytics job as s3://DOC-EXAMPLE-BUCKET1/folder/filename.json.
You can specify an Amazon Web Services Key Management Service (KMS) key to encrypt the output of your analytics job using the OutputEncryptionKMSKeyId parameter. If you don’t specify a KMS key, Amazon Transcribe uses the default Amazon S3 key for server-side encryption of the analytics job output that is placed in your S3 bucket.
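For example, to save the output under a folder in a bucket you own (placeholder names; note the trailing slash):
--output-location s3://DOC-EXAMPLE-BUCKET1/analytics-output/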
--output-encryption-kms-key-id
(string)
The Amazon Resource Name (ARN) of the Amazon Web Services Key Management Service key used to encrypt the output of the call analytics job. The user calling the operation must have permission to use the specified KMS key.
You can use either of the following to identify an Amazon Web Services KMS key in the current account:
KMS Key ID: “1234abcd-12ab-34cd-56ef-1234567890ab”
KMS Key Alias: “alias/ExampleAlias”
You can use either of the following to identify a KMS key in the current account or another account:
Amazon Resource Name (ARN) of a KMS key in the current account or another account: “arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab”
ARN of a KMS Key Alias: “arn:aws:kms:region:account-ID:alias/ExampleAlias”
If you don’t specify an encryption key, the output of the call analytics job is encrypted with the default Amazon S3 key (SSE-S3).
If you specify a KMS key to encrypt your output, you must also specify an output location in the OutputLocation parameter.
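For example, to encrypt the output with a KMS key alias in your account (placeholder values; as noted above, a KMS key requires that you also set an output location):
--output-location s3://DOC-EXAMPLE-BUCKET1/analytics-output/ --output-encryption-kms-key-id alias/ExampleAlias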
--data-access-role-arn
(string)
The Amazon Resource Name (ARN) of a role that has access to the S3 bucket that contains your input files. Amazon Transcribe assumes this role to read queued audio files. If you have specified an output S3 bucket for your transcription results, this role should have access to the output bucket as well.
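For example (placeholder account ID and role name):
--data-access-role-arn arn:aws:iam::111122223333:role/ExampleTranscribeRole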
--settings
(structure)
A Settings object that provides optional settings for a call analytics job.
VocabularyName -> (string)
The name of a vocabulary to use when processing the call analytics job.
VocabularyFilterName -> (string)
The name of the vocabulary filter to use when running a call analytics job. The filter that you specify must have the same language code as the analytics job.
VocabularyFilterMethod -> (string)
Set to mask to remove filtered text from the transcript and replace it with three asterisks (“***”) as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
LanguageModelName -> (string)
The name of the custom language model to use when processing the call analytics job.
ContentRedaction -> (structure)
Settings for content redaction within a transcription job.
RedactionType -> (string)
Request parameter that defines the entities to be redacted. The only accepted value is PII.
RedactionOutput -> (string)
The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted, Amazon Transcribe outputs only the redacted transcript. When you choose redacted_and_unredacted, Amazon Transcribe outputs both the redacted and unredacted transcripts.
PiiEntityTypes -> (list)
The types of personally identifiable information (PII) you want to redact in your transcript.
(string)
LanguageOptions -> (list)
When you run a call analytics job, you can specify the language spoken in the audio, or you can have Amazon Transcribe identify the language for you.
To specify a language, specify an array with one language code. If you don’t know the language, you can leave this field blank and Amazon Transcribe will use machine learning to identify the language for you. To improve the ability of Amazon Transcribe to correctly identify the language, you can provide an array of the languages that can be present in the audio. Refer to Supported languages for additional information.
(string)
LanguageIdSettings -> (map)
The language identification settings associated with your call analytics job. These settings include VocabularyName, VocabularyFilterName, and LanguageModelName.
key -> (string)
value -> (structure)
Language-specific settings that can be specified when language identification is enabled.
VocabularyName -> (string)
The name of the vocabulary you want to use when processing your transcription job. The vocabulary you specify must have the same language codes as the transcription job; if the languages don’t match, the vocabulary isn’t applied.
VocabularyFilterName -> (string)
The name of the vocabulary filter you want to use when transcribing your audio. The filter you specify must have the same language codes as the transcription job; if the languages don’t match, the vocabulary filter isn’t applied.
LanguageModelName -> (string)
The name of the language model you want to use when transcribing your audio. The model you specify must have the same language codes as the transcription job; if the languages don’t match, the language model isn’t applied.
Shorthand Syntax:
VocabularyName=string,VocabularyFilterName=string,VocabularyFilterMethod=string,LanguageModelName=string,ContentRedaction={RedactionType=string,RedactionOutput=string,PiiEntityTypes=[string,string]},LanguageOptions=string,string,LanguageIdSettings={KeyName1={VocabularyName=string,VocabularyFilterName=string,LanguageModelName=string},KeyName2={VocabularyName=string,VocabularyFilterName=string,LanguageModelName=string}}
JSON Syntax:
{
"VocabularyName": "string",
"VocabularyFilterName": "string",
"VocabularyFilterMethod": "remove"|"mask"|"tag",
"LanguageModelName": "string",
"ContentRedaction": {
"RedactionType": "PII",
"RedactionOutput": "redacted"|"redacted_and_unredacted",
"PiiEntityTypes": ["BANK_ACCOUNT_NUMBER"|"BANK_ROUTING"|"CREDIT_DEBIT_NUMBER"|"CREDIT_DEBIT_CVV"|"CREDIT_DEBIT_EXPIRY"|"PIN"|"EMAIL"|"ADDRESS"|"NAME"|"PHONE"|"SSN"|"ALL", ...]
},
"LanguageOptions": ["af-ZA"|"ar-AE"|"ar-SA"|"cy-GB"|"da-DK"|"de-CH"|"de-DE"|"en-AB"|"en-AU"|"en-GB"|"en-IE"|"en-IN"|"en-US"|"en-WL"|"es-ES"|"es-US"|"fa-IR"|"fr-CA"|"fr-FR"|"ga-IE"|"gd-GB"|"he-IL"|"hi-IN"|"id-ID"|"it-IT"|"ja-JP"|"ko-KR"|"ms-MY"|"nl-NL"|"pt-BR"|"pt-PT"|"ru-RU"|"ta-IN"|"te-IN"|"tr-TR"|"zh-CN"|"zh-TW"|"th-TH"|"en-ZA"|"en-NZ", ...],
"LanguageIdSettings": {"af-ZA"|"ar-AE"|"ar-SA"|"cy-GB"|"da-DK"|"de-CH"|"de-DE"|"en-AB"|"en-AU"|"en-GB"|"en-IE"|"en-IN"|"en-US"|"en-WL"|"es-ES"|"es-US"|"fa-IR"|"fr-CA"|"fr-FR"|"ga-IE"|"gd-GB"|"he-IL"|"hi-IN"|"id-ID"|"it-IT"|"ja-JP"|"ko-KR"|"ms-MY"|"nl-NL"|"pt-BR"|"pt-PT"|"ru-RU"|"ta-IN"|"te-IN"|"tr-TR"|"zh-CN"|"zh-TW"|"th-TH"|"en-ZA"|"en-NZ": {
"VocabularyName": "string",
"VocabularyFilterName": "string",
"LanguageModelName": "string"
}
...}
}
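For example, the following shorthand masks filtered words and redacts all PII in the output (a sketch; the vocabulary filter name is a placeholder that must already exist in your account). Depending on your shell, you may need to quote the value because of the braces:
--settings 'VocabularyFilterName=example-filter,VocabularyFilterMethod=mask,ContentRedaction={RedactionType=PII,RedactionOutput=redacted,PiiEntityTypes=[ALL]}'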
--channel-definitions
(list)
When you start a call analytics job, you must pass an array that maps the agent and the customer to specific audio channels. The values you can assign to a channel are 0 and 1. The agent and the customer must each have their own channel. You can’t assign more than one channel to an agent or customer.
(structure)
For a call analytics job, an object that indicates the audio channel that belongs to the agent and the audio channel that belongs to the customer.
ChannelId -> (integer)
A value that indicates the audio channel.
ParticipantRole -> (string)
Indicates whether the person speaking on the audio channel is the agent or customer.
Shorthand Syntax:
ChannelId=integer,ParticipantRole=string ...
JSON Syntax:
[
{
"ChannelId": integer,
"ParticipantRole": "AGENT"|"CUSTOMER"
}
...
]
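For example, to assign the agent to channel 0 and the customer to channel 1 using shorthand syntax:
--channel-definitions ChannelId=0,ParticipantRole=AGENT ChannelId=1,ParticipantRole=CUSTOMER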
--cli-input-json
| --cli-input-yaml
(string)
Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.
--generate-cli-skeleton
(string)
Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
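For example, a common pattern is to generate an input skeleton, edit it, and pass the file back in (job.json is a placeholder file name):
aws transcribe start-call-analytics-job --generate-cli-skeleton input > job.json
# edit job.json, then:
aws transcribe start-call-analytics-job --cli-input-json file://job.json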
See ‘aws help’ for descriptions of global parameters.
CallAnalyticsJob -> (structure)
An object containing the details of the asynchronous call analytics job.
CallAnalyticsJobName -> (string)
The name of the call analytics job.
CallAnalyticsJobStatus -> (string)
The status of the analytics job.
LanguageCode -> (string)
If you know the language spoken between the customer and the agent, specify a language code for this field.
If you don’t know the language, you can leave this field blank, and Amazon Transcribe will use machine learning to automatically identify the language. To improve the accuracy of language identification, you can provide an array containing the possible language codes for the language spoken in your audio. Refer to Supported languages for additional information.
MediaSampleRateHertz -> (integer)
The sample rate, in Hertz, of the audio.
MediaFormat -> (string)
The format of the input audio file. Note: for call analytics jobs, only the following media formats are supported: MP3, MP4, WAV, FLAC, OGG, and WebM.
Media -> (structure)
Describes the input media file in a transcription request.
MediaFileUri -> (string)
The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
s3://<AWSDOC-EXAMPLE-BUCKET>/<keyprefix>/<objectkey>
For example:
s3://AWSDOC-EXAMPLE-BUCKET/example.mp4
s3://AWSDOC-EXAMPLE-BUCKET/mediadocs/example.mp4
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide .
RedactedMediaFileUri -> (string)
The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript -> (structure)
Identifies the location of a transcription.
TranscriptFileUri -> (string)
The S3 object location of the transcript.
Use this URI to access the transcript. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
RedactedTranscriptFileUri -> (string)
The S3 object location of the redacted transcript.
Use this URI to access the redacted transcript. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
StartTime -> (timestamp)
A timestamp that shows when the analytics job started processing.
CreationTime -> (timestamp)
A timestamp that shows when the analytics job was created.
CompletionTime -> (timestamp)
A timestamp that shows when the analytics job was completed.
FailureReason -> (string)
If the AnalyticsJobStatus is FAILED, this field contains information about why the job failed.
The FailureReason field can contain one of the following values:
Unsupported media format: The media format specified in the MediaFormat field of the request isn’t valid. See the description of the MediaFormat field for a list of valid values.
The media format provided does not match the detected media format: The media format of the audio file doesn’t match the format specified in the MediaFormat field in the request. Check the media format of your media file and make sure the two values match.
Invalid sample rate for audio file: The sample rate specified in the MediaSampleRateHertz of the request isn’t valid. The sample rate must be between 8,000 and 48,000 Hertz.
The sample rate provided does not match the detected sample rate: The sample rate in the audio file doesn’t match the sample rate specified in the MediaSampleRateHertz field in the request. Check the sample rate of your media file and make sure that the two values match.
Invalid file size: file size too large: The size of your audio file is larger than what Amazon Transcribe Medical can process. For more information, see Guidelines and Quotas in the Amazon Transcribe Medical Guide.
Invalid number of channels: number of channels too large: Your audio contains more channels than Amazon Transcribe Medical is configured to process. To request additional channels, see Amazon Transcribe Medical Endpoints and Quotas in the Amazon Web Services General Reference.
DataAccessRoleArn -> (string)
The Amazon Resource Name (ARN) that you use to access the analytics job. ARNs have the format arn:partition:service:region:account-id:resource-type/resource-id.
IdentifiedLanguageScore -> (float)
A value between zero and one that Amazon Transcribe assigned to the language that it identified in the source audio. This value appears only when you don’t provide a single language code. Larger values indicate that Amazon Transcribe has higher confidence in the language that it identified.
Settings -> (structure)
Provides information about the settings used to run a transcription job.
VocabularyName -> (string)
The name of a vocabulary to use when processing the call analytics job.
VocabularyFilterName -> (string)
The name of the vocabulary filter to use when running a call analytics job. The filter that you specify must have the same language code as the analytics job.
VocabularyFilterMethod -> (string)
Set to mask to remove filtered text from the transcript and replace it with three asterisks (“***”) as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
LanguageModelName -> (string)
The name of the custom language model to use when processing the call analytics job.
ContentRedaction -> (structure)
Settings for content redaction within a transcription job.
RedactionType -> (string)
Request parameter that defines the entities to be redacted. The only accepted value is PII.
RedactionOutput -> (string)
The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted, Amazon Transcribe outputs only the redacted transcript. When you choose redacted_and_unredacted, Amazon Transcribe outputs both the redacted and unredacted transcripts.
PiiEntityTypes -> (list)
The types of personally identifiable information (PII) you want to redact in your transcript.
(string)
LanguageOptions -> (list)
When you run a call analytics job, you can specify the language spoken in the audio, or you can have Amazon Transcribe identify the language for you.
To specify a language, specify an array with one language code. If you don’t know the language, you can leave this field blank and Amazon Transcribe will use machine learning to identify the language for you. To improve the ability of Amazon Transcribe to correctly identify the language, you can provide an array of the languages that can be present in the audio. Refer to Supported languages for additional information.
(string)
LanguageIdSettings -> (map)
The language identification settings associated with your call analytics job. These settings include VocabularyName, VocabularyFilterName, and LanguageModelName.
key -> (string)
value -> (structure)
Language-specific settings that can be specified when language identification is enabled.
VocabularyName -> (string)
The name of the vocabulary you want to use when processing your transcription job. The vocabulary you specify must have the same language codes as the transcription job; if the languages don’t match, the vocabulary isn’t applied.
VocabularyFilterName -> (string)
The name of the vocabulary filter you want to use when transcribing your audio. The filter you specify must have the same language codes as the transcription job; if the languages don’t match, the vocabulary filter isn’t applied.
LanguageModelName -> (string)
The name of the language model you want to use when transcribing your audio. The model you specify must have the same language codes as the transcription job; if the languages don’t match, the language model isn’t applied.
ChannelDefinitions -> (list)
Shows numeric values to indicate the channel assigned to the agent’s audio and the channel assigned to the customer’s audio.
(structure)
For a call analytics job, an object that indicates the audio channel that belongs to the agent and the audio channel that belongs to the customer.
ChannelId -> (integer)
A value that indicates the audio channel.
ParticipantRole -> (string)
Indicates whether the person speaking on the audio channel is the agent or customer.
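Because the job runs asynchronously, you can check its status afterward with the corresponding get operation (a sketch, assuming the get-call-analytics-job subcommand is available in your CLI version; the job name is a placeholder):
aws transcribe get-call-analytics-job --call-analytics-job-name example-call-1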