Synthesizes UTF-8 input, plain text or SSML, to a stream of bytes. SSML input must be valid, well-formed SSML. Some alphabets might not be available with all the voices (for example, Cyrillic might not be read at all by English voices) unless phoneme mapping is used. For more information, see How it Works.
See also: AWS API Documentation
See ‘aws help’ for descriptions of global parameters.
synthesize-speech
[--engine <value>]
[--language-code <value>]
[--lexicon-names <value>]
--output-format <value>
[--sample-rate <value>]
[--speech-mark-types <value>]
--text <value>
[--text-type <value>]
--voice-id <value>
<outfile>
--engine
(string)
Specifies the engine (standard or neural) for Amazon Polly to use when processing input text for speech synthesis. For information on Amazon Polly voices and which voices are available in standard-only, NTTS-only, and both standard and NTTS formats, see Available Voices.
NTTS-only voices
When using NTTS-only voices such as Kevin (en-US), this parameter is required and must be set to neural. If the engine is not specified, or is set to standard, this will result in an error.
Type: String
Valid Values: standard | neural
Required: Yes
Standard voices
For standard voices, this is not required; the engine parameter defaults to standard. If the engine is not specified, or is set to standard and an NTTS-only voice is selected, this will result in an error.
Possible values:
standard
neural
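As a sketch of the requirement above (assuming configured AWS credentials; the leading echo is kept so the command can be previewed without calling the service), an NTTS-only voice such as Kevin must be paired with --engine neural:

```shell
# Preview of a neural-engine request; drop the leading 'echo' to execute it
# against a real AWS account with Polly access.
echo aws polly synthesize-speech \
    --engine neural \
    --voice-id Kevin \
    --output-format mp3 \
    --text "Hello from a neural voice." \
    kevin.mp3
```

Omitting --engine (or passing standard) with Kevin would return an error, because Kevin has no standard-format variant.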
--language-code
(string)
Optional language code for the Synthesize Speech request. This is only necessary if using a bilingual voice, such as Aditi, which can be used for either Indian English (en-IN) or Hindi (hi-IN).
If a bilingual voice is used and no language code is specified, Amazon Polly will use the default language of the bilingual voice. The default language for any voice is the one returned by the DescribeVoices operation for the LanguageCode parameter. For example, if no language code is specified, Aditi will use Indian English rather than Hindi.
Possible values:
arb
cmn-CN
cy-GB
da-DK
de-DE
en-AU
en-GB
en-GB-WLS
en-IN
en-US
es-ES
es-MX
es-US
fr-CA
fr-FR
is-IS
it-IT
ja-JP
hi-IN
ko-KR
nb-NO
nl-NL
pl-PL
pt-BR
pt-PT
ro-RO
ru-RU
sv-SE
tr-TR
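A minimal sketch of overriding a bilingual voice's default language (preview only; drop the leading echo to run against a configured account):

```shell
# Without --language-code, Aditi would default to Indian English (en-IN);
# hi-IN selects her Hindi pronunciation instead.
echo aws polly synthesize-speech \
    --voice-id Aditi \
    --language-code hi-IN \
    --output-format mp3 \
    --text "Namaste" \
    aditi-hindi.mp3
```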
--lexicon-names
(list)
List of one or more pronunciation lexicon names you want the service to apply during synthesis. Lexicons are applied only if the language of the lexicon is the same as the language of the voice. For information about storing lexicons, see PutLexicon .
(string)
Syntax:
"string" "string" ...
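For illustration (preview only; the lexicon names here are hypothetical and would have to be stored beforehand with put-lexicon), multiple lexicons are passed as a space-separated list:

```shell
# 'w3c' and 'acronyms' are hypothetical lexicon names; they are applied only
# if their language matches the voice's language.
echo aws polly synthesize-speech \
    --lexicon-names w3c acronyms \
    --voice-id Joanna \
    --output-format mp3 \
    --text "W3C standards" \
    lexicon-demo.mp3
```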
--output-format
(string)
The format in which the returned output will be encoded. For an audio stream, this will be mp3, ogg_vorbis, or pcm. For speech marks, this will be json.
When pcm is used, the content returned is audio/pcm in a signed 16-bit, 1 channel (mono), little-endian format.
Possible values:
json
mp3
ogg_vorbis
pcm
--sample-rate
(string)
The audio frequency specified in Hz.
The valid values for mp3 and ogg_vorbis are “8000”, “16000”, “22050”, and “24000”. The default value for standard voices is “22050”. The default value for neural voices is “24000”.
Valid values for pcm are “8000” and “16000”. The default value is “16000”.
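A sketch of a raw-audio request honoring those constraints (preview only; drop the leading echo to execute with configured credentials):

```shell
# pcm output is raw signed 16-bit little-endian mono samples, so the sample
# rate must be one of the pcm values ("8000" or "16000").
echo aws polly synthesize-speech \
    --output-format pcm \
    --sample-rate 16000 \
    --voice-id Joanna \
    --text "Raw audio example." \
    speech.raw
```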
--speech-mark-types
(list)
The type of speech marks returned for the input text.
(string)
Syntax:
"string" "string" ...
Where valid values are:
sentence
ssml
viseme
word
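As a sketch (preview only; drop the leading echo to run for real), requesting speech marks pairs this parameter with json output:

```shell
# Speech marks require --output-format json; the result is metadata (one JSON
# object per line describing each mark), not audio.
echo aws polly synthesize-speech \
    --output-format json \
    --speech-mark-types sentence word \
    --voice-id Joanna \
    --text "Hello world." \
    marks.json
```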
--text
(string)
Input text to synthesize. If you specify ssml as the TextType, follow the SSML format for the input text.
--text-type
(string)
Specifies whether the input text is plain text or SSML. The default value is plain text. For more information, see Using SSML .
Possible values:
ssml
text
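A minimal SSML sketch tying --text-type and --text together (preview only; drop the leading echo to execute with configured credentials):

```shell
# With --text-type ssml, the input must be valid SSML wrapped in <speak> tags.
echo aws polly synthesize-speech \
    --text-type ssml \
    --text "<speak>Hello <break time='300ms'/> world.</speak>" \
    --voice-id Joanna \
    --output-format mp3 \
    ssml-demo.mp3
```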
--voice-id
(string)
Voice ID to use for the synthesis. You can get a list of available voice IDs by calling the DescribeVoices operation.
Possible values:
Aditi
Amy
Astrid
Bianca
Brian
Camila
Carla
Carmen
Celine
Chantal
Conchita
Cristiano
Dora
Emma
Enrique
Ewa
Filiz
Geraint
Giorgio
Gwyneth
Hans
Ines
Ivy
Jacek
Jan
Joanna
Joey
Justin
Karl
Kendra
Kevin
Kimberly
Lea
Liv
Lotte
Lucia
Lupe
Mads
Maja
Marlene
Mathieu
Matthew
Maxim
Mia
Miguel
Mizuki
Naja
Nicole
Olivia
Penelope
Raveena
Ricardo
Ruben
Russell
Salli
Seoyeon
Takumi
Tatyana
Vicki
Vitoria
Zeina
Zhiyu
outfile
(string)
Filename where the content will be saved.
AudioStream -> (blob)
Stream containing the synthesized speech.
ContentType -> (string)
Specifies the type of audio stream. This should reflect the OutputFormat parameter in your request.
If you request mp3 as the OutputFormat, the ContentType returned is audio/mpeg.
If you request ogg_vorbis as the OutputFormat, the ContentType returned is audio/ogg.
If you request pcm as the OutputFormat, the ContentType returned is audio/pcm in a signed 16-bit, 1 channel (mono), little-endian format.
If you request json as the OutputFormat, the ContentType returned is audio/json.
RequestCharacters -> (integer)
Number of characters synthesized.
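Putting the pieces together (preview only; drop the leading echo to run with configured credentials): the audio stream is written to the outfile, while the remaining output fields are printed to stdout.

```shell
# When run for real, the audio bytes go to hello.mp3 and the CLI prints the
# metadata fields as JSON, along the lines of:
#   {
#       "ContentType": "audio/mpeg",
#       "RequestCharacters": 25
#   }
echo aws polly synthesize-speech \
    --output-format mp3 \
    --voice-id Joanna \
    --text "Hello, my name is Joanna." \
    hello.mp3
```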