// Code generated by smithy-go-codegen DO NOT EDIT.
package polly
import (
"context"
awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
"github.com/aws/aws-sdk-go-v2/aws/signer/v4"
"github.com/aws/aws-sdk-go-v2/service/polly/types"
"github.com/aws/smithy-go/middleware"
smithyhttp "github.com/aws/smithy-go/transport/http"
"io"
)
// Synthesizes UTF-8 input, plain text or SSML, to a stream of bytes. SSML input
// must be valid, well-formed SSML. Some alphabets might not be available with all
// the voices (for example, Cyrillic might not be read at all by English voices)
// unless phoneme mapping is used. For more information, see How it Works
// (https://docs.aws.amazon.com/polly/latest/dg/how-text-to-speech-works.html).
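//
// Usage (a minimal sketch, not part of the generated file; cfg and ctx are
// assumed to have been created by the caller, for example via
// config.LoadDefaultConfig, and aws.String comes from the aws package):
//
//	client := polly.NewFromConfig(cfg)
//	out, err := client.SynthesizeSpeech(ctx, &polly.SynthesizeSpeechInput{
//		OutputFormat: types.OutputFormatMp3,
//		Text:         aws.String("Hello from Amazon Polly."),
//		VoiceId:      types.VoiceIdJoanna,
//	})
//	if err != nil {
//		// handle the error
//	}
//	defer out.AudioStream.Close()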
func (c *Client) SynthesizeSpeech(ctx context.Context, params *SynthesizeSpeechInput, optFns ...func(*Options)) (*SynthesizeSpeechOutput, error) {
if params == nil {
params = &SynthesizeSpeechInput{}
}
result, metadata, err := c.invokeOperation(ctx, "SynthesizeSpeech", params, optFns, c.addOperationSynthesizeSpeechMiddlewares)
if err != nil {
return nil, err
}
out := result.(*SynthesizeSpeechOutput)
out.ResultMetadata = metadata
return out, nil
}
type SynthesizeSpeechInput struct {
// The format in which the returned output will be encoded. For an audio stream,
// this will be mp3, ogg_vorbis, or pcm. For speech marks, this will be json. When
// pcm is used, the content returned is audio/pcm in a signed 16-bit, 1 channel
// (mono), little-endian format.
//
// This member is required.
OutputFormat types.OutputFormat
// Input text to synthesize. If you specify ssml as the TextType, follow the SSML
// format for the input text.
//
// This member is required.
Text *string
// Voice ID to use for the synthesis. You can get a list of available voice IDs by
// calling the DescribeVoices
// (https://docs.aws.amazon.com/polly/latest/dg/API_DescribeVoices.html) operation.
//
// This member is required.
VoiceId types.VoiceId
// Specifies the engine (standard or neural) for Amazon Polly to use when
// processing input text for speech synthesis. For information on Amazon Polly
// voices and which voices are available in standard-only, NTTS-only, and both
// standard and NTTS formats, see Available Voices
// (https://docs.aws.amazon.com/polly/latest/dg/voicelist.html).
//
// NTTS-only voices: when using NTTS-only voices such as Kevin (en-US), this
// parameter is required and must be set to neural. If the engine is not
// specified, or is set to standard, this will result in an error. Type: String.
// Valid Values: standard | neural. Required: Yes.
//
// Standard voices: for standard voices, this is not required; the engine
// parameter defaults to standard. If the engine is not specified, or is set to
// standard and an NTTS-only voice is selected, this will result in an error.
Engine types.Engine
// Optional language code for the Synthesize Speech request. This is only necessary
// if using a bilingual voice, such as Aditi, which can be used for either Indian
// English (en-IN) or Hindi (hi-IN). If a bilingual voice is used and no language
// code is specified, Amazon Polly uses the default language of the bilingual
// voice. The default language for any voice is the one returned by the
// DescribeVoices
// (https://docs.aws.amazon.com/polly/latest/dg/API_DescribeVoices.html) operation
// for the LanguageCode parameter. For example, if no language code is specified,
// Aditi will use Indian English rather than Hindi.
LanguageCode types.LanguageCode
// List of one or more pronunciation lexicon names you want the service to apply
// during synthesis. Lexicons are applied only if the language of the lexicon is
// the same as the language of the voice. For information about storing lexicons,
// see PutLexicon
// (https://docs.aws.amazon.com/polly/latest/dg/API_PutLexicon.html).
LexiconNames []string
// The audio frequency specified in Hz. The valid values for mp3 and ogg_vorbis
// are "8000", "16000", "22050", and "24000". The default value for standard
// voices is "22050". The default value for neural voices is "24000". Valid
// values for pcm are "8000" and "16000". The default value is "16000".
SampleRate *string
// The type of speech marks returned for the input text.
SpeechMarkTypes []types.SpeechMarkType
// Specifies whether the input text is plain text or SSML. The default value is
// plain text. For more information, see Using SSML
// (https://docs.aws.amazon.com/polly/latest/dg/ssml.html).
TextType types.TextType
noSmithyDocumentSerde
}
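// Example input (a minimal sketch, not generated code): an SSML request to a
// neural voice. Engine must be set to neural for NTTS synthesis, and TextType
// must be ssml because the Text field carries SSML markup. The specific voice,
// sample rate, and lexicon name are illustrative assumptions.
//
//	input := &polly.SynthesizeSpeechInput{
//		Engine:       types.EngineNeural,
//		OutputFormat: types.OutputFormatOggVorbis,
//		SampleRate:   aws.String("24000"),
//		Text:         aws.String("<speak>Hello <break time=\"300ms\"/> world.</speak>"),
//		TextType:     types.TextTypeSsml,
//		VoiceId:      types.VoiceIdMatthew,
//		LexiconNames: []string{"myLexicon"},
//	}
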
type SynthesizeSpeechOutput struct {
// Stream containing the synthesized speech.
//
// This member is required.
AudioStream io.ReadCloser
// Specifies the type of audio stream. This should reflect the OutputFormat
// parameter in your request.
//
// * If you request mp3 as the OutputFormat, the ContentType
// returned is audio/mpeg.
//
// * If you request ogg_vorbis as the OutputFormat, the
// ContentType returned is audio/ogg.
//
// * If you request pcm as the OutputFormat,
// the ContentType returned is audio/pcm in a signed 16-bit, 1 channel (mono),
// little-endian format.
//
// * If you request json as the OutputFormat, the
// ContentType returned is application/x-json-stream.
ContentType *string
// Number of characters synthesized.
RequestCharacters int32
// Metadata pertaining to the operation's result.
ResultMetadata middleware.Metadata
noSmithyDocumentSerde
}
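// Example (a minimal sketch, not generated code): consuming a
// SynthesizeSpeechOutput by streaming AudioStream to a local file. Here out is
// assumed to be the *SynthesizeSpeechOutput returned by a prior SynthesizeSpeech
// call, and the output file name is an illustrative assumption.
//
//	f, err := os.Create("speech.mp3")
//	if err != nil {
//		// handle the error
//	}
//	defer f.Close()
//	defer out.AudioStream.Close()
//	if _, err := io.Copy(f, out.AudioStream); err != nil {
//		// handle the error
//	}
//	fmt.Println(aws.ToString(out.ContentType), out.RequestCharacters)
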
func (c *Client) addOperationSynthesizeSpeechMiddlewares(stack *middleware.Stack, options Options) (err error) {
err = stack.Serialize.Add(&awsRestjson1_serializeOpSynthesizeSpeech{}, middleware.After)
if err != nil {
return err
}
err = stack.Deserialize.Add(&awsRestjson1_deserializeOpSynthesizeSpeech{}, middleware.After)
if err != nil {
return err
}
if err = addSetLoggerMiddleware(stack, options); err != nil {
return err
}
if err = awsmiddleware.AddClientRequestIDMiddleware(stack); err != nil {
return err
}
if err = smithyhttp.AddComputeContentLengthMiddleware(stack); err != nil {
return err
}
if err = addResolveEndpointMiddleware(stack, options); err != nil {
return err
}
if err = v4.AddComputePayloadSHA256Middleware(stack); err != nil {
return err
}
if err = addRetryMiddlewares(stack, options); err != nil {
return err
}
if err = addHTTPSignerV4Middleware(stack, options); err != nil {
return err
}
if err = awsmiddleware.AddRawResponseToMetadata(stack); err != nil {
return err
}
if err = awsmiddleware.AddRecordResponseTiming(stack); err != nil {
return err
}
if err = addClientUserAgent(stack); err != nil {
return err
}
if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
return err
}
if err = addOpSynthesizeSpeechValidationMiddleware(stack); err != nil {
return err
}
if err = stack.Initialize.Add(newServiceMetadataMiddleware_opSynthesizeSpeech(options.Region), middleware.Before); err != nil {
return err
}
if err = addRequestIDRetrieverMiddleware(stack); err != nil {
return err
}
if err = addResponseErrorMiddleware(stack); err != nil {
return err
}
if err = addRequestResponseLogging(stack, options); err != nil {
return err
}
return nil
}
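// A minimal sketch (not generated code) of per-call stack customization: the
// functional options passed to SynthesizeSpeech can append to Options.APIOptions,
// and each of those functions is run against the stack assembled by
// addOperationSynthesizeSpeechMiddlewares above. The mutation shown here is a
// no-op placeholder; client, ctx, and params are assumed to exist.
//
//	out, err := client.SynthesizeSpeech(ctx, params, func(o *polly.Options) {
//		o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
//			// Inspect or extend the stack here, e.g. add a Build-step middleware.
//			return nil
//		})
//	})
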
func newServiceMetadataMiddleware_opSynthesizeSpeech(region string) *awsmiddleware.RegisterServiceMetadata {
return &awsmiddleware.RegisterServiceMetadata{
Region: region,
ServiceID: ServiceID,
SigningName: "polly",
OperationName: "SynthesizeSpeech",
}
}