// Code generated by smithy-go-codegen DO NOT EDIT.

package bedrockruntime

import (
	"context"
	"fmt"

	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
	"github.com/aws/aws-sdk-go-v2/service/bedrockruntime/types"
	"github.com/aws/smithy-go/middleware"
	smithyhttp "github.com/aws/smithy-go/transport/http"
)
// Invokes the specified Amazon Bedrock model to run inference using the prompt
// and inference parameters provided in the request body. You use model inference
// to generate text, images, and embeddings.
//
// For example code, see Invoke model code examples in the Amazon Bedrock User
// Guide.
//
// This operation requires permission for the bedrock:InvokeModel action.
func (c *Client) InvokeModel(ctx context.Context, params *InvokeModelInput, optFns ...func(*Options)) (*InvokeModelOutput, error) {
	if params == nil {
		params = &InvokeModelInput{}
	}

	result, metadata, err := c.invokeOperation(ctx, "InvokeModel", params, optFns, c.addOperationInvokeModelMiddlewares)
	if err != nil {
		return nil, err
	}

	out := result.(*InvokeModelOutput)
	out.ResultMetadata = metadata
	return out, nil
}
type InvokeModelInput struct {

	// The prompt and inference parameters in the format specified in the contentType
	// in the header. You must provide the body in JSON format. To see the format and
	// content of the request and response bodies for different models, refer to
	// [Inference parameters]. For more information, see [Run inference] in the
	// Bedrock User Guide.
	//
	// [Inference parameters]: https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html
	// [Run inference]: https://docs.aws.amazon.com/bedrock/latest/userguide/api-methods-run.html
	//
	// This member is required.
	Body []byte

	// The unique identifier of the model to invoke to run inference.
	//
	// The modelId to provide depends on the type of model that you use:
	//
	//   - If you use a base model, specify the model ID or its ARN. For a list of
	//     model IDs for base models, see [Amazon Bedrock base model IDs (on-demand throughput)] in the Amazon Bedrock User Guide.
	//
	//   - If you use a provisioned model, specify the ARN of the Provisioned
	//     Throughput. For more information, see [Run inference using a Provisioned Throughput] in the Amazon Bedrock User Guide.
	//
	//   - If you use a custom model, first purchase Provisioned Throughput for it.
	//     Then specify the ARN of the resulting provisioned model. For more information,
	//     see [Use a custom model in Amazon Bedrock] in the Amazon Bedrock User Guide.
	//
	// [Run inference using a Provisioned Throughput]: https://docs.aws.amazon.com/bedrock/latest/userguide/prov-thru-use.html
	// [Use a custom model in Amazon Bedrock]: https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-use.html
	// [Amazon Bedrock base model IDs (on-demand throughput)]: https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns
	//
	// This member is required.
	ModelId *string

	// The desired MIME type of the inference body in the response. The default value
	// is application/json .
	Accept *string

	// The MIME type of the input data in the request. You must specify
	// application/json .
	ContentType *string

	// The unique identifier of the guardrail that you want to use. If you don't
	// provide a value, no guardrail is applied to the invocation.
	//
	// An error will be thrown in the following situations.
	//
	//   - You don't provide a guardrail identifier but you specify the
	//     amazon-bedrock-guardrailConfig field in the request body.
	//
	//   - You enable the guardrail but the contentType isn't application/json .
	//
	//   - You provide a guardrail identifier, but guardrailVersion isn't specified.
	GuardrailIdentifier *string

	// The version number for the guardrail. The value can also be DRAFT .
	GuardrailVersion *string

	// Specifies whether to enable or disable the Bedrock trace. If enabled, you can
	// see the full Bedrock trace.
	Trace types.Trace

	noSmithyDocumentSerde
}
type InvokeModelOutput struct {

	// Inference response from the model in the format specified in the contentType
	// header. To see the format and content of the request and response bodies for
	// different models, refer to [Inference parameters].
	//
	// [Inference parameters]: https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html
	//
	// This member is required.
	Body []byte

	// The MIME type of the inference result.
	//
	// This member is required.
	ContentType *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata

	noSmithyDocumentSerde
}
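Like the request, InvokeModelOutput.Body is raw, model-specific JSON. A minimal sketch of decoding it, assuming ContentType came back as application/json and assuming a hypothetical response shape with a "completion" field (the real layout depends on the model invoked):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// completionResponse mirrors one assumed response shape; the "completion"
// field name is hypothetical — check the Inference parameters guide for the
// actual fields returned by your model.
type completionResponse struct {
	Completion string `json:"completion"`
}

// parseCompletion decodes the raw bytes that InvokeModelOutput.Body would
// carry and returns the generated text.
func parseCompletion(body []byte) (string, error) {
	var out completionResponse
	if err := json.Unmarshal(body, &out); err != nil {
		return "", err
	}
	return out.Completion, nil
}

func main() {
	// Stand-in for InvokeModelOutput.Body from a real call.
	sample := []byte(`{"completion":"Hello!","stop_reason":"stop"}`)
	text, err := parseCompletion(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(text) // → Hello!
}
```

Checking InvokeModelOutput.ContentType before unmarshalling is a reasonable guard, since the response MIME type is what tells you the Body is JSON at all.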
func (c *Client) addOperationInvokeModelMiddlewares(stack *middleware.Stack, options Options) (err error) {
	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
		return err
	}
	err = stack.Serialize.Add(&awsRestjson1_serializeOpInvokeModel{}, middleware.After)
	if err != nil {
		return err
	}
	err = stack.Deserialize.Add(&awsRestjson1_deserializeOpInvokeModel{}, middleware.After)
	if err != nil {
		return err
	}
	if err := addProtocolFinalizerMiddlewares(stack, options, "InvokeModel"); err != nil {
		return fmt.Errorf("add protocol finalizers: %v", err)
	}
	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
		return err
	}
	if err = addSetLoggerMiddleware(stack, options); err != nil {
		return err
	}
	if err = addClientRequestID(stack); err != nil {
		return err
	}
	if err = addComputeContentLength(stack); err != nil {
		return err
	}
	if err = addResolveEndpointMiddleware(stack, options); err != nil {
		return err
	}
	if err = addComputePayloadSHA256(stack); err != nil {
		return err
	}
	if err = addRetry(stack, options); err != nil {
		return err
	}
	if err = addRawResponseToMetadata(stack); err != nil {
		return err
	}
	if err = addRecordResponseTiming(stack); err != nil {
		return err
	}
	if err = addClientUserAgent(stack, options); err != nil {
		return err
	}
	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
		return err
	}
	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
		return err
	}
	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
		return err
	}
	if err = addTimeOffsetBuild(stack, c); err != nil {
		return err
	}
	if err = addUserAgentRetryMode(stack, options); err != nil {
		return err
	}
	if err = addOpInvokeModelValidationMiddleware(stack); err != nil {
		return err
	}
	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opInvokeModel(options.Region), middleware.Before); err != nil {
		return err
	}
	if err = addRecursionDetection(stack); err != nil {
		return err
	}
	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
		return err
	}
	if err = addResponseErrorMiddleware(stack); err != nil {
		return err
	}
	if err = addRequestResponseLogging(stack, options); err != nil {
		return err
	}
	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
		return err
	}
	return nil
}
func newServiceMetadataMiddleware_opInvokeModel(region string) *awsmiddleware.RegisterServiceMetadata {
	return &awsmiddleware.RegisterServiceMetadata{
		Region:        region,
		ServiceID:     ServiceID,
		OperationName: "InvokeModel",
	}
}