// Code generated by smithy-go-codegen DO NOT EDIT.

package neptunedata

import (
	"context"
	"fmt"

	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
	"github.com/aws/smithy-go/middleware"
	smithyhttp "github.com/aws/smithy-go/transport/http"
)
// Creates a new Neptune ML inference endpoint that lets you query one specific
// model that the model-training process constructed. See [Managing inference endpoints using the endpoints command].
//
// When invoking this operation in a Neptune cluster that has IAM authentication
// enabled, the IAM user or role making the request must have a policy attached
// that allows the [neptune-db:CreateMLEndpoint] IAM action in that cluster.
//
// [Managing inference endpoints using the endpoints command]: https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning-api-endpoints.html
// [neptune-db:CreateMLEndpoint]: https://docs.aws.amazon.com/neptune/latest/userguide/iam-dp-actions.html#createmlendpoint
func (c *Client) CreateMLEndpoint(ctx context.Context, params *CreateMLEndpointInput, optFns ...func(*Options)) (*CreateMLEndpointOutput, error) {
	if params == nil {
		params = &CreateMLEndpointInput{}
	}

	result, metadata, err := c.invokeOperation(ctx, "CreateMLEndpoint", params, optFns, c.addOperationCreateMLEndpointMiddlewares)
	if err != nil {
		return nil, err
	}

	out := result.(*CreateMLEndpointOutput)
	out.ResultMetadata = metadata
	return out, nil
}
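
// Example usage (an illustrative sketch, not generated code): the client
// construction, training-job ID, and error handling below are placeholders
// you would replace with your own values.
//
//	// Assumed imports:
//	//   "github.com/aws/aws-sdk-go-v2/aws"
//	//   "github.com/aws/aws-sdk-go-v2/config"
//	//   "github.com/aws/aws-sdk-go-v2/service/neptunedata"
//	cfg, err := config.LoadDefaultConfig(context.TODO())
//	if err != nil {
//		// handle configuration error
//	}
//	client := neptunedata.NewFromConfig(cfg)
//	out, err := client.CreateMLEndpoint(context.TODO(), &neptunedata.CreateMLEndpointInput{
//		// Hypothetical ID of a completed model-training job; you could
//		// supply MlModelTransformJobId instead.
//		MlModelTrainingJobId: aws.String("my-training-job-id"),
//	})
//	if err != nil {
//		// handle error
//	}
//	_ = out // out.Id and out.Arn identify the new inference endpoint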
type CreateMLEndpointInput struct {

	// A unique identifier for the new inference endpoint. The default is an
	// autogenerated timestamped name.
	Id *string

	// The minimum number of Amazon EC2 instances to deploy to an endpoint for
	// prediction. The default is 1.
	InstanceCount *int32

	// The type of Neptune ML instance to use for online servicing. The default is
	// ml.m5.xlarge. Choosing the ML instance for an inference endpoint depends on
	// the task type, the graph size, and your budget.
	InstanceType *string

	// The job ID of the completed model-training job that created the model that
	// the inference endpoint will point to. You must supply either the
	// mlModelTrainingJobId or the mlModelTransformJobId.
	MlModelTrainingJobId *string

	// The job ID of the completed model-transform job. You must supply either the
	// mlModelTrainingJobId or the mlModelTransformJobId.
	MlModelTransformJobId *string

	// Model type for training. By default, the Neptune ML model type is chosen
	// automatically based on the modelType used in data processing, but you can
	// specify a different model type here. The default is rgcn for heterogeneous
	// graphs and kge for knowledge graphs. The only valid value for heterogeneous
	// graphs is rgcn. Valid values for knowledge graphs are: kge, transe,
	// distmult, and rotate.
	ModelName *string

	// The ARN of an IAM role providing Neptune access to SageMaker and Amazon S3
	// resources. This must be listed in your DB cluster parameter group or an
	// error will be thrown.
	NeptuneIamRoleArn *string

	// If set to true, update indicates that this is an update request. The default
	// is false. You must supply either the mlModelTrainingJobId or the
	// mlModelTransformJobId.
	Update *bool

	// The Amazon Key Management Service (Amazon KMS) key that SageMaker uses to
	// encrypt data on the storage volume attached to the ML compute instances
	// that run the training job. The default is None.
	VolumeEncryptionKMSKey *string

	noSmithyDocumentSerde
}
type CreateMLEndpointOutput struct {

	// The ARN for the new inference endpoint.
	Arn *string

	// The endpoint creation time, in milliseconds.
	CreationTimeInMillis *int64

	// The unique ID of the new inference endpoint.
	Id *string

	// Metadata pertaining to the operation's result.
	ResultMetadata middleware.Metadata

	noSmithyDocumentSerde
}
func (c *Client) addOperationCreateMLEndpointMiddlewares(stack *middleware.Stack, options Options) (err error) {
	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
		return err
	}
	err = stack.Serialize.Add(&awsRestjson1_serializeOpCreateMLEndpoint{}, middleware.After)
	if err != nil {
		return err
	}
	err = stack.Deserialize.Add(&awsRestjson1_deserializeOpCreateMLEndpoint{}, middleware.After)
	if err != nil {
		return err
	}
	if err := addProtocolFinalizerMiddlewares(stack, options, "CreateMLEndpoint"); err != nil {
		return fmt.Errorf("add protocol finalizers: %v", err)
	}
	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
		return err
	}
	if err = addSetLoggerMiddleware(stack, options); err != nil {
		return err
	}
	if err = addClientRequestID(stack); err != nil {
		return err
	}
	if err = addComputeContentLength(stack); err != nil {
		return err
	}
	if err = addResolveEndpointMiddleware(stack, options); err != nil {
		return err
	}
	if err = addComputePayloadSHA256(stack); err != nil {
		return err
	}
	if err = addRetry(stack, options); err != nil {
		return err
	}
	if err = addRawResponseToMetadata(stack); err != nil {
		return err
	}
	if err = addRecordResponseTiming(stack); err != nil {
		return err
	}
	if err = addClientUserAgent(stack, options); err != nil {
		return err
	}
	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
		return err
	}
	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
		return err
	}
	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
		return err
	}
	if err = addTimeOffsetBuild(stack, c); err != nil {
		return err
	}
	if err = addUserAgentRetryMode(stack, options); err != nil {
		return err
	}
	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opCreateMLEndpoint(options.Region), middleware.Before); err != nil {
		return err
	}
	if err = addRecursionDetection(stack); err != nil {
		return err
	}
	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
		return err
	}
	if err = addResponseErrorMiddleware(stack); err != nil {
		return err
	}
	if err = addRequestResponseLogging(stack, options); err != nil {
		return err
	}
	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
		return err
	}
	return nil
}
func newServiceMetadataMiddleware_opCreateMLEndpoint(region string) *awsmiddleware.RegisterServiceMetadata {
	return &awsmiddleware.RegisterServiceMetadata{
		Region:        region,
		ServiceID:     ServiceID,
		OperationName: "CreateMLEndpoint",
	}
}