File: datalabeling_v1beta1.projects.evaluationJobs.html

<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method  {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="datalabeling_v1beta1.html">Data Labeling API</a> . <a href="datalabeling_v1beta1.projects.html">projects</a> . <a href="datalabeling_v1beta1.projects.evaluationJobs.html">evaluationJobs</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#close">close()</a></code></p>
<p class="firstline">Close httplib2 connections.</p>
<p class="toc_element">
  <code><a href="#create">create(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline"> Creates an evaluation job.</p>
<p class="toc_element">
  <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Stops and deletes an evaluation job.</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets an evaluation job by resource name.</p>
<p class="toc_element">
  <code><a href="#list">list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)</a></code></p>
<p class="firstline">Lists all evaluation jobs within a project with possible filters. Pagination is supported.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next()</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates an evaluation job. You can only update certain fields of the job's EvaluationJobConfig: `humanAnnotationConfig.instruction`, `exampleCount`, and `exampleSamplePercentage`. If you want to change any other aspect of the evaluation job, you must delete the job and create a new one.</p>
<p class="toc_element">
  <code><a href="#pause">pause(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Pauses an evaluation job. Pausing an evaluation job that is already in a `PAUSED` state is a no-op.</p>
<p class="toc_element">
  <code><a href="#resume">resume(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Resumes a paused evaluation job. A deleted evaluation job can't be resumed. Resuming a running or scheduled evaluation job is a no-op.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="close">close()</code>
  <pre>Close httplib2 connections.</pre>
</div>
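<p>A small sketch of one way close() gets called implicitly: recent versions of the client library let you use the built service as a context manager, which closes its httplib2 connections when the block exits (version-dependent behavior, shown here as an assumption):</p>
<pre>
from googleapiclient import discovery

# Connections are closed automatically when the with-block exits.
with discovery.build(&quot;datalabeling&quot;, &quot;v1beta1&quot;) as service:
    jobs = service.projects().evaluationJobs()
</pre>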

<div class="method">
    <code class="details" id="create">create(parent, body=None, x__xgafv=None)</code>
  <pre>Creates an evaluation job.

Args:
  parent: string, Required. Evaluation job resource parent. Format: &quot;projects/{project_id}&quot; (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for CreateEvaluationJob.
  &quot;job&quot;: { # Defines an evaluation job that runs periodically to generate Evaluations. [Creating an evaluation job](/ml-engine/docs/continuous-evaluation/create-job) is the starting point for using continuous evaluation. # Required. The evaluation job to create.
    &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: &quot;projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}&quot;
    &quot;attempts&quot;: [ # Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
      { # Records a failed evaluation job run.
        &quot;attemptTime&quot;: &quot;A String&quot;,
        &quot;partialFailures&quot;: [ # Details of errors that occurred.
          { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors).
            &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
            &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
              {
                &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
              },
            ],
            &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
          },
        ],
      },
    ],
    &quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp of when this evaluation job was created.
    &quot;description&quot;: &quot;A String&quot;, # Required. Description of the job. The description can be up to 25,000 characters long.
    &quot;evaluationJobConfig&quot;: { # Configures specific details of how a continuous evaluation job works. Provide this configuration when you create an EvaluationJob. # Required. Configuration details for the evaluation job.
      &quot;bigqueryImportKeys&quot;: { # Required. Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * `data_json_key`: the data key for prediction input. You must provide either this key or `reference_json_key`. * `reference_json_key`: the data reference key for prediction input. You must provide either this key or `data_json_key`. * `label_json_key`: the label key for prediction output. Required. * `label_score_json_key`: the score key for prediction output. Required. * `bounding_box_json_key`: the bounding box key for prediction output. Required if your model version performs image object detection. Learn [how to configure prediction keys](/ml-engine/docs/continuous-evaluation/create-job#prediction-keys).
        &quot;a_key&quot;: &quot;A String&quot;,
      },
      &quot;boundingPolyConfig&quot;: { # Config for image bounding poly (and bounding box) human labeling task. # Specify this field if your model version performs image object detection (bounding box detection). `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet.
        &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
        &quot;instructionMessage&quot;: &quot;A String&quot;, # Optional. Instruction message shown on the contributors&#x27; UI.
      },
      &quot;evaluationConfig&quot;: { # Configuration details used for calculating evaluation metrics and creating an Evaluation. # Required. Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the `boundingBoxEvaluationOptions` field within this configuration. Otherwise, provide an empty object for this configuration.
        &quot;boundingBoxEvaluationOptions&quot;: { # Options regarding evaluation between bounding boxes. # Only specify this field if the related model performs image object detection (`IMAGE_BOUNDING_BOX_ANNOTATION`). Describes how to evaluate bounding boxes.
          &quot;iouThreshold&quot;: 3.14, # Minimum [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
        },
      },
      &quot;evaluationJobAlertConfig&quot;: { # Provides details for how an evaluation job sends email alerts based on the results of a run. # Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
        &quot;email&quot;: &quot;A String&quot;, # Required. An email address to send alerts to.
        &quot;minAcceptableMeanAveragePrecision&quot;: 3.14, # Required. A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version&#x27;s predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
      },
      &quot;exampleCount&quot;: 42, # Required. The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides `example_sample_percentage`: even if the service has not sampled enough predictions to fulfill `example_sample_percentage` during an interval, it stops sampling predictions when it meets this limit.
      &quot;exampleSamplePercentage&quot;: 3.14, # Required. Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
      &quot;humanAnnotationConfig&quot;: { # Configuration for how human labeling task should be done. # Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to `true` for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the `instruction` field within this configuration.
        &quot;annotatedDatasetDescription&quot;: &quot;A String&quot;, # Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
        &quot;annotatedDatasetDisplayName&quot;: &quot;A String&quot;, # Required. A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
        &quot;contributorEmails&quot;: [ # Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
          &quot;A String&quot;,
        ],
        &quot;instruction&quot;: &quot;A String&quot;, # Required. Instruction resource name.
        &quot;labelGroup&quot;: &quot;A String&quot;, # Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression `[a-zA-Z\\d_-]{0,128}`.
        &quot;languageCode&quot;: &quot;A String&quot;, # Optional. The language of this question, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language code. Default value is en-US. Only set this when the task is language related; for example, French text classification.
        &quot;questionDuration&quot;: &quot;A String&quot;, # Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
        &quot;replicaCount&quot;: 42, # Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
        &quot;userEmailAddress&quot;: &quot;A String&quot;, # Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
      },
      &quot;imageClassificationConfig&quot;: { # Config for image classification human labeling task. # Specify this field if your model version performs image classification or general classification. `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet. `allowMultiLabel` in this configuration must match `classificationMetadata.isMultiLabel` in input_config.
        &quot;allowMultiLabel&quot;: True or False, # Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
        &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
        &quot;answerAggregationType&quot;: &quot;A String&quot;, # Optional. How contributors&#x27; answers should be aggregated.
      },
      &quot;inputConfig&quot;: { # The configuration of input data, including data type, location, etc. # Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * `dataType` must be one of `IMAGE`, `TEXT`, or `GENERAL_DATA`. * `annotationType` must be one of `IMAGE_CLASSIFICATION_ANNOTATION`, `TEXT_CLASSIFICATION_ANNOTATION`, `GENERAL_CLASSIFICATION_ANNOTATION`, or `IMAGE_BOUNDING_BOX_ANNOTATION` (image object detection). * If your machine learning model performs classification, you must specify `classificationMetadata.isMultiLabel`. * You must specify `bigquerySource` (not `gcsSource`).
        &quot;annotationType&quot;: &quot;A String&quot;, # Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
        &quot;bigquerySource&quot;: { # The BigQuery location for input data. If used in an EvaluationJob, this is where the service saves the prediction input and output sampled from the model version. # Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
          &quot;inputUri&quot;: &quot;A String&quot;, # Required. BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the [correct schema](/ml-engine/docs/continuous-evaluation/create-job#table-schema). Provide the table URI in the following format: &quot;bq://{your_project_id}/{your_dataset_name}/{your_table_name}&quot; [Learn more](/ml-engine/docs/continuous-evaluation/create-job#table-schema).
        },
        &quot;classificationMetadata&quot;: { # Metadata for classification annotations. # Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
          &quot;isMultiLabel&quot;: True or False, # Whether the classification task is multi-label or not.
        },
        &quot;dataType&quot;: &quot;A String&quot;, # Required. Data type must be specified when importing data.
        &quot;gcsSource&quot;: { # Source of the Cloud Storage file to be imported. # Source located in Cloud Storage.
          &quot;inputUri&quot;: &quot;A String&quot;, # Required. The input URI of source file. This must be a Cloud Storage path (`gs://...`).
          &quot;mimeType&quot;: &quot;A String&quot;, # Required. The format of the source file. Only &quot;text/csv&quot; is supported.
        },
        &quot;textMetadata&quot;: { # Metadata for the text. # Required for text import, as language code must be specified.
          &quot;languageCode&quot;: &quot;A String&quot;, # The language of this text, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt). Default value is en-US.
        },
      },
      &quot;textClassificationConfig&quot;: { # Config for text classification human labeling task. # Specify this field if your model version performs text classification. `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet. `allowMultiLabel` in this configuration must match `classificationMetadata.isMultiLabel` in input_config.
        &quot;allowMultiLabel&quot;: True or False, # Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
        &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
        &quot;sentimentConfig&quot;: { # Config for setting up sentiments. # Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the data labeling side, as it is incompatible with uCAIP.
          &quot;enableLabelSentimentSelection&quot;: True or False, # If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
        },
      },
    },
    &quot;labelMissingGroundTruth&quot;: True or False, # Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to `true`. If you want to provide your own ground truth labels in the evaluation job&#x27;s BigQuery table, set this to `false`.
    &quot;modelVersion&quot;: &quot;A String&quot;, # Required. The [AI Platform Prediction model version](/ml-engine/docs/prediction-overview) to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: &quot;projects/{project_id}/models/{model_name}/versions/{version_name}&quot; There can only be one evaluation job per model version.
    &quot;name&quot;: &quot;A String&quot;, # Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: &quot;projects/{project_id}/evaluationJobs/{evaluation_job_id}&quot;
    &quot;schedule&quot;: &quot;A String&quot;, # Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in [crontab format](/scheduler/docs/configuring/cron-job-schedules) or in an [English-like format](/appengine/docs/standard/python/config/cronref#schedule_format). Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
    &quot;state&quot;: &quot;A String&quot;, # Output only. Describes the current state of the job.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Defines an evaluation job that runs periodically to generate Evaluations. [Creating an evaluation job](/ml-engine/docs/continuous-evaluation/create-job) is the starting point for using continuous evaluation.
  &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: &quot;projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}&quot;
  &quot;attempts&quot;: [ # Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
    { # Records a failed evaluation job run.
      &quot;attemptTime&quot;: &quot;A String&quot;,
      &quot;partialFailures&quot;: [ # Details of errors that occurred.
        { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors).
          &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
          &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
            {
              &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
            },
          ],
          &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
        },
      ],
    },
  ],
  &quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp of when this evaluation job was created.
  &quot;description&quot;: &quot;A String&quot;, # Required. Description of the job. The description can be up to 25,000 characters long.
  &quot;evaluationJobConfig&quot;: { # Configures specific details of how a continuous evaluation job works. Provide this configuration when you create an EvaluationJob. # Required. Configuration details for the evaluation job.
    &quot;bigqueryImportKeys&quot;: { # Required. Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * `data_json_key`: the data key for prediction input. You must provide either this key or `reference_json_key`. * `reference_json_key`: the data reference key for prediction input. You must provide either this key or `data_json_key`. * `label_json_key`: the label key for prediction output. Required. * `label_score_json_key`: the score key for prediction output. Required. * `bounding_box_json_key`: the bounding box key for prediction output. Required if your model version performs image object detection. Learn [how to configure prediction keys](/ml-engine/docs/continuous-evaluation/create-job#prediction-keys).
      &quot;a_key&quot;: &quot;A String&quot;,
    },
    &quot;boundingPolyConfig&quot;: { # Config for image bounding poly (and bounding box) human labeling task. # Specify this field if your model version performs image object detection (bounding box detection). `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
      &quot;instructionMessage&quot;: &quot;A String&quot;, # Optional. Instruction message shown on the contributors&#x27; UI.
    },
    &quot;evaluationConfig&quot;: { # Configuration details used for calculating evaluation metrics and creating an Evaluation. # Required. Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the `boundingBoxEvaluationOptions` field within this configuration. Otherwise, provide an empty object for this configuration.
      &quot;boundingBoxEvaluationOptions&quot;: { # Options regarding evaluation between bounding boxes. # Only specify this field if the related model performs image object detection (`IMAGE_BOUNDING_BOX_ANNOTATION`). Describes how to evaluate bounding boxes.
        &quot;iouThreshold&quot;: 3.14, # Minimum [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
      },
    },
    &quot;evaluationJobAlertConfig&quot;: { # Provides details for how an evaluation job sends email alerts based on the results of a run. # Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
      &quot;email&quot;: &quot;A String&quot;, # Required. An email address to send alerts to.
      &quot;minAcceptableMeanAveragePrecision&quot;: 3.14, # Required. A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version&#x27;s predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    },
    &quot;exampleCount&quot;: 42, # Required. The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides `example_sample_percentage`: even if the service has not sampled enough predictions to fulfill `example_sample_percentage` during an interval, it stops sampling predictions when it meets this limit.
    &quot;exampleSamplePercentage&quot;: 3.14, # Required. Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    &quot;humanAnnotationConfig&quot;: { # Configuration for how human labeling task should be done. # Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to `true` for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the `instruction` field within this configuration.
      &quot;annotatedDatasetDescription&quot;: &quot;A String&quot;, # Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
      &quot;annotatedDatasetDisplayName&quot;: &quot;A String&quot;, # Required. A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
      &quot;contributorEmails&quot;: [ # Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
        &quot;A String&quot;,
      ],
      &quot;instruction&quot;: &quot;A String&quot;, # Required. Instruction resource name.
      &quot;labelGroup&quot;: &quot;A String&quot;, # Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression `[a-zA-Z\\d_-]{0,128}`.
      &quot;languageCode&quot;: &quot;A String&quot;, # Optional. The language of this question, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language code. Default value is en-US. Only set this when the task is language related; for example, French text classification.
      &quot;questionDuration&quot;: &quot;A String&quot;, # Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
      &quot;replicaCount&quot;: 42, # Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
      &quot;userEmailAddress&quot;: &quot;A String&quot;, # Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
    },
    &quot;imageClassificationConfig&quot;: { # Config for image classification human labeling task. # Specify this field if your model version performs image classification or general classification. `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet. `allowMultiLabel` in this configuration must match `classificationMetadata.isMultiLabel` in input_config.
      &quot;allowMultiLabel&quot;: True or False, # Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
      &quot;answerAggregationType&quot;: &quot;A String&quot;, # Optional. How contributors&#x27; answers should be aggregated.
    },
    &quot;inputConfig&quot;: { # The configuration of input data, including data type, location, etc. # Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * `dataType` must be one of `IMAGE`, `TEXT`, or `GENERAL_DATA`. * `annotationType` must be one of `IMAGE_CLASSIFICATION_ANNOTATION`, `TEXT_CLASSIFICATION_ANNOTATION`, `GENERAL_CLASSIFICATION_ANNOTATION`, or `IMAGE_BOUNDING_BOX_ANNOTATION` (image object detection). * If your machine learning model performs classification, you must specify `classificationMetadata.isMultiLabel`. * You must specify `bigquerySource` (not `gcsSource`).
      &quot;annotationType&quot;: &quot;A String&quot;, # Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
      &quot;bigquerySource&quot;: { # The BigQuery location for input data. If used in an EvaluationJob, this is where the service saves the prediction input and output sampled from the model version. # Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
        &quot;inputUri&quot;: &quot;A String&quot;, # Required. BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the [correct schema](/ml-engine/docs/continuous-evaluation/create-job#table-schema). Provide the table URI in the following format: &quot;bq://{your_project_id}/{your_dataset_name}/{your_table_name}&quot; [Learn more](/ml-engine/docs/continuous-evaluation/create-job#table-schema).
      },
      &quot;classificationMetadata&quot;: { # Metadata for classification annotations. # Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
        &quot;isMultiLabel&quot;: True or False, # Whether the classification task is multi-label or not.
      },
      &quot;dataType&quot;: &quot;A String&quot;, # Required. Data type must be specified when importing data.
      &quot;gcsSource&quot;: { # Source of the Cloud Storage file to be imported. # Source located in Cloud Storage.
        &quot;inputUri&quot;: &quot;A String&quot;, # Required. The input URI of source file. This must be a Cloud Storage path (`gs://...`).
        &quot;mimeType&quot;: &quot;A String&quot;, # Required. The format of the source file. Only &quot;text/csv&quot; is supported.
      },
      &quot;textMetadata&quot;: { # Metadata for the text. # Required for text import, as language code must be specified.
        &quot;languageCode&quot;: &quot;A String&quot;, # The language of this text, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt). Default value is en-US.
      },
    },
    &quot;textClassificationConfig&quot;: { # Config for text classification human labeling task. # Specify this field if your model version performs text classification. `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet. `allowMultiLabel` in this configuration must match `classificationMetadata.isMultiLabel` in input_config.
      &quot;allowMultiLabel&quot;: True or False, # Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
      &quot;sentimentConfig&quot;: { # Config for setting up sentiments. # Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the data labeling side, as it is incompatible with uCAIP.
        &quot;enableLabelSentimentSelection&quot;: True or False, # If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
      },
    },
  },
  &quot;labelMissingGroundTruth&quot;: True or False, # Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to `true`. If you want to provide your own ground truth labels in the evaluation job&#x27;s BigQuery table, set this to `false`.
  &quot;modelVersion&quot;: &quot;A String&quot;, # Required. The [AI Platform Prediction model version](/ml-engine/docs/prediction-overview) to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: &quot;projects/{project_id}/models/{model_name}/versions/{version_name}&quot; There can only be one evaluation job per model version.
  &quot;name&quot;: &quot;A String&quot;, # Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: &quot;projects/{project_id}/evaluationJobs/{evaluation_job_id}&quot;
  &quot;schedule&quot;: &quot;A String&quot;, # Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in [crontab format](/scheduler/docs/configuring/cron-job-schedules) or in an [English-like format](/appengine/docs/standard/python/config/cronref#schedule_format). Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
  &quot;state&quot;: &quot;A String&quot;, # Output only. Describes the current state of the job.
}</pre>
</div>
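<p>A hedged usage sketch for create(); the project ID, resource names, and schedule below are placeholder assumptions, and the body is abbreviated to a few of the fields described above:</p>
<pre>
from googleapiclient import discovery

service = discovery.build(&quot;datalabeling&quot;, &quot;v1beta1&quot;)

# Placeholder values; substitute your own resources. A real request also
# needs the full evaluationJobConfig described in the request body above.
body = {
    &quot;job&quot;: {
        &quot;description&quot;: &quot;Daily evaluation of the flowers classifier&quot;,
        &quot;schedule&quot;: &quot;every 24 hours&quot;,  # interval only; runs at 10:00 AM UTC
        &quot;modelVersion&quot;: &quot;projects/my-project/models/my_model/versions/v1&quot;,
        &quot;annotationSpecSet&quot;: &quot;projects/my-project/annotationSpecSets/my_spec_set&quot;,
        &quot;labelMissingGroundTruth&quot;: False,
        &quot;evaluationJobConfig&quot;: {},  # see the request body above for its fields
    },
}

job = service.projects().evaluationJobs().create(
    parent=&quot;projects/my-project&quot;, body=body
).execute()
print(job[&quot;name&quot;], job[&quot;state&quot;])
</pre>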

<div class="method">
    <code class="details" id="delete">delete(name, x__xgafv=None)</code>
  <pre>Stops and deletes an evaluation job.

Args:
  name: string, Required. Name of the evaluation job that is going to be deleted. Format: &quot;projects/{project_id}/evaluationJobs/{evaluation_job_id}&quot; (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
}</pre>
</div>
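<p>A sketch of delete(), with a placeholder job name; on success the response is the empty message shown above:</p>
<pre>
from googleapiclient import discovery

service = discovery.build(&quot;datalabeling&quot;, &quot;v1beta1&quot;)

# Stops and deletes the job; returns an empty dict on success.
service.projects().evaluationJobs().delete(
    name=&quot;projects/my-project/evaluationJobs/my_job_id&quot;
).execute()
</pre>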

<div class="method">
    <code class="details" id="get">get(name, x__xgafv=None)</code>
  <pre>Gets an evaluation job by resource name.

Args:
  name: string, Required. Name of the evaluation job. Format: &quot;projects/{project_id}/evaluationJobs/{evaluation_job_id}&quot; (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Defines an evaluation job that runs periodically to generate Evaluations. [Creating an evaluation job](/ml-engine/docs/continuous-evaluation/create-job) is the starting point for using continuous evaluation.
  &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: &quot;projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}&quot;
  &quot;attempts&quot;: [ # Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
    { # Records a failed evaluation job run.
      &quot;attemptTime&quot;: &quot;A String&quot;,
      &quot;partialFailures&quot;: [ # Details of errors that occurred.
        { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors).
          &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
          &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
            {
              &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
            },
          ],
          &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
        },
      ],
    },
  ],
  &quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp of when this evaluation job was created.
  &quot;description&quot;: &quot;A String&quot;, # Required. Description of the job. The description can be up to 25,000 characters long.
  &quot;evaluationJobConfig&quot;: { # Configures specific details of how a continuous evaluation job works. Provide this configuration when you create an EvaluationJob. # Required. Configuration details for the evaluation job.
    &quot;bigqueryImportKeys&quot;: { # Required. Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * `data_json_key`: the data key for prediction input. You must provide either this key or `reference_json_key`. * `reference_json_key`: the data reference key for prediction input. You must provide either this key or `data_json_key`. * `label_json_key`: the label key for prediction output. Required. * `label_score_json_key`: the score key for prediction output. Required. * `bounding_box_json_key`: the bounding box key for prediction output. Required if your model version performs image object detection. Learn [how to configure prediction keys](/ml-engine/docs/continuous-evaluation/create-job#prediction-keys).
      &quot;a_key&quot;: &quot;A String&quot;,
    },
    &quot;boundingPolyConfig&quot;: { # Config for image bounding poly (and bounding box) human labeling task. # Specify this field if your model version performs image object detection (bounding box detection). `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
      &quot;instructionMessage&quot;: &quot;A String&quot;, # Optional. Instruction message shown on the contributors&#x27; UI.
    },
    &quot;evaluationConfig&quot;: { # Configuration details used for calculating evaluation metrics and creating an Evaluation. # Required. Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the `boundingBoxEvaluationOptions` field within this configuration. Otherwise, provide an empty object for this configuration.
      &quot;boundingBoxEvaluationOptions&quot;: { # Options regarding evaluation between bounding boxes. # Only specify this field if the related model performs image object detection (`IMAGE_BOUNDING_BOX_ANNOTATION`). Describes how to evaluate bounding boxes.
        &quot;iouThreshold&quot;: 3.14, # Minimum [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
      },
    },
    &quot;evaluationJobAlertConfig&quot;: { # Provides details for how an evaluation job sends email alerts based on the results of a run. # Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
      &quot;email&quot;: &quot;A String&quot;, # Required. An email address to send alerts to.
      &quot;minAcceptableMeanAveragePrecision&quot;: 3.14, # Required. A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version&#x27;s predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    },
    &quot;exampleCount&quot;: 42, # Required. The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides `example_sample_percentage`: even if the service has not sampled enough predictions to fulfill `example_sample_percentage` during an interval, it stops sampling predictions when it meets this limit.
    &quot;exampleSamplePercentage&quot;: 3.14, # Required. Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    &quot;humanAnnotationConfig&quot;: { # Configuration for how human labeling task should be done. # Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to `true` for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the `instruction` field within this configuration.
      &quot;annotatedDatasetDescription&quot;: &quot;A String&quot;, # Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
      &quot;annotatedDatasetDisplayName&quot;: &quot;A String&quot;, # Required. A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
      &quot;contributorEmails&quot;: [ # Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
        &quot;A String&quot;,
      ],
      &quot;instruction&quot;: &quot;A String&quot;, # Required. Instruction resource name.
      &quot;labelGroup&quot;: &quot;A String&quot;, # Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression `[a-zA-Z\\d_-]{0,128}`.
      &quot;languageCode&quot;: &quot;A String&quot;, # Optional. The language of this question, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language code. Default value is en-US. Only set this when the task is language related; for example, French text classification.
      &quot;questionDuration&quot;: &quot;A String&quot;, # Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
      &quot;replicaCount&quot;: 42, # Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
      &quot;userEmailAddress&quot;: &quot;A String&quot;, # Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
    },
    &quot;imageClassificationConfig&quot;: { # Config for image classification human labeling task. # Specify this field if your model version performs image classification or general classification. `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet. `allowMultiLabel` in this configuration must match `classificationMetadata.isMultiLabel` in input_config.
      &quot;allowMultiLabel&quot;: True or False, # Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
      &quot;answerAggregationType&quot;: &quot;A String&quot;, # Optional. How contributors&#x27; answers should be aggregated.
    },
    &quot;inputConfig&quot;: { # The configuration of input data, including data type, location, etc. # Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * `dataType` must be one of `IMAGE`, `TEXT`, or `GENERAL_DATA`. * `annotationType` must be one of `IMAGE_CLASSIFICATION_ANNOTATION`, `TEXT_CLASSIFICATION_ANNOTATION`, `GENERAL_CLASSIFICATION_ANNOTATION`, or `IMAGE_BOUNDING_BOX_ANNOTATION` (image object detection). * If your machine learning model performs classification, you must specify `classificationMetadata.isMultiLabel`. * You must specify `bigquerySource` (not `gcsSource`).
      &quot;annotationType&quot;: &quot;A String&quot;, # Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
      &quot;bigquerySource&quot;: { # The BigQuery location for input data. If used in an EvaluationJob, this is where the service saves the prediction input and output sampled from the model version. # Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
        &quot;inputUri&quot;: &quot;A String&quot;, # Required. BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the [correct schema](/ml-engine/docs/continuous-evaluation/create-job#table-schema). Provide the table URI in the following format: &quot;bq://{your_project_id}/{your_dataset_name}/{your_table_name}&quot; [Learn more](/ml-engine/docs/continuous-evaluation/create-job#table-schema).
      },
      &quot;classificationMetadata&quot;: { # Metadata for classification annotations. # Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
        &quot;isMultiLabel&quot;: True or False, # Whether the classification task is multi-label or not.
      },
      &quot;dataType&quot;: &quot;A String&quot;, # Required. Data type must be specified when importing data.
      &quot;gcsSource&quot;: { # Source of the Cloud Storage file to be imported. # Source located in Cloud Storage.
        &quot;inputUri&quot;: &quot;A String&quot;, # Required. The input URI of source file. This must be a Cloud Storage path (`gs://...`).
        &quot;mimeType&quot;: &quot;A String&quot;, # Required. The format of the source file. Only &quot;text/csv&quot; is supported.
      },
      &quot;textMetadata&quot;: { # Metadata for the text. # Required for text import, as language code must be specified.
        &quot;languageCode&quot;: &quot;A String&quot;, # The language of this text, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt). Default value is en-US.
      },
    },
    &quot;textClassificationConfig&quot;: { # Config for text classification human labeling task. # Specify this field if your model version performs text classification. `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet. `allowMultiLabel` in this configuration must match `classificationMetadata.isMultiLabel` in input_config.
      &quot;allowMultiLabel&quot;: True or False, # Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
      &quot;sentimentConfig&quot;: { # Config for setting up sentiments. # Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the data labeling side, as it is incompatible with uCAIP.
        &quot;enableLabelSentimentSelection&quot;: True or False, # If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
      },
    },
  },
  &quot;labelMissingGroundTruth&quot;: True or False, # Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to `true`. If you want to provide your own ground truth labels in the evaluation job&#x27;s BigQuery table, set this to `false`.
  &quot;modelVersion&quot;: &quot;A String&quot;, # Required. The [AI Platform Prediction model version](/ml-engine/docs/prediction-overview) to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: &quot;projects/{project_id}/models/{model_name}/versions/{version_name}&quot; There can only be one evaluation job per model version.
  &quot;name&quot;: &quot;A String&quot;, # Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: &quot;projects/{project_id}/evaluationJobs/{evaluation_job_id}&quot;
  &quot;schedule&quot;: &quot;A String&quot;, # Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in [crontab format](/scheduler/docs/configuring/cron-job-schedules) or in an [English-like format](/appengine/docs/standard/python/config/cronref#schedule_format). Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
  &quot;state&quot;: &quot;A String&quot;, # Output only. Describes the current state of the job.
}</pre>
</div>
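<p>A sketch of get(), again with a placeholder job name:</p>
<pre>
from googleapiclient import discovery

service = discovery.build(&quot;datalabeling&quot;, &quot;v1beta1&quot;)

job = service.projects().evaluationJobs().get(
    name=&quot;projects/my-project/evaluationJobs/my_job_id&quot;
).execute()
print(job[&quot;state&quot;])  # e.g. SCHEDULED, RUNNING, or PAUSED
</pre>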

<div class="method">
    <code class="details" id="list">list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)</code>
  <pre>Lists all evaluation jobs within a project with possible filters. Pagination is supported.

Args:
  parent: string, Required. Evaluation job resource parent. Format: &quot;projects/{project_id}&quot; (required)
  filter: string, Optional. You can filter the jobs to list by model_id (also known as model_name, as described in EvaluationJob.modelVersion) or by evaluation job state (as described in EvaluationJob.state). To filter by both criteria, use the `AND` operator or the `OR` operator. For example, you can use the following string for your filter: &quot;evaluation_job.model_id = {model_name} AND evaluation_job.state = {evaluation_job_state}&quot;
  pageSize: integer, Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
  pageToken: string, Optional. A token identifying a page of results for the server to return. Typically obtained by the nextPageToken in the response to the previous request. The request returns the first page if this is empty.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Results for listing evaluation jobs.
  &quot;evaluationJobs&quot;: [ # The list of evaluation jobs to return.
    { # Defines an evaluation job that runs periodically to generate Evaluations. [Creating an evaluation job](/ml-engine/docs/continuous-evaluation/create-job) is the starting point for using continuous evaluation.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: &quot;projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}&quot;
      &quot;attempts&quot;: [ # Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
        { # Records a failed evaluation job run.
          &quot;attemptTime&quot;: &quot;A String&quot;,
          &quot;partialFailures&quot;: [ # Details of errors that occurred.
            { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors).
              &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
              &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
                {
                  &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
                },
              ],
              &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
            },
          ],
        },
      ],
      &quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp of when this evaluation job was created.
      &quot;description&quot;: &quot;A String&quot;, # Required. Description of the job. The description can be up to 25,000 characters long.
      &quot;evaluationJobConfig&quot;: { # Configures specific details of how a continuous evaluation job works. Provide this configuration when you create an EvaluationJob. # Required. Configuration details for the evaluation job.
        &quot;bigqueryImportKeys&quot;: { # Required. Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * `data_json_key`: the data key for prediction input. You must provide either this key or `reference_json_key`. * `reference_json_key`: the data reference key for prediction input. You must provide either this key or `data_json_key`. * `label_json_key`: the label key for prediction output. Required. * `label_score_json_key`: the score key for prediction output. Required. * `bounding_box_json_key`: the bounding box key for prediction output. Required if your model version performs image object detection. Learn [how to configure prediction keys](/ml-engine/docs/continuous-evaluation/create-job#prediction-keys).
          &quot;a_key&quot;: &quot;A String&quot;,
        },
        &quot;boundingPolyConfig&quot;: { # Config for image bounding poly (and bounding box) human labeling task. # Specify this field if your model version performs image object detection (bounding box detection). `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet.
          &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
          &quot;instructionMessage&quot;: &quot;A String&quot;, # Optional. Instruction message shown on the contributor UI.
        },
        &quot;evaluationConfig&quot;: { # Configuration details used for calculating evaluation metrics and creating an Evaluation. # Required. Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the `boundingBoxEvaluationOptions` field within this configuration. Otherwise, provide an empty object for this configuration.
          &quot;boundingBoxEvaluationOptions&quot;: { # Options regarding evaluation between bounding boxes. # Only specify this field if the related model performs image object detection (`IMAGE_BOUNDING_BOX_ANNOTATION`). Describes how to evaluate bounding boxes.
            &quot;iouThreshold&quot;: 3.14, # Minimum [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
          },
        },
        &quot;evaluationJobAlertConfig&quot;: { # Provides details for how an evaluation job sends email alerts based on the results of a run. # Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
          &quot;email&quot;: &quot;A String&quot;, # Required. An email address to send alerts to.
          &quot;minAcceptableMeanAveragePrecision&quot;: 3.14, # Required. A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version&#x27;s predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
        },
        &quot;exampleCount&quot;: 42, # Required. The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides `example_sample_percentage`: even if the service has not sampled enough predictions to fulfill `example_sample_percentage` during an interval, it stops sampling predictions when it meets this limit.
        &quot;exampleSamplePercentage&quot;: 3.14, # Required. Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
        &quot;humanAnnotationConfig&quot;: { # Configuration for how the human labeling task should be done. # Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to `true` for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the `instruction` field within this configuration.
          &quot;annotatedDatasetDescription&quot;: &quot;A String&quot;, # Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
          &quot;annotatedDatasetDisplayName&quot;: &quot;A String&quot;, # Required. A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
          &quot;contributorEmails&quot;: [ # Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
            &quot;A String&quot;,
          ],
          &quot;instruction&quot;: &quot;A String&quot;, # Required. Instruction resource name.
          &quot;labelGroup&quot;: &quot;A String&quot;, # Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression `[a-zA-Z\\d_-]{0,128}`.
          &quot;languageCode&quot;: &quot;A String&quot;, # Optional. The language of this question, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) code. Default value is en-US. You only need to set this when the task is language related; for example, French text classification.
          &quot;questionDuration&quot;: &quot;A String&quot;, # Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
          &quot;replicaCount&quot;: 42, # Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is 1. For image-related labeling, valid values are 1, 3, and 5.
          &quot;userEmailAddress&quot;: &quot;A String&quot;, # Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
        },
        &quot;imageClassificationConfig&quot;: { # Config for image classification human labeling task. # Specify this field if your model version performs image classification or general classification. `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet. `allowMultiLabel` in this configuration must match `classificationMetadata.isMultiLabel` in input_config.
          &quot;allowMultiLabel&quot;: True or False, # Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
          &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
          &quot;answerAggregationType&quot;: &quot;A String&quot;, # Optional. How to aggregate answers from contributors.
        },
        &quot;inputConfig&quot;: { # The configuration of input data, including data type, location, etc. # Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * `dataType` must be one of `IMAGE`, `TEXT`, or `GENERAL_DATA`. * `annotationType` must be one of `IMAGE_CLASSIFICATION_ANNOTATION`, `TEXT_CLASSIFICATION_ANNOTATION`, `GENERAL_CLASSIFICATION_ANNOTATION`, or `IMAGE_BOUNDING_BOX_ANNOTATION` (image object detection). * If your machine learning model performs classification, you must specify `classificationMetadata.isMultiLabel`. * You must specify `bigquerySource` (not `gcsSource`).
          &quot;annotationType&quot;: &quot;A String&quot;, # Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
          &quot;bigquerySource&quot;: { # The BigQuery location for input data. If used in an EvaluationJob, this is where the service saves the prediction input and output sampled from the model version. # Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
            &quot;inputUri&quot;: &quot;A String&quot;, # Required. BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the [correct schema](/ml-engine/docs/continuous-evaluation/create-job#table-schema). Provide the table URI in the following format: &quot;bq://{your_project_id}/{your_dataset_name}/{your_table_name}&quot; [Learn more](/ml-engine/docs/continuous-evaluation/create-job#table-schema).
          },
          &quot;classificationMetadata&quot;: { # Metadata for classification annotations. # Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
            &quot;isMultiLabel&quot;: True or False, # Whether the classification task is multi-label or not.
          },
          &quot;dataType&quot;: &quot;A String&quot;, # Required. Data type must be specified when the user tries to import data.
          &quot;gcsSource&quot;: { # Source of the Cloud Storage file to be imported. # Source located in Cloud Storage.
            &quot;inputUri&quot;: &quot;A String&quot;, # Required. The input URI of source file. This must be a Cloud Storage path (`gs://...`).
            &quot;mimeType&quot;: &quot;A String&quot;, # Required. The format of the source file. Only &quot;text/csv&quot; is supported.
          },
          &quot;textMetadata&quot;: { # Metadata for the text. # Required for text import, as language code must be specified.
            &quot;languageCode&quot;: &quot;A String&quot;, # The language of this text, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) code. Default value is en-US.
          },
        },
        &quot;textClassificationConfig&quot;: { # Config for text classification human labeling task. # Specify this field if your model version performs text classification. `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet. `allowMultiLabel` in this configuration must match `classificationMetadata.isMultiLabel` in input_config.
          &quot;allowMultiLabel&quot;: True or False, # Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
          &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
          &quot;sentimentConfig&quot;: { # Config for setting up sentiments. # Optional. Config for sentiment selection. Sentiment analysis is deprecated on the data labeling side because it is incompatible with uCAIP.
            &quot;enableLabelSentimentSelection&quot;: True or False, # If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
          },
        },
      },
      &quot;labelMissingGroundTruth&quot;: True or False, # Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to `true`. If you want to provide your own ground truth labels in the evaluation job&#x27;s BigQuery table, set this to `false`.
      &quot;modelVersion&quot;: &quot;A String&quot;, # Required. The [AI Platform Prediction model version](/ml-engine/docs/prediction-overview) to be evaluated. Prediction input and output are sampled from this model version. When creating an evaluation job, specify the model version in the following format: &quot;projects/{project_id}/models/{model_name}/versions/{version_name}&quot;. There can only be one evaluation job per model version.
      &quot;name&quot;: &quot;A String&quot;, # Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: &quot;projects/{project_id}/evaluationJobs/{evaluation_job_id}&quot;
      &quot;schedule&quot;: &quot;A String&quot;, # Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in [crontab format](/scheduler/docs/configuring/cron-job-schedules) or in an [English-like format](/appengine/docs/standard/python/config/cronref#schedule_format). Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
      &quot;state&quot;: &quot;A String&quot;, # Output only. Describes the current state of the job.
    },
  ],
  &quot;nextPageToken&quot;: &quot;A String&quot;, # A token to retrieve next page of results.
}</pre>
</div>
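<p>Example: a minimal sketch of calling <code>list</code> with a filter and reading one page of results. This assumes google-api-python-client is installed, credentials resolve via Application Default Credentials, and the project ID, model ID, and state shown are hypothetical placeholders.</p>
<pre>
from googleapiclient.discovery import build

# Build the Data Labeling Service client (v1beta1 surface).
service = build(&#x27;datalabeling&#x27;, &#x27;v1beta1&#x27;)

# List evaluation jobs in a project. The filter string follows the format
# described above; the model ID and state values here are placeholders.
response = service.projects().evaluationJobs().list(
    parent=&#x27;projects/my-project&#x27;,
    filter=&#x27;evaluation_job.model_id = my_model AND evaluation_job.state = SCHEDULED&#x27;,
    pageSize=50,
).execute()

for job in response.get(&#x27;evaluationJobs&#x27;, []):
    print(job[&#x27;name&#x27;], job.get(&#x27;state&#x27;))
</pre>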

<div class="method">
    <code class="details" id="list_next">list_next()</code>
  <pre>Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call &#x27;execute()&#x27; on to request the next
  page. Returns None if there are no more items in the collection.
</pre>
</div>
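<p>Example: paging through the full collection with <code>list_next</code>, continuing the hypothetical <code>service</code> object from the sketch above. <code>list_next</code> returns <code>None</code> once the collection is exhausted.</p>
<pre>
request = service.projects().evaluationJobs().list(parent=&#x27;projects/my-project&#x27;)
while request is not None:
    response = request.execute()
    for job in response.get(&#x27;evaluationJobs&#x27;, []):
        print(job[&#x27;name&#x27;])
    # Build the request for the next page; None signals no more pages.
    request = service.projects().evaluationJobs().list_next(request, response)
</pre>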

<div class="method">
    <code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code>
  <pre>Updates an evaluation job. You can only update certain fields of the job&#x27;s EvaluationJobConfig: `humanAnnotationConfig.instruction`, `exampleCount`, and `exampleSamplePercentage`. If you want to change any other aspect of the evaluation job, you must delete the job and create a new one.

Args:
  name: string, Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: &quot;projects/{project_id}/evaluationJobs/{evaluation_job_id}&quot; (required)
  body: object, The request body.
    The object takes the form of:

{ # Defines an evaluation job that runs periodically to generate Evaluations. [Creating an evaluation job](/ml-engine/docs/continuous-evaluation/create-job) is the starting point for using continuous evaluation.
  &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: &quot;projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}&quot;
  &quot;attempts&quot;: [ # Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
    { # Records a failed evaluation job run.
      &quot;attemptTime&quot;: &quot;A String&quot;,
      &quot;partialFailures&quot;: [ # Details of errors that occurred.
        { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors).
          &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
          &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
            {
              &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
            },
          ],
          &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
        },
      ],
    },
  ],
  &quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp of when this evaluation job was created.
  &quot;description&quot;: &quot;A String&quot;, # Required. Description of the job. The description can be up to 25,000 characters long.
  &quot;evaluationJobConfig&quot;: { # Configures specific details of how a continuous evaluation job works. Provide this configuration when you create an EvaluationJob. # Required. Configuration details for the evaluation job.
    &quot;bigqueryImportKeys&quot;: { # Required. Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * `data_json_key`: the data key for prediction input. You must provide either this key or `reference_json_key`. * `reference_json_key`: the data reference key for prediction input. You must provide either this key or `data_json_key`. * `label_json_key`: the label key for prediction output. Required. * `label_score_json_key`: the score key for prediction output. Required. * `bounding_box_json_key`: the bounding box key for prediction output. Required if your model version performs image object detection. Learn [how to configure prediction keys](/ml-engine/docs/continuous-evaluation/create-job#prediction-keys).
      &quot;a_key&quot;: &quot;A String&quot;,
    },
    &quot;boundingPolyConfig&quot;: { # Config for image bounding poly (and bounding box) human labeling task. # Specify this field if your model version performs image object detection (bounding box detection). `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
      &quot;instructionMessage&quot;: &quot;A String&quot;, # Optional. Instruction message shown on the contributor UI.
    },
    &quot;evaluationConfig&quot;: { # Configuration details used for calculating evaluation metrics and creating an Evaluation. # Required. Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the `boundingBoxEvaluationOptions` field within this configuration. Otherwise, provide an empty object for this configuration.
      &quot;boundingBoxEvaluationOptions&quot;: { # Options regarding evaluation between bounding boxes. # Only specify this field if the related model performs image object detection (`IMAGE_BOUNDING_BOX_ANNOTATION`). Describes how to evaluate bounding boxes.
        &quot;iouThreshold&quot;: 3.14, # Minimum [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
      },
    },
    &quot;evaluationJobAlertConfig&quot;: { # Provides details for how an evaluation job sends email alerts based on the results of a run. # Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
      &quot;email&quot;: &quot;A String&quot;, # Required. An email address to send alerts to.
      &quot;minAcceptableMeanAveragePrecision&quot;: 3.14, # Required. A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version&#x27;s predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    },
    &quot;exampleCount&quot;: 42, # Required. The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides `example_sample_percentage`: even if the service has not sampled enough predictions to fulfill `example_sample_percentage` during an interval, it stops sampling predictions when it meets this limit.
    &quot;exampleSamplePercentage&quot;: 3.14, # Required. Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    &quot;humanAnnotationConfig&quot;: { # Configuration for how the human labeling task should be done. # Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to `true` for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the `instruction` field within this configuration.
      &quot;annotatedDatasetDescription&quot;: &quot;A String&quot;, # Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
      &quot;annotatedDatasetDisplayName&quot;: &quot;A String&quot;, # Required. A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
      &quot;contributorEmails&quot;: [ # Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
        &quot;A String&quot;,
      ],
      &quot;instruction&quot;: &quot;A String&quot;, # Required. Instruction resource name.
      &quot;labelGroup&quot;: &quot;A String&quot;, # Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression `[a-zA-Z\\d_-]{0,128}`.
      &quot;languageCode&quot;: &quot;A String&quot;, # Optional. The language of this question, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) code. Default value is en-US. You only need to set this when the task is language related; for example, French text classification.
      &quot;questionDuration&quot;: &quot;A String&quot;, # Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
      &quot;replicaCount&quot;: 42, # Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is 1. For image-related labeling, valid values are 1, 3, and 5.
      &quot;userEmailAddress&quot;: &quot;A String&quot;, # Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
    },
    &quot;imageClassificationConfig&quot;: { # Config for image classification human labeling task. # Specify this field if your model version performs image classification or general classification. `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet. `allowMultiLabel` in this configuration must match `classificationMetadata.isMultiLabel` in input_config.
      &quot;allowMultiLabel&quot;: True or False, # Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
      &quot;answerAggregationType&quot;: &quot;A String&quot;, # Optional. How to aggregate answers from contributors.
    },
    &quot;inputConfig&quot;: { # The configuration of input data, including data type, location, etc. # Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * `dataType` must be one of `IMAGE`, `TEXT`, or `GENERAL_DATA`. * `annotationType` must be one of `IMAGE_CLASSIFICATION_ANNOTATION`, `TEXT_CLASSIFICATION_ANNOTATION`, `GENERAL_CLASSIFICATION_ANNOTATION`, or `IMAGE_BOUNDING_BOX_ANNOTATION` (image object detection). * If your machine learning model performs classification, you must specify `classificationMetadata.isMultiLabel`. * You must specify `bigquerySource` (not `gcsSource`).
      &quot;annotationType&quot;: &quot;A String&quot;, # Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
      &quot;bigquerySource&quot;: { # The BigQuery location for input data. If used in an EvaluationJob, this is where the service saves the prediction input and output sampled from the model version. # Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
        &quot;inputUri&quot;: &quot;A String&quot;, # Required. BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the [correct schema](/ml-engine/docs/continuous-evaluation/create-job#table-schema). Provide the table URI in the following format: &quot;bq://{your_project_id}/{your_dataset_name}/{your_table_name}&quot; [Learn more](/ml-engine/docs/continuous-evaluation/create-job#table-schema).
      },
      &quot;classificationMetadata&quot;: { # Metadata for classification annotations. # Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
        &quot;isMultiLabel&quot;: True or False, # Whether the classification task is multi-label or not.
      },
      &quot;dataType&quot;: &quot;A String&quot;, # Required. Data type must be specified when the user tries to import data.
      &quot;gcsSource&quot;: { # Source of the Cloud Storage file to be imported. # Source located in Cloud Storage.
        &quot;inputUri&quot;: &quot;A String&quot;, # Required. The input URI of source file. This must be a Cloud Storage path (`gs://...`).
        &quot;mimeType&quot;: &quot;A String&quot;, # Required. The format of the source file. Only &quot;text/csv&quot; is supported.
      },
      &quot;textMetadata&quot;: { # Metadata for the text. # Required for text import, as language code must be specified.
        &quot;languageCode&quot;: &quot;A String&quot;, # The language of this text, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) code. Default value is en-US.
      },
    },
    &quot;textClassificationConfig&quot;: { # Config for text classification human labeling task. # Specify this field if your model version performs text classification. `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet. `allowMultiLabel` in this configuration must match `classificationMetadata.isMultiLabel` in input_config.
      &quot;allowMultiLabel&quot;: True or False, # Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
      &quot;sentimentConfig&quot;: { # Config for setting up sentiments. # Optional. Config for sentiment selection. Sentiment analysis is deprecated on the data labeling side because it is incompatible with uCAIP.
        &quot;enableLabelSentimentSelection&quot;: True or False, # If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
      },
    },
  },
  &quot;labelMissingGroundTruth&quot;: True or False, # Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to `true`. If you want to provide your own ground truth labels in the evaluation job&#x27;s BigQuery table, set this to `false`.
  &quot;modelVersion&quot;: &quot;A String&quot;, # Required. The [AI Platform Prediction model version](/ml-engine/docs/prediction-overview) to be evaluated. Prediction input and output are sampled from this model version. When creating an evaluation job, specify the model version in the following format: &quot;projects/{project_id}/models/{model_name}/versions/{version_name}&quot;. There can only be one evaluation job per model version.
  &quot;name&quot;: &quot;A String&quot;, # Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: &quot;projects/{project_id}/evaluationJobs/{evaluation_job_id}&quot;
  &quot;schedule&quot;: &quot;A String&quot;, # Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in [crontab format](/scheduler/docs/configuring/cron-job-schedules) or in an [English-like format](/appengine/docs/standard/python/config/cronref#schedule_format). Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
  &quot;state&quot;: &quot;A String&quot;, # Output only. Describes the current state of the job.
}

  updateMask: string, Optional. Mask for which fields to update. You can only provide the following fields: * `evaluationJobConfig.humanAnnotationConfig.instruction` * `evaluationJobConfig.exampleCount` * `evaluationJobConfig.exampleSamplePercentage` You can provide more than one of these fields by separating them with commas.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Defines an evaluation job that runs periodically to generate Evaluations. [Creating an evaluation job](/ml-engine/docs/continuous-evaluation/create-job) is the starting point for using continuous evaluation.
  &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: &quot;projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}&quot;
  &quot;attempts&quot;: [ # Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
    { # Records a failed evaluation job run.
      &quot;attemptTime&quot;: &quot;A String&quot;,
      &quot;partialFailures&quot;: [ # Details of errors that occurred.
        { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors).
          &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
          &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
            {
              &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
            },
          ],
          &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
        },
      ],
    },
  ],
  &quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp of when this evaluation job was created.
  &quot;description&quot;: &quot;A String&quot;, # Required. Description of the job. The description can be up to 25,000 characters long.
  &quot;evaluationJobConfig&quot;: { # Configures specific details of how a continuous evaluation job works. Provide this configuration when you create an EvaluationJob. # Required. Configuration details for the evaluation job.
    &quot;bigqueryImportKeys&quot;: { # Required. Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * `data_json_key`: the data key for prediction input. You must provide either this key or `reference_json_key`. * `reference_json_key`: the data reference key for prediction input. You must provide either this key or `data_json_key`. * `label_json_key`: the label key for prediction output. Required. * `label_score_json_key`: the score key for prediction output. Required. * `bounding_box_json_key`: the bounding box key for prediction output. Required if your model version performs image object detection. Learn [how to configure prediction keys](/ml-engine/docs/continuous-evaluation/create-job#prediction-keys).
      &quot;a_key&quot;: &quot;A String&quot;,
    },
    &quot;boundingPolyConfig&quot;: { # Config for image bounding poly (and bounding box) human labeling task. # Specify this field if your model version performs image object detection (bounding box detection). `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
      &quot;instructionMessage&quot;: &quot;A String&quot;, # Optional. Instruction message shown on the contributor UI.
    },
    &quot;evaluationConfig&quot;: { # Configuration details used for calculating evaluation metrics and creating an Evaluation. # Required. Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the `boundingBoxEvaluationOptions` field within this configuration. Otherwise, provide an empty object for this configuration.
      &quot;boundingBoxEvaluationOptions&quot;: { # Options regarding evaluation between bounding boxes. # Only specify this field if the related model performs image object detection (`IMAGE_BOUNDING_BOX_ANNOTATION`). Describes how to evaluate bounding boxes.
        &quot;iouThreshold&quot;: 3.14, # Minimum [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
      },
    },
    &quot;evaluationJobAlertConfig&quot;: { # Provides details for how an evaluation job sends email alerts based on the results of a run. # Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
      &quot;email&quot;: &quot;A String&quot;, # Required. An email address to send alerts to.
      &quot;minAcceptableMeanAveragePrecision&quot;: 3.14, # Required. A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version&#x27;s predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
    },
    &quot;exampleCount&quot;: 42, # Required. The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides `example_sample_percentage`: even if the service has not sampled enough predictions to fulfill `example_sample_percentage` during an interval, it stops sampling predictions when it meets this limit.
    &quot;exampleSamplePercentage&quot;: 3.14, # Required. Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
    &quot;humanAnnotationConfig&quot;: { # Configuration for how the human labeling task should be done. # Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to `true` for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the `instruction` field within this configuration.
      &quot;annotatedDatasetDescription&quot;: &quot;A String&quot;, # Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
      &quot;annotatedDatasetDisplayName&quot;: &quot;A String&quot;, # Required. A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
      &quot;contributorEmails&quot;: [ # Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
        &quot;A String&quot;,
      ],
      &quot;instruction&quot;: &quot;A String&quot;, # Required. Instruction resource name.
      &quot;labelGroup&quot;: &quot;A String&quot;, # Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression `[a-zA-Z\\d_-]{0,128}`.
      &quot;languageCode&quot;: &quot;A String&quot;, # Optional. The language of this question, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) code. Default value is en-US. You only need to set this when the task is language related; for example, French text classification.
      &quot;questionDuration&quot;: &quot;A String&quot;, # Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
      &quot;replicaCount&quot;: 42, # Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is 1. For image-related labeling, valid values are 1, 3, and 5.
      &quot;userEmailAddress&quot;: &quot;A String&quot;, # Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
    },
    &quot;imageClassificationConfig&quot;: { # Config for image classification human labeling task. # Specify this field if your model version performs image classification or general classification. `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet. `allowMultiLabel` in this configuration must match `classificationMetadata.isMultiLabel` in input_config.
      &quot;allowMultiLabel&quot;: True or False, # Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
      &quot;answerAggregationType&quot;: &quot;A String&quot;, # Optional. How to aggregate answers from contributors.
    },
    &quot;inputConfig&quot;: { # The configuration of input data, including data type, location, etc. # Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * `dataType` must be one of `IMAGE`, `TEXT`, or `GENERAL_DATA`. * `annotationType` must be one of `IMAGE_CLASSIFICATION_ANNOTATION`, `TEXT_CLASSIFICATION_ANNOTATION`, `GENERAL_CLASSIFICATION_ANNOTATION`, or `IMAGE_BOUNDING_BOX_ANNOTATION` (image object detection). * If your machine learning model performs classification, you must specify `classificationMetadata.isMultiLabel`. * You must specify `bigquerySource` (not `gcsSource`).
      &quot;annotationType&quot;: &quot;A String&quot;, # Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
      &quot;bigquerySource&quot;: { # The BigQuery location for input data. If used in an EvaluationJob, this is where the service saves the prediction input and output sampled from the model version. # Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
        &quot;inputUri&quot;: &quot;A String&quot;, # Required. BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the [correct schema](/ml-engine/docs/continuous-evaluation/create-job#table-schema). Provide the table URI in the following format: &quot;bq://{your_project_id}/{your_dataset_name}/{your_table_name}&quot; [Learn more](/ml-engine/docs/continuous-evaluation/create-job#table-schema).
      },
      &quot;classificationMetadata&quot;: { # Metadata for classification annotations. # Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
        &quot;isMultiLabel&quot;: True or False, # Whether the classification task is multi-label or not.
      },
      &quot;dataType&quot;: &quot;A String&quot;, # Required. Data type must be specified when the user tries to import data.
      &quot;gcsSource&quot;: { # Source of the Cloud Storage file to be imported. # Source located in Cloud Storage.
        &quot;inputUri&quot;: &quot;A String&quot;, # Required. The input URI of source file. This must be a Cloud Storage path (`gs://...`).
        &quot;mimeType&quot;: &quot;A String&quot;, # Required. The format of the source file. Only &quot;text/csv&quot; is supported.
      },
      &quot;textMetadata&quot;: { # Metadata for the text. # Required for text import, as language code must be specified.
        &quot;languageCode&quot;: &quot;A String&quot;, # The language of this text, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) code. Default value is en-US.
      },
    },
    &quot;textClassificationConfig&quot;: { # Config for text classification human labeling task. # Specify this field if your model version performs text classification. `annotationSpecSet` in this configuration must match EvaluationJob.annotationSpecSet. `allowMultiLabel` in this configuration must match `classificationMetadata.isMultiLabel` in input_config.
      &quot;allowMultiLabel&quot;: True or False, # Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
      &quot;annotationSpecSet&quot;: &quot;A String&quot;, # Required. Annotation spec set resource name.
      &quot;sentimentConfig&quot;: { # Config for setting up sentiments. # Optional. Config for sentiment selection. Sentiment analysis is deprecated on the data labeling side because it is incompatible with uCAIP.
        &quot;enableLabelSentimentSelection&quot;: True or False, # If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
      },
    },
  },
  &quot;labelMissingGroundTruth&quot;: True or False, # Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to `true`. If you want to provide your own ground truth labels in the evaluation job&#x27;s BigQuery table, set this to `false`.
  &quot;modelVersion&quot;: &quot;A String&quot;, # Required. The [AI Platform Prediction model version](/ml-engine/docs/prediction-overview) to be evaluated. Prediction input and output are sampled from this model version. When creating an evaluation job, specify the model version in the following format: &quot;projects/{project_id}/models/{model_name}/versions/{version_name}&quot;. There can only be one evaluation job per model version.
  &quot;name&quot;: &quot;A String&quot;, # Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: &quot;projects/{project_id}/evaluationJobs/{evaluation_job_id}&quot;
  &quot;schedule&quot;: &quot;A String&quot;, # Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in [crontab format](/scheduler/docs/configuring/cron-job-schedules) or in an [English-like format](/appengine/docs/standard/python/config/cronref#schedule_format). Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
  &quot;state&quot;: &quot;A String&quot;, # Output only. Describes the current state of the job.
}</pre>
</div>
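<p>Example: a hedged sketch of calling <code>patch</code> to change one of the three mutable fields. The job name and new sample count are hypothetical placeholders; <code>updateMask</code> restricts the update to the named field.</p>
<pre>
# Update only exampleCount, leaving the rest of the job untouched.
updated_job = service.projects().evaluationJobs().patch(
    name=&#x27;projects/my-project/evaluationJobs/1234567890&#x27;,  # hypothetical job
    updateMask=&#x27;evaluationJobConfig.exampleCount&#x27;,
    body={
        &#x27;evaluationJobConfig&#x27;: {
            &#x27;exampleCount&#x27;: 500,  # new per-interval sampling cap
        },
    },
).execute()
print(updated_job[&#x27;state&#x27;])
</pre>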

<div class="method">
    <code class="details" id="pause">pause(name, body=None, x__xgafv=None)</code>
  <pre>Pauses an evaluation job. Pausing an evaluation job that is already in a `PAUSED` state is a no-op.

Args:
  name: string, Required. Name of the evaluation job that is going to be paused. Format: &quot;projects/{project_id}/evaluationJobs/{evaluation_job_id}&quot; (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for PauseEvaluationJob.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
}</pre>
</div>
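<p>Example: pausing a job (hypothetical job name). The request body is an empty PauseEvaluationJob message, and the call returns an empty response on success.</p>
<pre>
service.projects().evaluationJobs().pause(
    name=&#x27;projects/my-project/evaluationJobs/1234567890&#x27;,  # hypothetical job
    body={},
).execute()
</pre>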

<div class="method">
    <code class="details" id="resume">resume(name, body=None, x__xgafv=None)</code>
  <pre>Resumes a paused evaluation job. A deleted evaluation job can&#x27;t be resumed. Resuming a running or scheduled evaluation job is a no-op.

Args:
  name: string, Required. Name of the evaluation job that is going to be resumed. Format: &quot;projects/{project_id}/evaluationJobs/{evaluation_job_id}&quot; (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for ResumeEvaluationJob.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
}</pre>
</div>
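<p>Example: resuming the same hypothetical job. Resuming a running or scheduled job is a no-op, so the call is safe to repeat.</p>
<pre>
service.projects().evaluationJobs().resume(
    name=&#x27;projects/my-project/evaluationJobs/1234567890&#x27;,  # hypothetical job
    body={},
).execute()
</pre>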

</body></html>