<html><body>
<style>
body, h1, h2, h3, div, span, p, pre, a {
margin: 0;
padding: 0;
border: 0;
font-weight: inherit;
font-style: inherit;
font-size: 100%;
font-family: inherit;
vertical-align: baseline;
}
body {
font-size: 13px;
padding: 1em;
}
h1 {
font-size: 26px;
margin-bottom: 1em;
}
h2 {
font-size: 24px;
margin-bottom: 1em;
}
h3 {
font-size: 20px;
margin-bottom: 1em;
margin-top: 1em;
}
pre, code {
line-height: 1.5;
font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}
pre {
margin-top: 0.5em;
}
h1, h2, h3, p {
font-family: Arial, sans-serif;
}
h1, h2, h3 {
border-bottom: solid #CCC 1px;
}
.toc_element {
margin-top: 0.5em;
}
.firstline {
margin-left: 2em;
}
.method {
margin-top: 1em;
border: solid 1px #CCC;
padding: 1em;
background: #EEE;
}
.details {
font-weight: bold;
font-size: 14px;
}
</style>
<h1><a href="aiplatform_v1.html">Vertex AI API</a> . <a href="aiplatform_v1.projects.html">projects</a> . <a href="aiplatform_v1.projects.locations.html">locations</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.batchPredictionJobs.html">batchPredictionJobs()</a></code>
</p>
<p class="firstline">Returns the batchPredictionJobs Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.cachedContents.html">cachedContents()</a></code>
</p>
<p class="firstline">Returns the cachedContents Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.customJobs.html">customJobs()</a></code>
</p>
<p class="firstline">Returns the customJobs Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.dataLabelingJobs.html">dataLabelingJobs()</a></code>
</p>
<p class="firstline">Returns the dataLabelingJobs Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.datasets.html">datasets()</a></code>
</p>
<p class="firstline">Returns the datasets Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.deploymentResourcePools.html">deploymentResourcePools()</a></code>
</p>
<p class="firstline">Returns the deploymentResourcePools Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.endpoints.html">endpoints()</a></code>
</p>
<p class="firstline">Returns the endpoints Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.featureGroups.html">featureGroups()</a></code>
</p>
<p class="firstline">Returns the featureGroups Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.featureOnlineStores.html">featureOnlineStores()</a></code>
</p>
<p class="firstline">Returns the featureOnlineStores Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.featurestores.html">featurestores()</a></code>
</p>
<p class="firstline">Returns the featurestores Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.hyperparameterTuningJobs.html">hyperparameterTuningJobs()</a></code>
</p>
<p class="firstline">Returns the hyperparameterTuningJobs Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.indexEndpoints.html">indexEndpoints()</a></code>
</p>
<p class="firstline">Returns the indexEndpoints Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.indexes.html">indexes()</a></code>
</p>
<p class="firstline">Returns the indexes Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.metadataStores.html">metadataStores()</a></code>
</p>
<p class="firstline">Returns the metadataStores Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.migratableResources.html">migratableResources()</a></code>
</p>
<p class="firstline">Returns the migratableResources Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.modelDeploymentMonitoringJobs.html">modelDeploymentMonitoringJobs()</a></code>
</p>
<p class="firstline">Returns the modelDeploymentMonitoringJobs Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.models.html">models()</a></code>
</p>
<p class="firstline">Returns the models Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.nasJobs.html">nasJobs()</a></code>
</p>
<p class="firstline">Returns the nasJobs Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.notebookExecutionJobs.html">notebookExecutionJobs()</a></code>
</p>
<p class="firstline">Returns the notebookExecutionJobs Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.notebookRuntimeTemplates.html">notebookRuntimeTemplates()</a></code>
</p>
<p class="firstline">Returns the notebookRuntimeTemplates Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.notebookRuntimes.html">notebookRuntimes()</a></code>
</p>
<p class="firstline">Returns the notebookRuntimes Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.operations.html">operations()</a></code>
</p>
<p class="firstline">Returns the operations Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.persistentResources.html">persistentResources()</a></code>
</p>
<p class="firstline">Returns the persistentResources Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.pipelineJobs.html">pipelineJobs()</a></code>
</p>
<p class="firstline">Returns the pipelineJobs Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.publishers.html">publishers()</a></code>
</p>
<p class="firstline">Returns the publishers Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.ragCorpora.html">ragCorpora()</a></code>
</p>
<p class="firstline">Returns the ragCorpora Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.ragEngineConfig.html">ragEngineConfig()</a></code>
</p>
<p class="firstline">Returns the ragEngineConfig Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.reasoningEngines.html">reasoningEngines()</a></code>
</p>
<p class="firstline">Returns the reasoningEngines Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.schedules.html">schedules()</a></code>
</p>
<p class="firstline">Returns the schedules Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.specialistPools.html">specialistPools()</a></code>
</p>
<p class="firstline">Returns the specialistPools Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.studies.html">studies()</a></code>
</p>
<p class="firstline">Returns the studies Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.tensorboards.html">tensorboards()</a></code>
</p>
<p class="firstline">Returns the tensorboards Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.trainingPipelines.html">trainingPipelines()</a></code>
</p>
<p class="firstline">Returns the trainingPipelines Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.tuningJobs.html">tuningJobs()</a></code>
</p>
<p class="firstline">Returns the tuningJobs Resource.</p>
<p class="toc_element">
<code><a href="#augmentPrompt">augmentPrompt(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Given an input prompt, returns an augmented prompt from the Vertex RAG store to guide the LLM towards generating grounded responses.</p>
<p class="toc_element">
<code><a href="#close">close()</a></code></p>
<p class="firstline">Close httplib2 connections.</p>
<p class="toc_element">
<code><a href="#corroborateContent">corroborateContent(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Given an input text, returns a score that evaluates its factuality. It also extracts and returns claims from the text, along with supporting facts.</p>
<p class="toc_element">
<code><a href="#deploy">deploy(destination, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Deploys a model to a new endpoint.</p>
<p class="toc_element">
<code><a href="#evaluateDataset">evaluateDataset(location, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Evaluates a dataset based on a set of given metrics.</p>
<p class="toc_element">
<code><a href="#evaluateInstances">evaluateInstances(location, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Evaluates instances based on a given metric.</p>
<p class="toc_element">
<code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets information about a location.</p>
<p class="toc_element">
<code><a href="#getRagEngineConfig">getRagEngineConfig(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets a RagEngineConfig.</p>
<p class="toc_element">
<code><a href="#list">list(name, extraLocationTypes=None, filter=None, pageSize=None, pageToken=None, x__xgafv=None)</a></code></p>
<p class="firstline">Lists information about the supported locations for this service.</p>
<p class="toc_element">
<code><a href="#list_next">list_next()</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
<code><a href="#retrieveContexts">retrieveContexts(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Retrieves relevant contexts for a query.</p>
<p class="toc_element">
<code><a href="#updateRagEngineConfig">updateRagEngineConfig(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates a RagEngineConfig.</p>
<h3>Method Details</h3>
<div class="method">
<code class="details" id="augmentPrompt">augmentPrompt(parent, body=None, x__xgafv=None)</code>
<pre>Given an input prompt, returns an augmented prompt from the Vertex RAG store to guide the LLM towards generating grounded responses.
Args:
parent: string, Required. The resource name of the Location from which to augment the prompt. The caller must have permission to make the call in the project. Format: `projects/{project}/locations/{location}`. (required)
body: object, The request body.
The object takes the form of:
{ # Request message for AugmentPrompt.
"contents": [ # Optional. Input content to augment, only text format is supported for now.
{ # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file data. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
],
"model": { # Metadata of the backend deployed model. # Optional. Metadata of the backend deployed model.
"model": "A String", # Optional. The model that the user will send the augmented prompt for content generation.
"modelVersion": "A String", # Optional. The model version of the backend deployed model.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Optional. Retrieves contexts from the Vertex RagStore.
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. List of RagFile IDs. The files must belong to the RagCorpus set in the rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for AugmentPrompt.
"augmentedPrompt": [ # Augmented prompt, only text format is supported for now.
{ # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file data. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
],
"facts": [ # Retrieved facts from RAG data sources.
{ # The fact used in grounding.
"chunk": { # A RagChunk includes the content of a chunk of a RagFile, and associated metadata. # If present, chunk properties.
"pageSpan": { # Represents where the chunk starts and ends in the document. # If populated, represents where the chunk starts and ends in the document.
"firstPage": 42, # Page where chunk starts in the document. Inclusive. 1-indexed.
"lastPage": 42, # Page where chunk ends in the document. Inclusive. 1-indexed.
},
"text": "A String", # The content of the chunk.
},
"query": "A String", # Query that is used to retrieve this fact.
"score": 3.14, # If present, according to the underlying Vector DB and the selected metric type, the score can be either the distance or the similarity between the query and the fact and its range depends on the metric type. For example, if the metric type is COSINE_DISTANCE, it represents the distance between the query and the fact. The larger the distance, the less relevant the fact is to the query. The range is [0, 2], while 0 means the most relevant and 2 means the least relevant.
"summary": "A String", # If present, the summary/snippet of the fact.
"title": "A String", # If present, it refers to the title of this fact.
"uri": "A String", # If present, this uri links to the source of the fact.
"vectorDistance": 3.14, # If present, the distance between the query vector and this fact vector.
},
],
}</pre>
</div>
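When the metric type is COSINE_DISTANCE, the `score` field on each fact above is a distance in [0, 2], with 0 most relevant. A minimal client-side sketch for converting that score to a cosine similarity and ranking facts by relevance (the helper names are illustrative, not part of the API):

```python
def cosine_distance_to_similarity(score):
    """Convert a COSINE_DISTANCE fact score (range [0, 2], 0 = most
    relevant) into a cosine similarity in [-1, 1] (hypothetical helper)."""
    if not 0.0 <= score <= 2.0:
        raise ValueError("COSINE_DISTANCE scores fall in [0, 2]")
    return 1.0 - score

def rank_facts(facts):
    """Sort fact dicts from the `facts` list above so the most relevant
    come first; a fact with no score is treated as least relevant."""
    return sorted(facts, key=lambda f: f.get("score", 2.0))

facts = [
    {"title": "b", "score": 1.4},
    {"title": "a", "score": 0.2},
    {"title": "c"},  # no score reported for this fact
]
ranked = rank_facts(facts)
```

Note this mapping only applies when the configured metric is COSINE_DISTANCE; for other metrics the score may already be a similarity.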
<div class="method">
<code class="details" id="close">close()</code>
<pre>Close httplib2 connections.</pre>
</div>
<div class="method">
<code class="details" id="corroborateContent">corroborateContent(parent, body=None, x__xgafv=None)</code>
<pre>Given an input text, it returns a score that evaluates the factuality of the text. It also extracts and returns claims from the text and provides supporting facts.
Args:
parent: string, Required. The resource name of the Location from which to corroborate text. The user must have permission to make this call in the project. Format: `projects/{project}/locations/{location}`. (required)
body: object, The request body.
The object takes the form of:
{ # Request message for CorroborateContent.
"content": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Optional. Input content to corroborate, only text format is supported for now.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
"facts": [ # Optional. Facts used to generate the text can also be used to corroborate the text.
{ # The fact used in grounding.
"chunk": { # A RagChunk includes the content of a chunk of a RagFile, and associated metadata. # If present, chunk properties.
"pageSpan": { # Represents where the chunk starts and ends in the document. # If populated, represents where the chunk starts and ends in the document.
"firstPage": 42, # Page where chunk starts in the document. Inclusive. 1-indexed.
"lastPage": 42, # Page where chunk ends in the document. Inclusive. 1-indexed.
},
"text": "A String", # The content of the chunk.
},
"query": "A String", # Query that is used to retrieve this fact.
"score": 3.14, # If present, according to the underlying Vector DB and the selected metric type, the score can be either the distance or the similarity between the query and the fact and its range depends on the metric type. For example, if the metric type is COSINE_DISTANCE, it represents the distance between the query and the fact. The larger the distance, the less relevant the fact is to the query. The range is [0, 2], while 0 means the most relevant and 2 means the least relevant.
"summary": "A String", # If present, the summary/snippet of the fact.
"title": "A String", # If present, it refers to the title of this fact.
"uri": "A String", # If present, this uri links to the source of the fact.
"vectorDistance": 3.14, # If present, the distance between the query vector and this fact vector.
},
],
"parameters": { # Parameters that can be overrided per request. # Optional. Parameters that can be set to override default settings per request.
"citationThreshold": 3.14, # Optional. Only return claims with citation score larger than the threshold.
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for CorroborateContent.
"claims": [ # Claims that are extracted from the input content and facts that support the claims.
{ # Claim that is extracted from the input text and facts that support it.
"endIndex": 42, # Index in the input text where the claim ends (exclusive).
"factIndexes": [ # Indexes of the facts supporting this claim.
42,
],
"score": 3.14, # Confidence score of this corroboration.
"startIndex": 42, # Index in the input text where the claim starts (inclusive).
},
],
"corroborationScore": 3.14, # Confidence score of corroborating content. Value is [0,1] with 1 is the most confidence.
}</pre>
</div>
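The corroborateContent request and response shapes above can be assembled and interpreted locally. The sketch below is a hedged illustration: the helper names are hypothetical, and an actual call would go through `googleapiclient.discovery.build('aiplatform', ...)` and `projects().locations().corroborateContent(parent=..., body=...)`, which requires credentials and is not executed here.

```python
def build_corroborate_request(text, fact_texts, citation_threshold=0.5):
    """Assemble a CorroborateContent request body (hypothetical helper)
    from input text and plain-string supporting facts, following the
    schema documented above."""
    return {
        "content": {
            "role": "user",
            "parts": [{"text": text}],  # only text format is supported for now
        },
        "facts": [{"chunk": {"text": t}} for t in fact_texts],
        "parameters": {"citationThreshold": citation_threshold},
    }

def extract_claims(text, response):
    """Pair each claim span in a CorroborateContentResponse with its
    supporting fact indexes, using the documented inclusive startIndex
    and exclusive endIndex."""
    return [
        {
            "claim": text[c["startIndex"]:c["endIndex"]],
            "score": c.get("score"),
            "factIndexes": c.get("factIndexes", []),
        }
        for c in response.get("claims", [])
    ]

text = "Paris is the capital of France."
body = build_corroborate_request(text, ["France's capital is Paris."])

# A response shaped like the documented CorroborateContentResponse:
fake_response = {
    "claims": [{"startIndex": 0, "endIndex": 31, "factIndexes": [0], "score": 0.9}],
    "corroborationScore": 0.9,
}
claims = extract_claims(text, fake_response)
```

The request body would be passed as `body=` to the method above; only `content` is required, while `facts` and `parameters` are optional.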
<div class="method">
<code class="details" id="deploy">deploy(destination, body=None, x__xgafv=None)</code>
<pre>Deploys a model to a new endpoint.
Args:
destination: string, Required. The resource name of the Location to deploy the model in. Format: `projects/{project}/locations/{location}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for ModelGardenService.Deploy.
"deployConfig": { # The deploy config to use for the deployment. # Optional. The deploy config to use for the deployment. If not specified, the default deploy config will be used.
"dedicatedResources": { # A description of resources that are dedicated to a DeployedModel or DeployedIndex, and that need a higher degree of manual configuration. # Optional. The dedicated resources to use for the endpoint. If not set, the default resources will be used.
"autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
{ # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
"metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization` * `aiplatform.googleapis.com/prediction/online/request_count`
"target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
},
],
"machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine being used.
"acceleratorCount": 42, # The number of accelerators to attach to the machine.
"acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
"machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
"reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
"key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
"reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
"values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation or reservation block.
"A String",
],
},
"tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
},
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
"minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas that will be always deployed on. This value must be greater than or equal to 1. If traffic increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
"requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial deployment/mutation is desired. If set, the deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
"spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
},
"fastTryoutEnabled": True or False, # Optional. If true, enable the QMT fast tryout feature for this model if possible.
"systemLabels": { # Optional. System labels for Model Garden deployments. These labels are managed by Google and for tracking purposes only.
"a_key": "A String",
},
},
"endpointConfig": { # The endpoint config to use for the deployment. # Optional. The endpoint config to use for the deployment. If not specified, the default endpoint config will be used.
"dedicatedEndpointDisabled": True or False, # Optional. By default, if dedicated endpoint is enabled, the endpoint will be exposed through a dedicated DNS [Endpoint.dedicated_endpoint_dns]. Your request to the dedicated DNS will be isolated from other users' traffic and will have better performance and reliability. Note: Once you enabled dedicated endpoint, you won't be able to send request to the shared DNS {region}-aiplatform.googleapis.com. The limitations will be removed soon. If this field is set to true, the dedicated endpoint will be disabled and the deployed model will be exposed through the shared DNS {region}-aiplatform.googleapis.com.
"dedicatedEndpointEnabled": True or False, # Optional. Deprecated. Use dedicated_endpoint_disabled instead. If true, the endpoint will be exposed through a dedicated DNS [Endpoint.dedicated_endpoint_dns]. Your request to the dedicated DNS will be isolated from other users' traffic and will have better performance and reliability. Note: Once you enabled dedicated endpoint, you won't be able to send request to the shared DNS {region}-aiplatform.googleapis.com. The limitations will be removed soon.
"endpointDisplayName": "A String", # Optional. The user-specified display name of the endpoint. If not set, a default name will be used.
"endpointUserId": "A String", # Optional. Immutable. The ID to use for endpoint, which will become the final component of the endpoint resource name. If not provided, Vertex AI will generate a value for this ID. If the first character is a letter, this value may be up to 63 characters, and valid characters are `[a-z0-9-]`. The last character must be a letter or number. If the first character is a number, this value may be up to 9 characters, and valid characters are `[0-9]` with no leading zeros. When using HTTP/JSON, this field is populated based on a query string argument, such as `?endpoint_id=12345`. This is the fallback for fields that are not included in either the URI or the body.
},
"huggingFaceModelId": "A String", # The Hugging Face model to deploy. Format: Hugging Face model ID like `google/gemma-2-2b-it`.
"modelConfig": { # The model config to use for the deployment. # Optional. The model config to use for the deployment. If not specified, the default model config will be used.
"acceptEula": True or False, # Optional. Whether the user accepts the End User License Agreement (EULA) for the model.
"containerSpec": { # Specification of a container for serving predictions. Some fields in this message correspond to fields in the [Kubernetes Container v1 core specification](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). # Optional. The specification of the container that is to be used when deploying. If not set, the default container spec will be used.
"args": [ # Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's [`CMD`](https://docs.docker.com/engine/reference/builder/#cmd). Specify this field as an array of executable and arguments, similar to a Docker `CMD`'s "default parameters" form. If you don't specify this field but do specify the command field, then the command from the `command` field runs without any additional arguments. See the [Kubernetes documentation about how the `command` and `args` fields interact with a container's `ENTRYPOINT` and `CMD`](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes). If you don't specify this field and don't specify the `command` field, then the container's [`ENTRYPOINT`](https://docs.docker.com/engine/reference/builder/#cmd) and `CMD` determine what runs based on their default behavior. See the Docker documentation about [how `CMD` and `ENTRYPOINT` interact](https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact). In this field, you can reference [environment variables set by Vertex AI](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables) and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with `$$`; for example: $$(VARIABLE_NAME) This field corresponds to the `args` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core).
"A String",
],
"command": [ # Immutable. Specifies the command that runs when the container starts. This overrides the container's [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint). Specify this field as an array of executable and arguments, similar to a Docker `ENTRYPOINT`'s "exec" form, not its "shell" form. If you do not specify this field, then the container's `ENTRYPOINT` runs, in conjunction with the args field or the container's [`CMD`](https://docs.docker.com/engine/reference/builder/#cmd), if either exists. If this field is not specified and the container does not have an `ENTRYPOINT`, then refer to the Docker documentation about [how `CMD` and `ENTRYPOINT` interact](https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact). If you specify this field, then you can also specify the `args` field to provide additional arguments for this command. However, if you specify this field, then the container's `CMD` is ignored. See the [Kubernetes documentation about how the `command` and `args` fields interact with a container's `ENTRYPOINT` and `CMD`](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes). In this field, you can reference [environment variables set by Vertex AI](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables) and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. 
To avoid variable expansion, you can escape this syntax with `$$`; for example: $$(VARIABLE_NAME) This field corresponds to the `command` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core).
"A String",
],
"deploymentTimeout": "A String", # Immutable. Deployment timeout. Limit for deployment timeout is 2 hours.
"env": [ # Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable `VAR_2` to have the value `foo bar`: ```json [ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ] ``` If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the `env` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core).
{ # Represents an environment variable present in a Container or Python Module.
"name": "A String", # Required. Name of the environment variable. Must be a valid C identifier.
"value": "A String", # Required. Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
},
],
"grpcPorts": [ # Immutable. List of ports to expose from the container. Vertex AI sends gRPC prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port. If you do not specify this field, gRPC requests to the container will be disabled. Vertex AI does not use ports other than the first one listed. This field corresponds to the `ports` field of the Kubernetes Containers v1 core API.
{ # Represents a network port in a container.
"containerPort": 42, # The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive.
},
],
"healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe.
"exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command.
"command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
"A String",
],
},
"failureThreshold": 42, # Number of consecutive failures before the probe is considered failed. Defaults to 3. Minimum value is 1. Maps to Kubernetes probe argument 'failureThreshold'.
"grpc": { # GrpcAction checks the health of a container using a gRPC service. # GrpcAction probes the health of a container by sending a gRPC request.
"port": 42, # Port number of the gRPC service. Number must be in the range 1 to 65535.
"service": "A String", # Service is the name of the service to place in the gRPC HealthCheckRequest. See https://github.com/grpc/grpc/blob/master/doc/health-checking.md. If this is not specified, the default behavior is defined by gRPC.
},
"httpGet": { # HttpGetAction describes an action based on HTTP Get requests. # HttpGetAction probes the health of a container by sending an HTTP GET request.
"host": "A String", # Host name to connect to, defaults to the model serving container's IP. You probably want to set "Host" in httpHeaders instead.
"httpHeaders": [ # Custom headers to set in the request. HTTP allows repeated headers.
{ # HttpHeader describes a custom header to be used in HTTP probes
"name": "A String", # The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header.
"value": "A String", # The header field value
},
],
"path": "A String", # Path to access on the HTTP server.
"port": 42, # Number of the port to access on the container. Number must be in the range 1 to 65535.
"scheme": "A String", # Scheme to use for connecting to the host. Defaults to HTTP. Acceptable values are "HTTP" or "HTTPS".
},
"initialDelaySeconds": 42, # Number of seconds to wait before starting the probe. Defaults to 0. Minimum value is 0. Maps to Kubernetes probe argument 'initialDelaySeconds'.
"periodSeconds": 42, # How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'.
"successThreshold": 42, # Number of consecutive successes before the probe is considered successful. Defaults to 1. Minimum value is 1. Maps to Kubernetes probe argument 'successThreshold'.
"tcpSocket": { # TcpSocketAction probes the health of a container by opening a TCP socket connection. # TcpSocketAction probes the health of a container by opening a TCP socket connection.
"host": "A String", # Optional: Host name to connect to, defaults to the model serving container's IP.
"port": 42, # Number of the port to access on the container. Number must be in the range 1 to 65535.
},
"timeoutSeconds": 42, # Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater or equal to period_seconds. Maps to Kubernetes probe argument 'timeoutSeconds'.
},
"healthRoute": "A String", # Immutable. HTTP path on the container to send health checks to. Vertex AI intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about [health checks](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#health). For example, if you set this field to `/bar`, then Vertex AI intermittently sends a GET request to the `/bar` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/ DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).)
"imageUri": "A String", # Required. Immutable. URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry. Learn more about the [container publishing requirements](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#publishing), including permissions requirements for the Vertex AI Service Agent. The container image is ingested upon ModelService.UploadModel, stored internally, and this original path is afterwards not used. To learn about the requirements for the Docker image itself, see [Custom container requirements](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#). You can use the URI to one of Vertex AI's [pre-built container images for prediction](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers) in this field.
"invokeRoutePrefix": "A String", # Immutable. Invoke route prefix for the custom container. "/*" is the only supported value right now. By setting this field, any non-root route on this model will be accessible with invoke http call eg: "/invoke/foo/bar", however the [PredictionService.Invoke] RPC is not supported yet. Only one of `predict_route` or `invoke_route_prefix` can be set, and we default to using `predict_route` if this field is not set. If this field is set, the Model can only be deployed to dedicated endpoint.
"livenessProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes liveness probe.
"exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command.
"command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
"A String",
],
},
"failureThreshold": 42, # Number of consecutive failures before the probe is considered failed. Defaults to 3. Minimum value is 1. Maps to Kubernetes probe argument 'failureThreshold'.
"grpc": { # GrpcAction checks the health of a container using a gRPC service. # GrpcAction probes the health of a container by sending a gRPC request.
"port": 42, # Port number of the gRPC service. Number must be in the range 1 to 65535.
"service": "A String", # Service is the name of the service to place in the gRPC HealthCheckRequest. See https://github.com/grpc/grpc/blob/master/doc/health-checking.md. If this is not specified, the default behavior is defined by gRPC.
},
"httpGet": { # HttpGetAction describes an action based on HTTP Get requests. # HttpGetAction probes the health of a container by sending an HTTP GET request.
"host": "A String", # Host name to connect to, defaults to the model serving container's IP. You probably want to set "Host" in httpHeaders instead.
"httpHeaders": [ # Custom headers to set in the request. HTTP allows repeated headers.
{ # HttpHeader describes a custom header to be used in HTTP probes
"name": "A String", # The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header.
"value": "A String", # The header field value
},
],
"path": "A String", # Path to access on the HTTP server.
"port": 42, # Number of the port to access on the container. Number must be in the range 1 to 65535.
"scheme": "A String", # Scheme to use for connecting to the host. Defaults to HTTP. Acceptable values are "HTTP" or "HTTPS".
},
"initialDelaySeconds": 42, # Number of seconds to wait before starting the probe. Defaults to 0. Minimum value is 0. Maps to Kubernetes probe argument 'initialDelaySeconds'.
"periodSeconds": 42, # How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'.
"successThreshold": 42, # Number of consecutive successes before the probe is considered successful. Defaults to 1. Minimum value is 1. Maps to Kubernetes probe argument 'successThreshold'.
"tcpSocket": { # TcpSocketAction probes the health of a container by opening a TCP socket connection. # TcpSocketAction probes the health of a container by opening a TCP socket connection.
"host": "A String", # Optional: Host name to connect to, defaults to the model serving container's IP.
"port": 42, # Number of the port to access on the container. Number must be in the range 1 to 65535.
},
"timeoutSeconds": 42, # Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater than or equal to period_seconds. Maps to Kubernetes probe argument 'timeoutSeconds'.
},
"ports": [ # Immutable. List of ports to expose from the container. Vertex AI sends any prediction requests that it receives to the first port on this list. Vertex AI also sends [liveness and health checks](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#liveness) to this port. If you do not specify this field, it defaults to the following value: ```json [ { "containerPort": 8080 } ] ``` Vertex AI does not use ports other than the first one listed. This field corresponds to the `ports` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core).
{ # Represents a network port in a container.
"containerPort": 42, # The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive.
},
],
"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`) of the `name` field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).)
"sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes.
"startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe.
"exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command.
"command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
"A String",
],
},
"failureThreshold": 42, # Number of consecutive failures before the probe is considered failed. Defaults to 3. Minimum value is 1. Maps to Kubernetes probe argument 'failureThreshold'.
"grpc": { # GrpcAction checks the health of a container using a gRPC service. # GrpcAction probes the health of a container by sending a gRPC request.
"port": 42, # Port number of the gRPC service. Number must be in the range 1 to 65535.
"service": "A String", # Service is the name of the service to place in the gRPC HealthCheckRequest. See https://github.com/grpc/grpc/blob/master/doc/health-checking.md. If this is not specified, the default behavior is defined by gRPC.
},
"httpGet": { # HttpGetAction describes an action based on HTTP Get requests. # HttpGetAction probes the health of a container by sending an HTTP GET request.
"host": "A String", # Host name to connect to, defaults to the model serving container's IP. You probably want to set "Host" in httpHeaders instead.
"httpHeaders": [ # Custom headers to set in the request. HTTP allows repeated headers.
{ # HttpHeader describes a custom header to be used in HTTP probes
"name": "A String", # The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header.
"value": "A String", # The header field value
},
],
"path": "A String", # Path to access on the HTTP server.
"port": 42, # Number of the port to access on the container. Number must be in the range 1 to 65535.
"scheme": "A String", # Scheme to use for connecting to the host. Defaults to HTTP. Acceptable values are "HTTP" or "HTTPS".
},
"initialDelaySeconds": 42, # Number of seconds to wait before starting the probe. Defaults to 0. Minimum value is 0. Maps to Kubernetes probe argument 'initialDelaySeconds'.
"periodSeconds": 42, # How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'.
"successThreshold": 42, # Number of consecutive successes before the probe is considered successful. Defaults to 1. Minimum value is 1. Maps to Kubernetes probe argument 'successThreshold'.
"tcpSocket": { # TcpSocketAction probes the health of a container by opening a TCP socket connection. # TcpSocketAction probes the health of a container by opening a TCP socket connection.
"host": "A String", # Optional: Host name to connect to, defaults to the model serving container's IP.
"port": 42, # Number of the port to access on the container. Number must be in the range 1 to 65535.
},
"timeoutSeconds": 42, # Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater than or equal to period_seconds. Maps to Kubernetes probe argument 'timeoutSeconds'.
},
},
"huggingFaceAccessToken": "A String", # Optional. The Hugging Face read access token used to access the model artifacts of gated models.
"huggingFaceCacheEnabled": True or False, # Optional. If true, the model is deployed from a cached version instead of directly downloading the model artifacts from Hugging Face. This is suitable for VPC-SC users with limited internet access.
"modelDisplayName": "A String", # Optional. The user-specified display name of the uploaded model. If not set, a default name will be used.
"modelUserId": "A String", # Optional. The ID to use for the uploaded Model, which will become the final component of the model resource name. When not provided, Vertex AI will generate a value for this ID. When Model Registry model is provided, this field will be ignored. This value may be up to 63 characters, and valid characters are `[a-z0-9_-]`. The first character cannot be a number or hyphen.
},
"publisherModelName": "A String", # The Model Garden model to deploy. Format: `publishers/{publisher}/models/{publisher_model}@{version_id}`, or `publishers/hf-{hugging-face-author}/models/{hugging-face-model-name}@001`.
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
"done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
"response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
}</pre>
</div>
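The deploy call above returns a long-running Operation: a client polls it until `done` is true, then reads either `error` or `response`. A minimal sketch of that polling loop, where `get_operation` is a hypothetical stand-in for your client's `operations().get(name=...).execute()` call:

```python
import time

def wait_for_operation(get_operation, name, poll_seconds=10):
    """Poll a long-running Operation dict (schema as documented above).

    get_operation: callable taking the operation name and returning the
    Operation dict; a placeholder for a real client call.
    Returns the `response` object on success; raises RuntimeError on `error`.
    """
    while True:
        op = get_operation(name)
        if op.get("done"):
            if "error" in op:
                err = op["error"]
                raise RuntimeError(
                    f"Operation failed: {err.get('code')} {err.get('message')}"
                )
            return op.get("response", {})
        time.sleep(poll_seconds)
```

Note that per the Operation schema, `error` and `response` are mutually exclusive and only meaningful once `done` is true, which is why the loop checks `done` first.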
<div class="method">
<code class="details" id="evaluateDataset">evaluateDataset(location, body=None, x__xgafv=None)</code>
<pre>Evaluates a dataset based on a set of given metrics.
Args:
location: string, Required. The resource name of the Location to evaluate the dataset. Format: `projects/{project}/locations/{location}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for EvaluationService.EvaluateDataset.
"autoraterConfig": { # The configs for autorater. This is applicable to both EvaluateInstances and EvaluateDataset. # Optional. Autorater config used for evaluation. Currently only publisher Gemini models are supported. Format: `projects/{PROJECT}/locations/{LOCATION}/publishers/google/models/{MODEL}.`
"autoraterModel": "A String", # Optional. The fully qualified name of the publisher model or tuned autorater endpoint to use. Publisher model format: `projects/{project}/locations/{location}/publishers/*/models/*` Tuned model endpoint format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
"flipEnabled": True or False, # Optional. Default is true. Whether to flip the candidate and baseline responses. This is only applicable to the pairwise metric. If enabled, also provide PairwiseMetricSpec.candidate_response_field_name and PairwiseMetricSpec.baseline_response_field_name. When rendering PairwiseMetricSpec.metric_prompt_template, the candidate and baseline fields will be flipped for half of the samples to reduce bias.
"samplingCount": 42, # Optional. Number of samples for each instance in the dataset. If not specified, the default is 4. Minimum value is 1, maximum value is 32.
},
"dataset": { # The dataset used for evaluation. # Required. The dataset used for evaluation.
"bigquerySource": { # The BigQuery location for the input content. # BigQuery source holds the dataset.
"inputUri": "A String", # Required. BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: `bq://projectId.bqDatasetId.bqTableId`.
},
"gcsSource": { # The Google Cloud Storage location for the input content. # Cloud storage source holds the dataset. Currently only one Cloud Storage file path is supported.
"uris": [ # Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/wildcards.
"A String",
],
},
},
"metrics": [ # Required. The metrics used for evaluation.
{ # The metric used for running evaluations.
"aggregationMetrics": [ # Optional. The aggregation metrics to use.
"A String",
],
"bleuSpec": { # Spec for bleu score metric - calculates the precision of n-grams in the prediction as compared to reference - returns a score ranging between 0 and 1. # Spec for bleu metric.
"useEffectiveOrder": True or False, # Optional. Whether to use_effective_order to compute bleu score.
},
"exactMatchSpec": { # Spec for exact match metric - returns 1 if prediction and reference exactly match, otherwise 0. # Spec for exact match metric.
},
"pairwiseMetricSpec": { # Spec for pairwise metric. # Spec for pairwise metric.
"baselineResponseFieldName": "A String", # Optional. The field name of the baseline response.
"candidateResponseFieldName": "A String", # Optional. The field name of the candidate response.
"customOutputFormatConfig": { # Spec for custom output format configuration. # Optional. CustomOutputFormatConfig allows customization of metric output. When this config is set, the default output is replaced with the raw output string. If a custom format is chosen, the `pairwise_choice` and `explanation` fields in the corresponding metric result will be empty.
"returnRawOutput": True or False, # Optional. Whether to return raw output.
},
"metricPromptTemplate": "A String", # Required. Metric prompt template for pairwise metric.
"systemInstruction": "A String", # Optional. System instructions for pairwise metric.
},
"pointwiseMetricSpec": { # Spec for pointwise metric. # Spec for pointwise metric.
"customOutputFormatConfig": { # Spec for custom output format configuration. # Optional. CustomOutputFormatConfig allows customization of metric output. By default, metrics return a score and explanation. When this config is set, the default output is replaced with either: - The raw output string. - A parsed output based on a user-defined schema. If a custom format is chosen, the `score` and `explanation` fields in the corresponding metric result will be empty.
"returnRawOutput": True or False, # Optional. Whether to return raw output.
},
"metricPromptTemplate": "A String", # Required. Metric prompt template for pointwise metric.
"systemInstruction": "A String", # Optional. System instructions for pointwise metric.
},
"rougeSpec": { # Spec for rouge score metric - calculates the recall of n-grams in prediction as compared to reference - returns a score ranging between 0 and 1. # Spec for rouge metric.
"rougeType": "A String", # Optional. Supported rouge types are rougen[1-9], rougeL, and rougeLsum.
"splitSummaries": True or False, # Optional. Whether to split summaries while using rougeLsum.
"useStemmer": True or False, # Optional. Whether to use stemmer to compute rouge score.
},
},
],
"outputConfig": { # Config for evaluation output. # Required. Config for evaluation output.
"gcsDestination": { # The Google Cloud Storage location where the output is to be written to. # Cloud storage destination for evaluation output.
"outputUriPrefix": "A String", # Required. Google Cloud Storage URI of the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
},
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
"done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
"response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
}</pre>
</div>
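To illustrate the request body documented above, here is a small helper that assembles an EvaluateDataset body from a Cloud Storage dataset, a metrics list, and an output prefix. This is a sketch: the `build_evaluate_dataset_body` helper and the bucket names are hypothetical, but the dict it returns follows the schema shown above.

```python
def build_evaluate_dataset_body(dataset_uri, output_prefix, metrics):
    """Assemble an EvaluateDataset request body per the schema above.

    dataset_uri: a gs:// URI to the evaluation file (gcsSource).
    output_prefix: gs:// URI prefix for the evaluation output.
    metrics: list of metric spec dicts, e.g. {"bleuSpec": {...}}.
    """
    return {
        "dataset": {"gcsSource": {"uris": [dataset_uri]}},
        "metrics": metrics,
        "outputConfig": {"gcsDestination": {"outputUriPrefix": output_prefix}},
    }

# Placeholder bucket/object names for illustration only.
body = build_evaluate_dataset_body(
    "gs://my-bucket/eval.jsonl",
    "gs://my-bucket/eval-output/",
    [
        {"bleuSpec": {"useEffectiveOrder": True}},
        {"rougeSpec": {"rougeType": "rougeL", "useStemmer": True}},
    ],
)
```

The resulting dict would be passed as the `body` argument of `evaluateDataset(location, body=...)` documented above.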
<div class="method">
<code class="details" id="evaluateInstances">evaluateInstances(location, body=None, x__xgafv=None)</code>
<pre>Evaluates instances based on a given metric.
Args:
location: string, Required. The resource name of the Location to evaluate the instances. Format: `projects/{project}/locations/{location}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for EvaluationService.EvaluateInstances.
"autoraterConfig": { # The configs for autorater. This is applicable to both EvaluateInstances and EvaluateDataset. # Optional. Autorater config used for evaluation.
"autoraterModel": "A String", # Optional. The fully qualified name of the publisher model or tuned autorater endpoint to use. Publisher model format: `projects/{project}/locations/{location}/publishers/*/models/*` Tuned model endpoint format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
"flipEnabled": True or False, # Optional. Default is true. Whether to flip the candidate and baseline responses. This is only applicable to the pairwise metric. If enabled, also provide PairwiseMetricSpec.candidate_response_field_name and PairwiseMetricSpec.baseline_response_field_name. When rendering PairwiseMetricSpec.metric_prompt_template, the candidate and baseline fields will be flipped for half of the samples to reduce bias.
"samplingCount": 42, # Optional. Number of samples for each instance in the dataset. If not specified, the default is 4. Minimum value is 1, maximum value is 32.
},
"bleuInput": { # Input for bleu metric. # Instances and metric spec for bleu metric.
"instances": [ # Required. Repeated bleu instances.
{ # Spec for bleu instance.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Required. Ground truth used to compare against the prediction.
},
],
"metricSpec": { # Spec for bleu score metric - calculates the precision of n-grams in the prediction as compared to reference - returns a score ranging between 0 and 1. # Required. Spec for bleu score metric.
"useEffectiveOrder": True or False, # Optional. Whether to use_effective_order to compute bleu score.
},
},
"coherenceInput": { # Input for coherence metric. # Input for coherence metric.
"instance": { # Spec for coherence instance. # Required. Coherence instance.
"prediction": "A String", # Required. Output of the evaluated model.
},
"metricSpec": { # Spec for coherence score metric. # Required. Spec for coherence score metric.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"cometInput": { # Input for Comet metric. # Translation metrics. Input for Comet metric.
"instance": { # Spec for Comet instance - The fields used for evaluation are dependent on the comet version. # Required. Comet instance.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Optional. Ground truth used to compare against the prediction.
"source": "A String", # Optional. Source text in original language.
},
"metricSpec": { # Spec for Comet metric. # Required. Spec for comet metric.
"sourceLanguage": "A String", # Optional. Source language in BCP-47 format.
"targetLanguage": "A String", # Optional. Target language in BCP-47 format. Covers both prediction and reference.
"version": "A String", # Required. Which version to use for evaluation.
},
},
"exactMatchInput": { # Input for exact match metric. # Auto metric instances. Instances and metric spec for exact match metric.
"instances": [ # Required. Repeated exact match instances.
{ # Spec for exact match instance.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Required. Ground truth used to compare against the prediction.
},
],
"metricSpec": { # Spec for exact match metric - returns 1 if prediction and reference exactly match, otherwise 0. # Required. Spec for exact match metric.
},
},
"fluencyInput": { # Input for fluency metric. # LLM-based metric instance. General text generation metrics, applicable to other categories. Input for fluency metric.
"instance": { # Spec for fluency instance. # Required. Fluency instance.
"prediction": "A String", # Required. Output of the evaluated model.
},
"metricSpec": { # Spec for fluency score metric. # Required. Spec for fluency score metric.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"fulfillmentInput": { # Input for fulfillment metric. # Input for fulfillment metric.
"instance": { # Spec for fulfillment instance. # Required. Fulfillment instance.
"instruction": "A String", # Required. Inference instruction prompt to compare prediction with.
"prediction": "A String", # Required. Output of the evaluated model.
},
"metricSpec": { # Spec for fulfillment metric. # Required. Spec for fulfillment score metric.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"groundednessInput": { # Input for groundedness metric. # Input for groundedness metric.
"instance": { # Spec for groundedness instance. # Required. Groundedness instance.
"context": "A String", # Required. Background information provided in context used to compare against the prediction.
"prediction": "A String", # Required. Output of the evaluated model.
},
"metricSpec": { # Spec for groundedness metric. # Required. Spec for groundedness metric.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"metricxInput": { # Input for MetricX metric. # Input for Metricx metric.
"instance": { # Spec for MetricX instance - The fields used for evaluation are dependent on the MetricX version. # Required. Metricx instance.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Optional. Ground truth used to compare against the prediction.
"source": "A String", # Optional. Source text in original language.
},
"metricSpec": { # Spec for MetricX metric. # Required. Spec for Metricx metric.
"sourceLanguage": "A String", # Optional. Source language in BCP-47 format.
"targetLanguage": "A String", # Optional. Target language in BCP-47 format. Covers both prediction and reference.
"version": "A String", # Required. Which version to use for evaluation.
},
},
"pairwiseMetricInput": { # Input for pairwise metric. # Input for pairwise metric.
"instance": { # Pairwise metric instance. Usually one instance corresponds to one row in an evaluation dataset. # Required. Pairwise metric instance.
"contentMapInstance": { # Map of placeholder in metric prompt template to contents of model input. # Key-value contents for the multimodal input, including text, image, video, audio, PDF, etc. The key is the placeholder in the metric prompt template, and the value is the multimodal content.
"values": { # Optional. Map of placeholder to contents.
"a_key": { # Repeated Content type.
"contents": [ # Optional. Repeated contents.
{ # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file data entries. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
],
},
},
},
"jsonInstance": "A String", # Instance specified as a json string. String key-value pairs are expected in the json_instance to render PairwiseMetricSpec.instance_prompt_template.
},
"metricSpec": { # Spec for pairwise metric. # Required. Spec for pairwise metric.
"baselineResponseFieldName": "A String", # Optional. The field name of the baseline response.
"candidateResponseFieldName": "A String", # Optional. The field name of the candidate response.
"customOutputFormatConfig": { # Spec for custom output format configuration. # Optional. CustomOutputFormatConfig allows customization of metric output. When this config is set, the default output is replaced with the raw output string. If a custom format is chosen, the `pairwise_choice` and `explanation` fields in the corresponding metric result will be empty.
"returnRawOutput": True or False, # Optional. Whether to return raw output.
},
"metricPromptTemplate": "A String", # Required. Metric prompt template for pairwise metric.
"systemInstruction": "A String", # Optional. System instructions for pairwise metric.
},
},
"pairwiseQuestionAnsweringQualityInput": { # Input for pairwise question answering quality metric. # Input for pairwise question answering quality metric.
"instance": { # Spec for pairwise question answering quality instance. # Required. Pairwise question answering quality instance.
"baselinePrediction": "A String", # Required. Output of the baseline model.
"context": "A String", # Required. Text to answer the question.
"instruction": "A String", # Required. Question Answering prompt for LLM.
"prediction": "A String", # Required. Output of the candidate model.
"reference": "A String", # Optional. Ground truth used to compare against the prediction.
},
"metricSpec": { # Spec for pairwise question answering quality score metric. # Required. Spec for pairwise question answering quality score metric.
"useReference": True or False, # Optional. Whether to use instance.reference to compute question answering quality.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"pairwiseSummarizationQualityInput": { # Input for pairwise summarization quality metric. # Input for pairwise summarization quality metric.
"instance": { # Spec for pairwise summarization quality instance. # Required. Pairwise summarization quality instance.
"baselinePrediction": "A String", # Required. Output of the baseline model.
"context": "A String", # Required. Text to be summarized.
"instruction": "A String", # Required. Summarization prompt for LLM.
"prediction": "A String", # Required. Output of the candidate model.
"reference": "A String", # Optional. Ground truth used to compare against the prediction.
},
"metricSpec": { # Spec for pairwise summarization quality score metric. # Required. Spec for pairwise summarization quality score metric.
"useReference": True or False, # Optional. Whether to use instance.reference to compute pairwise summarization quality.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"pointwiseMetricInput": { # Input for pointwise metric. # Input for pointwise metric.
"instance": { # Pointwise metric instance. Usually one instance corresponds to one row in an evaluation dataset. # Required. Pointwise metric instance.
"contentMapInstance": { # Map of placeholder in metric prompt template to contents of model input. # Key-value contents for the mutlimodality input, including text, image, video, audio, and pdf, etc. The key is placeholder in metric prompt template, and the value is the multimodal content.
"values": { # Optional. Map of placeholder to contents.
"a_key": { # Repeated Content type.
"contents": [ # Optional. Repeated contents.
{ # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
],
},
},
},
"jsonInstance": "A String", # Instance specified as a json string. String key-value pairs are expected in the json_instance to render PointwiseMetricSpec.instance_prompt_template.
},
"metricSpec": { # Spec for pointwise metric. # Required. Spec for pointwise metric.
"customOutputFormatConfig": { # Spec for custom output format configuration. # Optional. CustomOutputFormatConfig allows customization of metric output. By default, metrics return a score and explanation. When this config is set, the default output is replaced with either: - The raw output string. - A parsed output based on a user-defined schema. If a custom format is chosen, the `score` and `explanation` fields in the corresponding metric result will be empty.
"returnRawOutput": True or False, # Optional. Whether to return raw output.
},
"metricPromptTemplate": "A String", # Required. Metric prompt template for pointwise metric.
"systemInstruction": "A String", # Optional. System instructions for pointwise metric.
},
},
"questionAnsweringCorrectnessInput": { # Input for question answering correctness metric. # Input for question answering correctness metric.
"instance": { # Spec for question answering correctness instance. # Required. Question answering correctness instance.
"context": "A String", # Optional. Text provided as context to answer the question.
"instruction": "A String", # Required. The question asked and other instruction in the inference prompt.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Optional. Ground truth used to compare against the prediction.
},
"metricSpec": { # Spec for question answering correctness metric. # Required. Spec for question answering correctness score metric.
"useReference": True or False, # Optional. Whether to use instance.reference to compute question answering correctness.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"questionAnsweringHelpfulnessInput": { # Input for question answering helpfulness metric. # Input for question answering helpfulness metric.
"instance": { # Spec for question answering helpfulness instance. # Required. Question answering helpfulness instance.
"context": "A String", # Optional. Text provided as context to answer the question.
"instruction": "A String", # Required. The question asked and other instruction in the inference prompt.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Optional. Ground truth used to compare against the prediction.
},
"metricSpec": { # Spec for question answering helpfulness metric. # Required. Spec for question answering helpfulness score metric.
"useReference": True or False, # Optional. Whether to use instance.reference to compute question answering helpfulness.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"questionAnsweringQualityInput": { # Input for question answering quality metric. # Input for question answering quality metric.
"instance": { # Spec for question answering quality instance. # Required. Question answering quality instance.
"context": "A String", # Required. Text to answer the question.
"instruction": "A String", # Required. Question Answering prompt for LLM.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Optional. Ground truth used to compare against the prediction.
},
"metricSpec": { # Spec for question answering quality score metric. # Required. Spec for question answering quality score metric.
"useReference": True or False, # Optional. Whether to use instance.reference to compute question answering quality.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"questionAnsweringRelevanceInput": { # Input for question answering relevance metric. # Input for question answering relevance metric.
"instance": { # Spec for question answering relevance instance. # Required. Question answering relevance instance.
"context": "A String", # Optional. Text provided as context to answer the question.
"instruction": "A String", # Required. The question asked and other instruction in the inference prompt.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Optional. Ground truth used to compare against the prediction.
},
"metricSpec": { # Spec for question answering relevance metric. # Required. Spec for question answering relevance score metric.
"useReference": True or False, # Optional. Whether to use instance.reference to compute question answering relevance.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"rougeInput": { # Input for rouge metric. # Instances and metric spec for rouge metric.
"instances": [ # Required. Repeated rouge instances.
{ # Spec for rouge instance.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Required. Ground truth used to compare against the prediction.
},
],
"metricSpec": { # Spec for rouge score metric - calculates the recall of n-grams in prediction as compared to reference - returns a score ranging between 0 and 1. # Required. Spec for rouge score metric.
"rougeType": "A String", # Optional. Supported rouge types are rougen[1-9], rougeL, and rougeLsum.
"splitSummaries": True or False, # Optional. Whether to split summaries while using rougeLsum.
"useStemmer": True or False, # Optional. Whether to use stemmer to compute rouge score.
},
},
"rubricBasedInstructionFollowingInput": { # Instance and metric spec for RubricBasedInstructionFollowing metric. # Rubric Based Instruction Following metric.
"instance": { # Instance for RubricBasedInstructionFollowing metric - one instance corresponds to one row in an evaluation dataset. # Required. Instance for RubricBasedInstructionFollowing metric.
"jsonInstance": "A String", # Required. Instance specified as a json string. String key-value pairs are expected in the json_instance to render RubricBasedInstructionFollowing prompt templates.
},
"metricSpec": { # Spec for RubricBasedInstructionFollowing metric - returns rubrics and verdicts corresponding to rubrics along with overall score. # Required. Spec for RubricBasedInstructionFollowing metric.
},
},
"safetyInput": { # Input for safety metric. # Input for safety metric.
"instance": { # Spec for safety instance. # Required. Safety instance.
"prediction": "A String", # Required. Output of the evaluated model.
},
"metricSpec": { # Spec for safety metric. # Required. Spec for safety metric.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"summarizationHelpfulnessInput": { # Input for summarization helpfulness metric. # Input for summarization helpfulness metric.
"instance": { # Spec for summarization helpfulness instance. # Required. Summarization helpfulness instance.
"context": "A String", # Required. Text to be summarized.
"instruction": "A String", # Optional. Summarization prompt for LLM.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Optional. Ground truth used to compare against the prediction.
},
"metricSpec": { # Spec for summarization helpfulness score metric. # Required. Spec for summarization helpfulness score metric.
"useReference": True or False, # Optional. Whether to use instance.reference to compute summarization helpfulness.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"summarizationQualityInput": { # Input for summarization quality metric. # Input for summarization quality metric.
"instance": { # Spec for summarization quality instance. # Required. Summarization quality instance.
"context": "A String", # Required. Text to be summarized.
"instruction": "A String", # Required. Summarization prompt for LLM.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Optional. Ground truth used to compare against the prediction.
},
"metricSpec": { # Spec for summarization quality score metric. # Required. Spec for summarization quality score metric.
"useReference": True or False, # Optional. Whether to use instance.reference to compute summarization quality.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"summarizationVerbosityInput": { # Input for summarization verbosity metric. # Input for summarization verbosity metric.
"instance": { # Spec for summarization verbosity instance. # Required. Summarization verbosity instance.
"context": "A String", # Required. Text to be summarized.
"instruction": "A String", # Optional. Summarization prompt for LLM.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Optional. Ground truth used to compare against the prediction.
},
"metricSpec": { # Spec for summarization verbosity score metric. # Required. Spec for summarization verbosity score metric.
"useReference": True or False, # Optional. Whether to use instance.reference to compute summarization verbosity.
"version": 42, # Optional. Which version to use for evaluation.
},
},
"toolCallValidInput": { # Input for tool call valid metric. # Tool call metric instances. Input for tool call valid metric.
"instances": [ # Required. Repeated tool call valid instances.
{ # Spec for tool call valid instance.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Required. Ground truth used to compare against the prediction.
},
],
"metricSpec": { # Spec for tool call valid metric. # Required. Spec for tool call valid metric.
},
},
"toolNameMatchInput": { # Input for tool name match metric. # Input for tool name match metric.
"instances": [ # Required. Repeated tool name match instances.
{ # Spec for tool name match instance.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Required. Ground truth used to compare against the prediction.
},
],
"metricSpec": { # Spec for tool name match metric. # Required. Spec for tool name match metric.
},
},
"toolParameterKeyMatchInput": { # Input for tool parameter key match metric. # Input for tool parameter key match metric.
"instances": [ # Required. Repeated tool parameter key match instances.
{ # Spec for tool parameter key match instance.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Required. Ground truth used to compare against the prediction.
},
],
"metricSpec": { # Spec for tool parameter key match metric. # Required. Spec for tool parameter key match metric.
},
},
"toolParameterKvMatchInput": { # Input for tool parameter key value match metric. # Input for tool parameter key value match metric.
"instances": [ # Required. Repeated tool parameter key value match instances.
{ # Spec for tool parameter key value match instance.
"prediction": "A String", # Required. Output of the evaluated model.
"reference": "A String", # Required. Ground truth used to compare against the prediction.
},
],
"metricSpec": { # Spec for tool parameter key value match metric. # Required. Spec for tool parameter key value match metric.
"useStrictStringMatch": True or False, # Optional. Whether to use STRICT string match on parameter values.
},
},
"trajectoryAnyOrderMatchInput": { # Instances and metric spec for TrajectoryAnyOrderMatch metric. # Input for trajectory match any order metric.
"instances": [ # Required. Repeated TrajectoryAnyOrderMatch instance.
{ # Spec for TrajectoryAnyOrderMatch instance.
"predictedTrajectory": { # Spec for trajectory. # Required. Spec for predicted tool call trajectory.
"toolCalls": [ # Required. Tool calls in the trajectory.
{ # Spec for tool call.
"toolInput": "A String", # Optional. Spec for tool input
"toolName": "A String", # Required. Spec for tool name
},
],
},
"referenceTrajectory": { # Spec for trajectory. # Required. Spec for reference tool call trajectory.
"toolCalls": [ # Required. Tool calls in the trajectory.
{ # Spec for tool call.
"toolInput": "A String", # Optional. Spec for tool input
"toolName": "A String", # Required. Spec for tool name
},
],
},
},
],
"metricSpec": { # Spec for TrajectoryAnyOrderMatch metric - returns 1 if all tool calls in the reference trajectory appear in the predicted trajectory in any order, else 0. # Required. Spec for TrajectoryAnyOrderMatch metric.
},
},
"trajectoryExactMatchInput": { # Instances and metric spec for TrajectoryExactMatch metric. # Input for trajectory exact match metric.
"instances": [ # Required. Repeated TrajectoryExactMatch instance.
{ # Spec for TrajectoryExactMatch instance.
"predictedTrajectory": { # Spec for trajectory. # Required. Spec for predicted tool call trajectory.
"toolCalls": [ # Required. Tool calls in the trajectory.
{ # Spec for tool call.
"toolInput": "A String", # Optional. Spec for tool input
"toolName": "A String", # Required. Spec for tool name
},
],
},
"referenceTrajectory": { # Spec for trajectory. # Required. Spec for reference tool call trajectory.
"toolCalls": [ # Required. Tool calls in the trajectory.
{ # Spec for tool call.
"toolInput": "A String", # Optional. Spec for tool input
"toolName": "A String", # Required. Spec for tool name
},
],
},
},
],
"metricSpec": { # Spec for TrajectoryExactMatch metric - returns 1 if tool calls in the reference trajectory exactly match the predicted trajectory, else 0. # Required. Spec for TrajectoryExactMatch metric.
},
},
"trajectoryInOrderMatchInput": { # Instances and metric spec for TrajectoryInOrderMatch metric. # Input for trajectory in order match metric.
"instances": [ # Required. Repeated TrajectoryInOrderMatch instance.
{ # Spec for TrajectoryInOrderMatch instance.
"predictedTrajectory": { # Spec for trajectory. # Required. Spec for predicted tool call trajectory.
"toolCalls": [ # Required. Tool calls in the trajectory.
{ # Spec for tool call.
"toolInput": "A String", # Optional. Spec for tool input
"toolName": "A String", # Required. Spec for tool name
},
],
},
"referenceTrajectory": { # Spec for trajectory. # Required. Spec for reference tool call trajectory.
"toolCalls": [ # Required. Tool calls in the trajectory.
{ # Spec for tool call.
"toolInput": "A String", # Optional. Spec for tool input
"toolName": "A String", # Required. Spec for tool name
},
],
},
},
],
"metricSpec": { # Spec for TrajectoryInOrderMatch metric - returns 1 if tool calls in the reference trajectory appear in the predicted trajectory in the same order, else 0. # Required. Spec for TrajectoryInOrderMatch metric.
},
},
"trajectoryPrecisionInput": { # Instances and metric spec for TrajectoryPrecision metric. # Input for trajectory precision metric.
"instances": [ # Required. Repeated TrajectoryPrecision instance.
{ # Spec for TrajectoryPrecision instance.
"predictedTrajectory": { # Spec for trajectory. # Required. Spec for predicted tool call trajectory.
"toolCalls": [ # Required. Tool calls in the trajectory.
{ # Spec for tool call.
"toolInput": "A String", # Optional. Spec for tool input
"toolName": "A String", # Required. Spec for tool name
},
],
},
"referenceTrajectory": { # Spec for trajectory. # Required. Spec for reference tool call trajectory.
"toolCalls": [ # Required. Tool calls in the trajectory.
{ # Spec for tool call.
"toolInput": "A String", # Optional. Spec for tool input
"toolName": "A String", # Required. Spec for tool name
},
],
},
},
],
"metricSpec": { # Spec for TrajectoryPrecision metric - returns a float score based on average precision of individual tool calls. # Required. Spec for TrajectoryPrecision metric.
},
},
"trajectoryRecallInput": { # Instances and metric spec for TrajectoryRecall metric. # Input for trajectory recall metric.
"instances": [ # Required. Repeated TrajectoryRecall instance.
{ # Spec for TrajectoryRecall instance.
"predictedTrajectory": { # Spec for trajectory. # Required. Spec for predicted tool call trajectory.
"toolCalls": [ # Required. Tool calls in the trajectory.
{ # Spec for tool call.
"toolInput": "A String", # Optional. Spec for tool input
"toolName": "A String", # Required. Spec for tool name
},
],
},
"referenceTrajectory": { # Spec for trajectory. # Required. Spec for reference tool call trajectory.
"toolCalls": [ # Required. Tool calls in the trajectory.
{ # Spec for tool call.
"toolInput": "A String", # Optional. Spec for tool input
"toolName": "A String", # Required. Spec for tool name
},
],
},
},
],
"metricSpec": { # Spec for TrajectoryRecall metric - returns a float score based on average recall of individual tool calls. # Required. Spec for TrajectoryRecall metric.
},
},
"trajectorySingleToolUseInput": { # Instances and metric spec for TrajectorySingleToolUse metric. # Input for trajectory single tool use metric.
"instances": [ # Required. Repeated TrajectorySingleToolUse instance.
{ # Spec for TrajectorySingleToolUse instance.
"predictedTrajectory": { # Spec for trajectory. # Required. Spec for predicted tool call trajectory.
"toolCalls": [ # Required. Tool calls in the trajectory.
{ # Spec for tool call.
"toolInput": "A String", # Optional. Spec for tool input
"toolName": "A String", # Required. Spec for tool name
},
],
},
},
],
"metricSpec": { # Spec for TrajectorySingleToolUse metric - returns 1 if tool is present in the predicted trajectory, else 0. # Required. Spec for TrajectorySingleToolUse metric.
"toolName": "A String", # Required. Spec for tool name to be checked for in the predicted trajectory.
},
},
}
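The request body above only needs the one field matching the metric you want to run. As a minimal sketch, here is a plain-dict request body for the rouge metric built solely from the field names documented above, plus a small helper that reads the matching `rougeResults` field described in the Returns section; the prediction/reference strings are illustrative placeholders, and the helper's name is hypothetical:

```python
# Sketch of an EvaluateInstances request body using the rouge metric.
# Field names follow the schema documented above; text values are placeholders.
rouge_request_body = {
    "rougeInput": {
        "metricSpec": {
            "rougeType": "rougeL",    # supported types: rougen[1-9], rougeL, rougeLsum
            "useStemmer": True,
            "splitSummaries": False,  # only relevant when rougeType is rougeLsum
        },
        "instances": [
            {
                "prediction": "The fox jumped over the lazy dog.",
                "reference": "A quick brown fox jumps over the lazy dog.",
            },
        ],
    },
}

def extract_rouge_scores(response):
    """Pull per-instance scores out of the rougeResults response field.

    rougeResults.rougeMetricValues holds one {"score": float} per instance,
    in the same order as the request's instances list.
    """
    values = response.get("rougeResults", {}).get("rougeMetricValues", [])
    return [v.get("score") for v in values]
```

With the google-api-python-client, this dict would be passed as the `body` argument of the evaluateInstances method and the helper applied to the result of `.execute()`; exact service construction depends on your project and credentials setup.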
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for EvaluationService.EvaluateInstances.
"bleuResults": { # Results for bleu metric. # Results for bleu metric.
"bleuMetricValues": [ # Output only. Bleu metric values.
{ # Bleu metric value for an instance.
"score": 3.14, # Output only. Bleu score.
},
],
},
"coherenceResult": { # Spec for coherence result. # Result for coherence metric.
"confidence": 3.14, # Output only. Confidence for coherence score.
"explanation": "A String", # Output only. Explanation for coherence score.
"score": 3.14, # Output only. Coherence score.
},
"cometResult": { # Spec for Comet result - calculates the comet score for the given instance using the version specified in the spec. # Translation metrics. Result for Comet metric.
"score": 3.14, # Output only. Comet score. Range depends on version.
},
"exactMatchResults": { # Results for exact match metric. # Auto metric evaluation results. Results for exact match metric.
"exactMatchMetricValues": [ # Output only. Exact match metric values.
{ # Exact match metric value for an instance.
"score": 3.14, # Output only. Exact match score.
},
],
},
"fluencyResult": { # Spec for fluency result. # LLM-based metric evaluation result. General text generation metrics, applicable to other categories. Result for fluency metric.
"confidence": 3.14, # Output only. Confidence for fluency score.
"explanation": "A String", # Output only. Explanation for fluency score.
"score": 3.14, # Output only. Fluency score.
},
"fulfillmentResult": { # Spec for fulfillment result. # Result for fulfillment metric.
"confidence": 3.14, # Output only. Confidence for fulfillment score.
"explanation": "A String", # Output only. Explanation for fulfillment score.
"score": 3.14, # Output only. Fulfillment score.
},
"groundednessResult": { # Spec for groundedness result. # Result for groundedness metric.
"confidence": 3.14, # Output only. Confidence for groundedness score.
"explanation": "A String", # Output only. Explanation for groundedness score.
"score": 3.14, # Output only. Groundedness score.
},
"metricxResult": { # Spec for MetricX result - calculates the MetricX score for the given instance using the version specified in the spec. # Result for Metricx metric.
"score": 3.14, # Output only. MetricX score. Range depends on version.
},
"pairwiseMetricResult": { # Spec for pairwise metric result. # Result for pairwise metric.
"customOutput": { # Spec for custom output. # Output only. Spec for custom output.
"rawOutputs": { # Raw output. # Output only. List of raw output strings.
"rawOutput": [ # Output only. Raw output string.
"A String",
],
},
},
"explanation": "A String", # Output only. Explanation for pairwise metric score.
"pairwiseChoice": "A String", # Output only. Pairwise metric choice.
},
"pairwiseQuestionAnsweringQualityResult": { # Spec for pairwise question answering quality result. # Result for pairwise question answering quality metric.
"confidence": 3.14, # Output only. Confidence for question answering quality score.
"explanation": "A String", # Output only. Explanation for question answering quality score.
"pairwiseChoice": "A String", # Output only. Pairwise question answering prediction choice.
},
"pairwiseSummarizationQualityResult": { # Spec for pairwise summarization quality result. # Result for pairwise summarization quality metric.
"confidence": 3.14, # Output only. Confidence for summarization quality score.
"explanation": "A String", # Output only. Explanation for summarization quality score.
"pairwiseChoice": "A String", # Output only. Pairwise summarization prediction choice.
},
"pointwiseMetricResult": { # Spec for pointwise metric result. # Generic metrics. Result for pointwise metric.
"customOutput": { # Spec for custom output. # Output only. Spec for custom output.
"rawOutputs": { # Raw output. # Output only. List of raw output strings.
"rawOutput": [ # Output only. Raw output string.
"A String",
],
},
},
"explanation": "A String", # Output only. Explanation for pointwise metric score.
"score": 3.14, # Output only. Pointwise metric score.
},
"questionAnsweringCorrectnessResult": { # Spec for question answering correctness result. # Result for question answering correctness metric.
"confidence": 3.14, # Output only. Confidence for question answering correctness score.
"explanation": "A String", # Output only. Explanation for question answering correctness score.
"score": 3.14, # Output only. Question Answering Correctness score.
},
"questionAnsweringHelpfulnessResult": { # Spec for question answering helpfulness result. # Result for question answering helpfulness metric.
"confidence": 3.14, # Output only. Confidence for question answering helpfulness score.
"explanation": "A String", # Output only. Explanation for question answering helpfulness score.
"score": 3.14, # Output only. Question Answering Helpfulness score.
},
"questionAnsweringQualityResult": { # Spec for question answering quality result. # Question answering only metrics. Result for question answering quality metric.
"confidence": 3.14, # Output only. Confidence for question answering quality score.
"explanation": "A String", # Output only. Explanation for question answering quality score.
"score": 3.14, # Output only. Question Answering Quality score.
},
"questionAnsweringRelevanceResult": { # Spec for question answering relevance result. # Result for question answering relevance metric.
"confidence": 3.14, # Output only. Confidence for question answering relevance score.
"explanation": "A String", # Output only. Explanation for question answering relevance score.
"score": 3.14, # Output only. Question Answering Relevance score.
},
"rougeResults": { # Results for rouge metric. # Results for rouge metric.
"rougeMetricValues": [ # Output only. Rouge metric values.
{ # Rouge metric value for an instance.
"score": 3.14, # Output only. Rouge score.
},
],
},
"rubricBasedInstructionFollowingResult": { # Result for RubricBasedInstructionFollowing metric. # Result for rubric based instruction following metric.
"rubricCritiqueResults": [ # Output only. List of per rubric critique results.
{ # Rubric critique result.
"rubric": "A String", # Output only. Rubric to be evaluated.
"verdict": True or False, # Output only. Verdict for the rubric - true if the rubric is met, false otherwise.
},
],
"score": 3.14, # Output only. Overall score for the instruction following.
},
"safetyResult": { # Spec for safety result. # Result for safety metric.
"confidence": 3.14, # Output only. Confidence for safety score.
"explanation": "A String", # Output only. Explanation for safety score.
"score": 3.14, # Output only. Safety score.
},
"summarizationHelpfulnessResult": { # Spec for summarization helpfulness result. # Result for summarization helpfulness metric.
"confidence": 3.14, # Output only. Confidence for summarization helpfulness score.
"explanation": "A String", # Output only. Explanation for summarization helpfulness score.
"score": 3.14, # Output only. Summarization Helpfulness score.
},
"summarizationQualityResult": { # Spec for summarization quality result. # Summarization only metrics. Result for summarization quality metric.
"confidence": 3.14, # Output only. Confidence for summarization quality score.
"explanation": "A String", # Output only. Explanation for summarization quality score.
"score": 3.14, # Output only. Summarization Quality score.
},
"summarizationVerbosityResult": { # Spec for summarization verbosity result. # Result for summarization verbosity metric.
"confidence": 3.14, # Output only. Confidence for summarization verbosity score.
"explanation": "A String", # Output only. Explanation for summarization verbosity score.
"score": 3.14, # Output only. Summarization Verbosity score.
},
"toolCallValidResults": { # Results for tool call valid metric. # Tool call metrics. Results for tool call valid metric.
"toolCallValidMetricValues": [ # Output only. Tool call valid metric values.
{ # Tool call valid metric value for an instance.
"score": 3.14, # Output only. Tool call valid score.
},
],
},
"toolNameMatchResults": { # Results for tool name match metric. # Results for tool name match metric.
"toolNameMatchMetricValues": [ # Output only. Tool name match metric values.
{ # Tool name match metric value for an instance.
"score": 3.14, # Output only. Tool name match score.
},
],
},
"toolParameterKeyMatchResults": { # Results for tool parameter key match metric. # Results for tool parameter key match metric.
"toolParameterKeyMatchMetricValues": [ # Output only. Tool parameter key match metric values.
{ # Tool parameter key match metric value for an instance.
"score": 3.14, # Output only. Tool parameter key match score.
},
],
},
"toolParameterKvMatchResults": { # Results for tool parameter key value match metric. # Results for tool parameter key value match metric.
"toolParameterKvMatchMetricValues": [ # Output only. Tool parameter key value match metric values.
{ # Tool parameter key value match metric value for an instance.
"score": 3.14, # Output only. Tool parameter key value match score.
},
],
},
"trajectoryAnyOrderMatchResults": { # Results for TrajectoryAnyOrderMatch metric. # Result for trajectory any order match metric.
"trajectoryAnyOrderMatchMetricValues": [ # Output only. TrajectoryAnyOrderMatch metric values.
{ # TrajectoryAnyOrderMatch metric value for an instance.
"score": 3.14, # Output only. TrajectoryAnyOrderMatch score.
},
],
},
"trajectoryExactMatchResults": { # Results for TrajectoryExactMatch metric. # Result for trajectory exact match metric.
"trajectoryExactMatchMetricValues": [ # Output only. TrajectoryExactMatch metric values.
{ # TrajectoryExactMatch metric value for an instance.
"score": 3.14, # Output only. TrajectoryExactMatch score.
},
],
},
"trajectoryInOrderMatchResults": { # Results for TrajectoryInOrderMatch metric. # Result for trajectory in order match metric.
"trajectoryInOrderMatchMetricValues": [ # Output only. TrajectoryInOrderMatch metric values.
{ # TrajectoryInOrderMatch metric value for an instance.
"score": 3.14, # Output only. TrajectoryInOrderMatch score.
},
],
},
"trajectoryPrecisionResults": { # Results for TrajectoryPrecision metric. # Result for trajectory precision metric.
"trajectoryPrecisionMetricValues": [ # Output only. TrajectoryPrecision metric values.
{ # TrajectoryPrecision metric value for an instance.
"score": 3.14, # Output only. TrajectoryPrecision score.
},
],
},
"trajectoryRecallResults": { # Results for TrajectoryRecall metric. # Results for trajectory recall metric.
"trajectoryRecallMetricValues": [ # Output only. TrajectoryRecall metric values.
{ # TrajectoryRecall metric value for an instance.
"score": 3.14, # Output only. TrajectoryRecall score.
},
],
},
"trajectorySingleToolUseResults": { # Results for TrajectorySingleToolUse metric. # Results for trajectory single tool use metric.
"trajectorySingleToolUseMetricValues": [ # Output only. TrajectorySingleToolUse metric values.
{ # TrajectorySingleToolUse metric value for an instance.
"score": 3.14, # Output only. TrajectorySingleToolUse score.
},
],
},
}</pre>
</div>
<div class="method">
<code class="details" id="get">get(name, x__xgafv=None)</code>
<pre>Gets information about a location.
Args:
name: string, Resource name for the location. (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # A resource that represents a Google Cloud location.
"displayName": "A String", # The friendly name for this location, typically a nearby city name. For example, "Tokyo".
"labels": { # Cross-service attributes for the location. For example {"cloud.googleapis.com/region": "us-east1"}
"a_key": "A String",
},
"locationId": "A String", # The canonical id for this location. For example: `"us-east1"`.
"metadata": { # Service-specific metadata. For example the available capacity at the given location.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # Resource name for the location, which may vary between implementations. For example: `"projects/example-project/locations/us-east1"`
}</pre>
</div>
<div class="method">
<code class="details" id="getRagEngineConfig">getRagEngineConfig(name, x__xgafv=None)</code>
<pre>Gets a RagEngineConfig.
Args:
name: string, Required. The name of the RagEngineConfig resource. Format: `projects/{project}/locations/{location}/ragEngineConfig` (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Config for RagEngine.
"name": "A String", # Identifier. The name of the RagEngineConfig. Format: `projects/{project}/locations/{location}/ragEngineConfig`
"ragManagedDbConfig": { # Configuration message for RagManagedDb used by RagEngine. # The config of the RagManagedDb used by RagEngine.
"basic": { # Basic tier is a cost-effective and low compute tier suitable for the following cases: * Experimenting with RagManagedDb. * Small data size. * Latency insensitive workload. * Only using RAG Engine with external vector DBs. NOTE: This is the default tier if not explicitly chosen. # Sets the RagManagedDb to the Basic tier.
},
"scaled": { # Scaled tier offers production grade performance along with autoscaling functionality. It is suitable for customers with large amounts of data or performance sensitive workloads. # Sets the RagManagedDb to the Scaled tier.
},
"unprovisioned": { # Disables the RAG Engine service and deletes all your data held within this service. This will halt the billing of the service. NOTE: Once deleted the data cannot be recovered. To start using RAG Engine again, you will need to update the tier by calling the UpdateRagEngineConfig API. # Sets the RagManagedDb to the Unprovisioned tier.
},
},
}</pre>
</div>
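In the RagEngineConfig returned above, the active tier is expressed by whichever empty tier message is present under `ragManagedDbConfig`. A minimal sketch of decoding that response dict (the helper name is ours, not part of the API):

```python
def current_rag_tier(config):
    """Return the tier name set in a RagEngineConfig response dict.

    At most one of the (empty) tier messages is expected under
    ragManagedDbConfig; Basic is the documented default when no tier
    has been explicitly chosen.
    """
    db_config = config.get("ragManagedDbConfig", {})
    for tier in ("basic", "scaled", "unprovisioned"):
        if tier in db_config:
            return tier
    return "basic"  # default tier per the field documentation
```

The `config` argument would be the dict returned by `getRagEngineConfig(...).execute()`.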
<div class="method">
<code class="details" id="list">list(name, extraLocationTypes=None, filter=None, pageSize=None, pageToken=None, x__xgafv=None)</code>
<pre>Lists information about the supported locations for this service.
Args:
name: string, The resource that owns the locations collection, if applicable. (required)
extraLocationTypes: string, Optional. A list of extra location types that should be used as conditions for controlling the visibility of the locations. (repeated)
filter: string, A filter to narrow down results to a preferred subset. The filtering language accepts strings like `"displayName=tokyo"`, and is documented in more detail in [AIP-160](https://google.aip.dev/160).
pageSize: integer, The maximum number of results to return. If not set, the service selects a default.
pageToken: string, A page token received from the `next_page_token` field in the response. Send that page token to receive the subsequent page.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # The response message for Locations.ListLocations.
"locations": [ # A list of locations that matches the specified filter in the request.
{ # A resource that represents a Google Cloud location.
"displayName": "A String", # The friendly name for this location, typically a nearby city name. For example, "Tokyo".
"labels": { # Cross-service attributes for the location. For example {"cloud.googleapis.com/region": "us-east1"}
"a_key": "A String",
},
"locationId": "A String", # The canonical id for this location. For example: `"us-east1"`.
"metadata": { # Service-specific metadata. For example the available capacity at the given location.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # Resource name for the location, which may vary between implementations. For example: `"projects/example-project/locations/us-east1"`
},
],
"nextPageToken": "A String", # The standard List next-page token.
}</pre>
</div>
<div class="method">
<code class="details" id="list_next">list_next()</code>
<pre>Retrieves the next page of results.
Args:
previous_request: The request for the previous page. (required)
previous_response: The response from the request for the previous page. (required)
Returns:
A request object that you can call 'execute()' on to request the next
page. Returns None if there are no more items in the collection.
</pre>
</div>
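The `list` and `list_next` methods above follow the standard googleapiclient pagination pattern: execute a request, consume the page, then pass the request/response pair to `list_next` until it returns `None`. A sketch of that loop, written against the request objects so it does not hard-code a service (the function name is ours):

```python
def iterate_locations(first_request, list_next):
    """Yield every location across all pages.

    `first_request` is the object returned by
    service.projects().locations().list(name=...); `list_next` is the
    service.projects().locations().list_next method.
    """
    request = first_request
    while request is not None:
        response = request.execute()
        for location in response.get("locations", []):
            yield location
        # Returns None when there are no more pages.
        request = list_next(request, response)
```

In practice you would call it as `iterate_locations(locations.list(name=parent), locations.list_next)`.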
<div class="method">
<code class="details" id="retrieveContexts">retrieveContexts(parent, body=None, x__xgafv=None)</code>
<pre>Retrieves relevant contexts for a query.
Args:
parent: string, Required. The resource name of the Location from which to retrieve RagContexts. The users must have permission to make a call in the project. Format: `projects/{project}/locations/{location}`. (required)
body: object, The request body.
The object takes the form of:
{ # Request message for VertexRagService.RetrieveContexts.
"query": { # A query to retrieve relevant contexts. # Required. Single RAG retrieve query.
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"text": "A String", # Optional. The query in text format to get relevant contexts.
},
"vertexRagStore": { # The data source for Vertex RagStore. # The data source for Vertex RagStore.
    "ragResources": [ # Optional. The representation of the RAG source. It can be used to specify a corpus only, or RAG files. Currently, only one corpus or multiple files from a single corpus are supported. Multiple-corpora support may be opened up in the future.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
        "ragFileIds": [ # Optional. List of RagFile IDs. The files must belong to the RagCorpus set in the rag_corpus field.
"A String",
],
},
],
"vectorDistanceThreshold": 3.14, # Optional. Only return contexts with vector distance smaller than the threshold.
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for VertexRagService.RetrieveContexts.
"contexts": { # Relevant contexts for one query. # The contexts of the query.
"contexts": [ # All its contexts.
{ # A context of the query.
"chunk": { # A RagChunk includes the content of a chunk of a RagFile, and associated metadata. # Context of the retrieved chunk.
"pageSpan": { # Represents where the chunk starts and ends in the document. # If populated, represents where the chunk starts and ends in the document.
"firstPage": 42, # Page where chunk starts in the document. Inclusive. 1-indexed.
"lastPage": 42, # Page where chunk ends in the document. Inclusive. 1-indexed.
},
"text": "A String", # The content of the chunk.
},
"score": 3.14, # According to the underlying Vector DB and the selected metric type, the score can be either the distance or the similarity between the query and the context and its range depends on the metric type. For example, if the metric type is COSINE_DISTANCE, it represents the distance between the query and the context. The larger the distance, the less relevant the context is to the query. The range is [0, 2], while 0 means the most relevant and 2 means the least relevant.
"sourceDisplayName": "A String", # The file display name.
        "sourceUri": "A String", # If the file is imported from Cloud Storage or Google Drive, source_uri will be the original file URI in Cloud Storage or Google Drive; if the file is uploaded, source_uri will be the file display name.
"text": "A String", # The text chunk.
},
],
},
}</pre>
</div>
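The `retrieveContexts` request body above nests the query, retrieval config, and data source. A small helper for assembling it for the common single-corpus case (the helper name and defaults are ours; the field names come from the schema above):

```python
def build_retrieve_contexts_body(query_text, rag_corpus, top_k=10,
                                 vector_distance_threshold=None):
    """Assemble a RetrieveContexts request body for one text query
    against a single RagCorpus.

    `rag_corpus` is a full resource name, e.g.
    projects/{project}/locations/{location}/ragCorpora/{rag_corpus}.
    """
    body = {
        "query": {
            "text": query_text,
            "ragRetrievalConfig": {"topK": top_k},
        },
        "vertexRagStore": {
            "ragResources": [{"ragCorpus": rag_corpus}],
        },
    }
    if vector_distance_threshold is not None:
        # Only return contexts closer than this vector distance.
        body["vertexRagStore"]["vectorDistanceThreshold"] = (
            vector_distance_threshold)
    return body
```

The resulting dict is passed as `body` to `service.projects().locations().retrieveContexts(parent=parent, body=body).execute()`.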
<div class="method">
<code class="details" id="updateRagEngineConfig">updateRagEngineConfig(name, body=None, x__xgafv=None)</code>
<pre>Updates a RagEngineConfig.
Args:
name: string, Identifier. The name of the RagEngineConfig. Format: `projects/{project}/locations/{location}/ragEngineConfig` (required)
body: object, The request body.
The object takes the form of:
{ # Config for RagEngine.
"name": "A String", # Identifier. The name of the RagEngineConfig. Format: `projects/{project}/locations/{location}/ragEngineConfig`
"ragManagedDbConfig": { # Configuration message for RagManagedDb used by RagEngine. # The config of the RagManagedDb used by RagEngine.
"basic": { # Basic tier is a cost-effective and low compute tier suitable for the following cases: * Experimenting with RagManagedDb. * Small data size. * Latency insensitive workload. * Only using RAG Engine with external vector DBs. NOTE: This is the default tier if not explicitly chosen. # Sets the RagManagedDb to the Basic tier.
},
"scaled": { # Scaled tier offers production grade performance along with autoscaling functionality. It is suitable for customers with large amounts of data or performance sensitive workloads. # Sets the RagManagedDb to the Scaled tier.
},
"unprovisioned": { # Disables the RAG Engine service and deletes all your data held within this service. This will halt the billing of the service. NOTE: Once deleted the data cannot be recovered. To start using RAG Engine again, you will need to update the tier by calling the UpdateRagEngineConfig API. # Sets the RagManagedDb to the Unprovisioned tier.
},
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
"done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
"response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
}</pre>
</div>
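In the `updateRagEngineConfig` body above, the desired tier is selected by setting exactly one of the empty tier messages under `ragManagedDbConfig`. A sketch of building that body (the helper name is ours):

```python
def make_rag_engine_config_body(name, tier):
    """Build an update body that selects one RagManagedDb tier.

    `tier` must be "basic", "scaled", or "unprovisioned"; the chosen
    tier is expressed by its (empty) message under ragManagedDbConfig.
    """
    if tier not in ("basic", "scaled", "unprovisioned"):
        raise ValueError("unknown tier: %s" % tier)
    return {"name": name, "ragManagedDbConfig": {tier: {}}}
```

Since the method returns a long-running operation, the caller would poll the returned operation until its `done` field is `True` before relying on the new tier being active.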
</body></html>