<html><body>
<style>
body, h1, h2, h3, div, span, p, pre, a {
margin: 0;
padding: 0;
border: 0;
font-weight: inherit;
font-style: inherit;
font-size: 100%;
font-family: inherit;
vertical-align: baseline;
}
body {
font-size: 13px;
padding: 1em;
}
h1 {
font-size: 26px;
margin-bottom: 1em;
}
h2 {
font-size: 24px;
margin-bottom: 1em;
}
h3 {
font-size: 20px;
margin-bottom: 1em;
margin-top: 1em;
}
pre, code {
line-height: 1.5;
font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}
pre {
margin-top: 0.5em;
}
h1, h2, h3, p {
font-family: Arial, sans-serif;
}
h1, h2, h3 {
border-bottom: solid #CCC 1px;
}
.toc_element {
margin-top: 0.5em;
}
.firstline {
margin-left: 2em;
}
.method {
margin-top: 1em;
border: solid 1px #CCC;
padding: 1em;
background: #EEE;
}
.details {
font-weight: bold;
font-size: 14px;
}
</style>
<h1><a href="aiplatform_v1.html">Vertex AI API</a> . <a href="aiplatform_v1.projects.html">projects</a> . <a href="aiplatform_v1.projects.locations.html">locations</a> . <a href="aiplatform_v1.projects.locations.endpoints.html">endpoints</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.endpoints.chat.html">chat()</a></code>
</p>
<p class="firstline">Returns the chat Resource.</p>
<p class="toc_element">
<code><a href="aiplatform_v1.projects.locations.endpoints.operations.html">operations()</a></code>
</p>
<p class="firstline">Returns the operations Resource.</p>
<p class="toc_element">
<code><a href="#close">close()</a></code></p>
<p class="firstline">Close httplib2 connections.</p>
<p class="toc_element">
<code><a href="#computeTokens">computeTokens(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Return a list of tokens based on the input text.</p>
<p class="toc_element">
<code><a href="#countTokens">countTokens(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Perform a token counting.</p>
<p class="toc_element">
<code><a href="#create">create(parent, body=None, endpointId=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates an Endpoint.</p>
<p class="toc_element">
<code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Deletes an Endpoint.</p>
<p class="toc_element">
<code><a href="#deployModel">deployModel(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Deploys a Model into this Endpoint, creating a DeployedModel within it.</p>
<p class="toc_element">
<code><a href="#directPredict">directPredict(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Perform an unary online prediction request to a gRPC model server for Vertex first-party products and frameworks.</p>
<p class="toc_element">
<code><a href="#directRawPredict">directRawPredict(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Perform an unary online prediction request to a gRPC model server for custom containers.</p>
<p class="toc_element">
<code><a href="#explain">explain(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Perform an online explanation. If deployed_model_id is specified, the corresponding DeployModel must have explanation_spec populated. If deployed_model_id is not specified, all DeployedModels must have explanation_spec populated.</p>
<p class="toc_element">
<code><a href="#fetchPredictOperation">fetchPredictOperation(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Fetch an asynchronous online prediction operation.</p>
<p class="toc_element">
<code><a href="#generateContent">generateContent(model, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Generate content with multimodal inputs.</p>
<p class="toc_element">
<code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets an Endpoint.</p>
<p class="toc_element">
<code><a href="#list">list(parent, filter=None, gdcZone=None, orderBy=None, pageSize=None, pageToken=None, readMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Lists Endpoints in a Location.</p>
<p class="toc_element">
<code><a href="#list_next">list_next()</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
<code><a href="#mutateDeployedModel">mutateDeployedModel(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates an existing deployed model. Updatable fields include `min_replica_count`, `max_replica_count`, `required_replica_count`, `autoscaling_metric_specs`, `disable_container_logging` (v1 only), and `enable_container_logging` (v1beta1 only).</p>
<p class="toc_element">
<code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates an Endpoint.</p>
<p class="toc_element">
<code><a href="#predict">predict(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Perform an online prediction.</p>
<p class="toc_element">
<code><a href="#predictLongRunning">predictLongRunning(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline"></p>
<p class="toc_element">
<code><a href="#rawPredict">rawPredict(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Perform an online prediction with an arbitrary HTTP payload. The response includes the following HTTP headers: * `X-Vertex-AI-Endpoint-Id`: ID of the Endpoint that served this prediction. * `X-Vertex-AI-Deployed-Model-Id`: ID of the Endpoint's DeployedModel that served this prediction.</p>
<p class="toc_element">
<code><a href="#serverStreamingPredict">serverStreamingPredict(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Perform a server-side streaming online prediction request for Vertex LLM streaming.</p>
<p class="toc_element">
<code><a href="#streamGenerateContent">streamGenerateContent(model, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Generate content with multimodal inputs with streaming support.</p>
<p class="toc_element">
<code><a href="#streamRawPredict">streamRawPredict(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Perform a streaming online prediction with an arbitrary HTTP payload.</p>
<p class="toc_element">
<code><a href="#undeployModel">undeployModel(endpoint, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Undeploys a Model from an Endpoint, removing a DeployedModel from it, and freeing all resources it's using.</p>
<p class="toc_element">
<code><a href="#update">update(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates an Endpoint with a long running operation.</p>
<h3>Method Details</h3>
<div class="method">
<code class="details" id="close">close()</code>
<pre>Close httplib2 connections.</pre>
</div>
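<p>For example, once the service built above is no longer needed:</p>
<pre>
# Release the underlying httplib2 connections held by the service object.
service.close()
</pre>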
<div class="method">
<code class="details" id="computeTokens">computeTokens(endpoint, body=None, x__xgafv=None)</code>
<pre>Return a list of tokens based on the input text.
Args:
endpoint: string, Required. The name of the Endpoint requested to get lists of tokens and token ids. (required)
body: object, The request body.
The object takes the form of:
{ # Request message for ComputeTokens RPC call.
"contents": [ # Optional. Input content.
{ # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
],
"instances": [ # Optional. The instances that are the input to token computing API call. Schema is identical to the prediction schema of the text model, even for the non-text models, like chat models, or Codey models.
"",
],
"model": "A String", # Optional. The name of the publisher model requested to serve the prediction. Format: projects/{project}/locations/{location}/publishers/*/models/*
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for ComputeTokens RPC call.
"tokensInfo": [ # Lists of tokens info from the input. A ComputeTokensRequest could have multiple instances with a prompt in each instance. We also need to return lists of tokens info for the request with multiple instances.
{ # Tokens info with a list of tokens and the corresponding list of token ids.
"role": "A String", # Optional. Optional fields for the role from the corresponding Content.
"tokenIds": [ # A list of token ids from the input.
"A String",
],
"tokens": [ # A list of tokens from the input.
"A String",
],
},
],
}</pre>
</div>
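<p>A hedged usage sketch for computeTokens, following the request and response schemas above. The endpoint resource name is a placeholder and error handling is omitted; it assumes the `endpoints` resource built earlier.</p>
<pre>
# Placeholder endpoint name; swap in a real project, region, and endpoint ID.
request_body = {
    "contents": [
        {"role": "user", "parts": [{"text": "How many tokens is this sentence?"}]}
    ],
}
response = endpoints.computeTokens(
    endpoint="projects/my-project/locations/us-central1/endpoints/1234567890",
    body=request_body,
).execute()
for info in response.get("tokensInfo", []):
    print(info.get("role"), info.get("tokens"), info.get("tokenIds"))
</pre>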
<div class="method">
<code class="details" id="countTokens">countTokens(endpoint, body=None, x__xgafv=None)</code>
<pre>Perform token counting.
Args:
endpoint: string, Required. The name of the Endpoint requested to perform token counting. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for PredictionService.CountTokens.
"contents": [ # Optional. Input content.
{ # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
],
"generationConfig": { # Generation config. # Optional. Generation config that the model will use to generate the response.
"audioTimestamp": True or False, # Optional. If enabled, audio timestamp will be included in the request to the model.
"candidateCount": 42, # Optional. Number of candidates to generate.
"enableAffectiveDialog": True or False, # Optional. If enabled, the model will detect emotions and adapt its responses accordingly.
"frequencyPenalty": 3.14, # Optional. Frequency penalties.
"logprobs": 42, # Optional. Logit probabilities.
"maxOutputTokens": 42, # Optional. The maximum number of output tokens to generate per message.
"mediaResolution": "A String", # Optional. If specified, the media resolution specified will be used.
"presencePenalty": 3.14, # Optional. Positive penalties.
"responseJsonSchema": "", # Optional. Output schema of the generated response. This is an alternative to `response_schema` that accepts [JSON Schema](https://json-schema.org/). If set, `response_schema` must be omitted, but `response_mime_type` is required. While the full JSON Schema may be sent, not all features are supported. Specifically, only the following properties are supported: - `$id` - `$defs` - `$ref` - `$anchor` - `type` - `format` - `title` - `description` - `enum` (for strings and numbers) - `items` - `prefixItems` - `minItems` - `maxItems` - `minimum` - `maximum` - `anyOf` - `oneOf` (interpreted the same as `anyOf`) - `properties` - `additionalProperties` - `required` The non-standard `propertyOrdering` property may also be set. Cyclic references are unrolled to a limited degree and, as such, may only be used within non-required properties. (Nullable properties are not sufficient.) If `$ref` is set on a sub-schema, no other properties, except for than those starting as a `$`, may be set.
"responseLogprobs": True or False, # Optional. If true, export the logprobs results in response.
"responseMimeType": "A String", # Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature.
"responseModalities": [ # Optional. The modalities of the response.
"A String",
],
"responseSchema": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema-object). More fields may be added in the future as needed. # Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response.
"additionalProperties": "", # Optional. Can either be a boolean or an object; controls the presence of additional properties.
"anyOf": [ # Optional. The value should be validated against any (one or more) of the subschemas in the list.
# Object with schema name: GoogleCloudAiplatformV1Schema
],
"default": "", # Optional. Default value of the data.
"defs": { # Optional. A map of definitions for use by `ref` Only allowed at the root of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"description": "A String", # Optional. The description of the data.
"enum": [ # Optional. Possible values of the element of primitive type with enum format. Examples: 1. We can define direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} 2. We can define apartment number as : {type:INTEGER, format:enum, enum:["101", "201", "301"]}
"A String",
],
"example": "", # Optional. Example of the object. Will only populated when the object is the root.
"format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
"items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
"maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
"maxLength": "A String", # Optional. Maximum length of the Type.STRING
"maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
"maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
"minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
"minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
"minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
"minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
"nullable": True or False, # Optional. Indicates if the value may be null.
"pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
"properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"propertyOrdering": [ # Optional. The order of the properties. Not a standard field in open api spec. Only used to support the order of the properties.
"A String",
],
"ref": "A String", # Optional. Allows indirect references between schema nodes. The value should be a valid reference to a child of the root `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. Required properties of Type.OBJECT.
"A String",
],
"title": "A String", # Optional. The title of the Schema.
"type": "A String", # Optional. The type of the data.
},
"routingConfig": { # The configuration for routing the request to a specific model. # Optional. Routing configuration.
"autoMode": { # When automated routing is specified, the routing will be determined by the pretrained routing model and customer provided model routing preference. # Automated routing.
"modelRoutingPreference": "A String", # The model routing preference.
},
"manualMode": { # When manual routing is set, the specified model will be used directly. # Manual routing.
"modelName": "A String", # The model name to use. Only the public LLM models are accepted. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
},
"seed": 42, # Optional. Seed.
"speechConfig": { # The speech generation config. # Optional. The speech generation config.
"languageCode": "A String", # Optional. Language code (ISO 639. e.g. en-US) for the speech synthesization.
"voiceConfig": { # The configuration for the voice to use. # The configuration for the speaker to use.
"prebuiltVoiceConfig": { # The configuration for the prebuilt speaker to use. # The configuration for the prebuilt voice to use.
"voiceName": "A String", # The name of the preset voice to use.
},
},
},
"stopSequences": [ # Optional. Stop sequences.
"A String",
],
"temperature": 3.14, # Optional. Controls the randomness of predictions.
"thinkingConfig": { # Config for thinking features. # Optional. Config for thinking features. An error will be returned if this field is set for models that don't support thinking.
"includeThoughts": True or False, # Optional. Indicates whether to include thoughts in the response. If true, thoughts are returned only when available.
"thinkingBudget": 42, # Optional. Indicates the thinking budget in tokens.
},
"topK": 3.14, # Optional. If specified, top-k sampling will be used.
"topP": 3.14, # Optional. If specified, nucleus sampling will be used.
},
"instances": [ # Optional. The instances that are the input to token counting call. Schema is identical to the prediction schema of the underlying model.
"",
],
"model": "A String", # Optional. The name of the publisher model requested to serve the prediction. Format: `projects/{project}/locations/{location}/publishers/*/models/*`
"systemInstruction": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Optional. The user provided system instructions for the model. Note: only text should be used in parts and content in each part will be in a separate paragraph.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
"tools": [ # Optional. A list of `Tools` the model may use to generate the next response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64.
"parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema-object). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. Can either be a boolean or an object; controls the presence of additional properties.
"anyOf": [ # Optional. The value should be validated against any (one or more) of the subschemas in the list.
# Object with schema name: GoogleCloudAiplatformV1Schema
],
"default": "", # Optional. Default value of the data.
"defs": { # Optional. A map of definitions for use by `ref` Only allowed at the root of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"description": "A String", # Optional. The description of the data.
"enum": [ # Optional. Possible values of the element of primitive type with enum format. Examples: 1. We can define direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} 2. We can define apartment number as : {type:INTEGER, format:enum, enum:["101", "201", "301"]}
"A String",
],
"example": "", # Optional. Example of the object. Will only populated when the object is the root.
"format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
"items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
"maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
"maxLength": "A String", # Optional. Maximum length of the Type.STRING
"maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
"maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
"minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
"minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
"minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
"minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
"nullable": True or False, # Optional. Indicates if the value may be null.
"pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
"properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"propertyOrdering": [ # Optional. The order of the properties. Not a standard field in open api spec. Only used to support the order of the properties.
"A String",
],
"ref": "A String", # Optional. Allows indirect references between schema nodes. The value should be a valid reference to a child of the root `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. Required properties of Type.OBJECT.
"A String",
],
"title": "A String", # Optional. The title of the Schema.
"type": "A String", # Optional. The type of the data.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema-object). More fields may be added in the future as needed. # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. Can either be a boolean or an object; controls the presence of additional properties.
"anyOf": [ # Optional. The value should be validated against any (one or more) of the subschemas in the list.
# Object with schema name: GoogleCloudAiplatformV1Schema
],
"default": "", # Optional. Default value of the data.
"defs": { # Optional. A map of definitions for use by `ref` Only allowed at the root of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"description": "A String", # Optional. The description of the data.
"enum": [ # Optional. Possible values of the element of primitive type with enum format. Examples: 1. We can define direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} 2. We can define apartment number as : {type:INTEGER, format:enum, enum:["101", "201", "301"]}
"A String",
],
"example": "", # Optional. Example of the object. Will only populated when the object is the root.
"format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
"items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
"maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
"maxLength": "A String", # Optional. Maximum length of the Type.STRING
"maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
"maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
"minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
"minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
"minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
"minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
"nullable": True or False, # Optional. Indicates if the value may be null.
"pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
"properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"propertyOrdering": [ # Optional. The order of the properties. Not a standard field in open api spec. Only used to support the order of the properties.
"A String",
],
"ref": "A String", # Optional. Allows indirect references between schema nodes. The value should be a valid reference to a child of the root `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. Required properties of Type.OBJECT.
"A String",
],
"title": "A String", # Optional. The title of the Schema.
"type": "A String", # Optional. The type of the data.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
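# For illustration only (not part of the schema): a minimal sketch of a
# functionDeclarations entry, assuming a hypothetical get_weather function.
#
#   {
#     "name": "get_weather",
#     "description": "Returns the current weather for a city.",
#     "parameters": {
#       "type": "OBJECT",
#       "properties": {"city": {"type": "STRING"}},
#       "required": ["city"],
#     },
#   }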
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
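# For illustration only (not part of the schema): a minimal sketch of a
# retrieval tool backed by Vertex AI Search; the project and data store
# names below are placeholders.
#
#   {
#     "retrieval": {
#       "vertexAiSearch": {
#         "datastore": "projects/my-project/locations/global/collections/default_collection/dataStores/my-datastore",
#       },
#     },
#   }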
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for PredictionService.CountTokens.
"promptTokensDetails": [ # Output only. List of modalities that were processed in the request input.
{ # Represents token counting info for a single modality.
"modality": "A String", # The modality associated with this token count.
"tokenCount": 42, # Number of tokens.
},
],
"totalBillableCharacters": 42, # The total number of billable characters counted across all instances from the request.
"totalTokens": 42, # The total number of tokens counted across all instances from the request.
}</pre>
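<p>For illustration, a minimal sketch of calling this method with the google-api-python-client library; the project, location, and endpoint names below are placeholders, not values from this reference:</p>
<pre>
from googleapiclient import discovery

# Build the Vertex AI (aiplatform) v1 client against a regional endpoint.
service = discovery.build(
    "aiplatform",
    "v1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

# Hypothetical resource name; substitute your own project, location, and endpoint.
endpoint = "projects/my-project/locations/us-central1/endpoints/my-endpoint"

body = {
    "contents": [
        {"role": "user", "parts": [{"text": "How many tokens is this?"}]},
    ],
}

response = (
    service.projects()
    .locations()
    .endpoints()
    .countTokens(endpoint=endpoint, body=body)
    .execute()
)
print(response["totalTokens"])
</pre>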
</div>
<div class="method">
<code class="details" id="create">create(parent, body=None, endpointId=None, x__xgafv=None)</code>
<pre>Creates an Endpoint.
Args:
parent: string, Required. The resource name of the Location to create the Endpoint in. Format: `projects/{project}/locations/{location}` (required)
body: object, The request body.
The object takes the form of:
{ # Models are deployed into an Endpoint, and afterwards the Endpoint is called to obtain predictions and explanations.
"clientConnectionConfig": { # Configurations (e.g. inference timeout) that are applied on your endpoints. # Configurations that are applied to the endpoint for online prediction.
"inferenceTimeout": "A String", # Customizable online prediction request timeout.
},
"createTime": "A String", # Output only. Timestamp when this Endpoint was created.
"dedicatedEndpointDns": "A String", # Output only. DNS of the dedicated endpoint. Will only be populated if dedicated_endpoint_enabled is true. Depending on the features enabled, uid might be a random number or a string. For example, if fast_tryout is enabled, uid will be fasttryout. Format: `https://{endpoint_id}.{region}-{uid}.prediction.vertexai.goog`.
"dedicatedEndpointEnabled": True or False, # If true, the endpoint will be exposed through a dedicated DNS [Endpoint.dedicated_endpoint_dns]. Your request to the dedicated DNS will be isolated from other users' traffic and will have better performance and reliability. Note: Once you enabled dedicated endpoint, you won't be able to send request to the shared DNS {region}-aiplatform.googleapis.com. The limitation will be removed soon.
"deployedModels": [ # Output only. The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.
{ # A deployment of a Model. Endpoints contain one or more DeployedModels.
"automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration.
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
"minReplicaCount": 42, # Immutable. The minimum number of replicas that will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
},
"checkpointId": "A String", # The checkpoint id of the model.
"createTime": "A String", # Output only. Timestamp when the DeployedModel was created.
"dedicatedResources": { # A description of resources that are dedicated to a DeployedModel or DeployedIndex, and that need a higher degree of manual configuration. # A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
"autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
{ # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
"metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization` * `aiplatform.googleapis.com/prediction/online/request_count`
"target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
},
],
"machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine being used.
"acceleratorCount": 42, # The number of accelerators to attach to the machine.
"acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
"machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
"reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
"key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
"reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
"values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation or reservation block.
"A String",
],
},
"tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
},
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
"minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas that will be always deployed on. This value must be greater than or equal to 1. If traffic increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
"requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial deployment/mutation is desired. If set, the deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
"spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
},
"disableContainerLogging": True or False, # For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true.
"disableExplanations": True or False, # If true, deploy the model without explainable feature, regardless the existence of Model.explanation_spec or explanation_spec.
"displayName": "A String", # The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
"enableAccessLogging": True or False, # If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option.
"explanationSpec": { # Specification of Model explanation. # Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
"metadata": { # Metadata describing the Model's input and output for explanation. # Optional. Metadata describing the Model's input and output for explanation.
"featureAttributionsSchemaUri": "A String", # Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"inputs": { # Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match with the key in instance.
"a_key": { # Metadata of the input of a feature. Fields other than InputMetadata.input_baselines are applicable only for Models that are using Vertex AI-provided images for Tensorflow.
"denseShapeTensorName": "A String", # Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"encodedBaselines": [ # A list of baselines for the encoded tensor. The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor.
"",
],
"encodedTensorName": "A String", # Encoded tensor is a transformation of the input tensor. Must be provided if choosing Integrated Gradients attribution or XRAI attribution and the input tensor is not differentiable. An encoded tensor is generated if the input tensor is encoded by a lookup table.
"encoding": "A String", # Defines how the feature is encoded into the input tensor. Defaults to IDENTITY.
"featureValueDomain": { # Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained. # The domain details of the input feature value. Like min/max, original mean or standard deviation if normalized.
"maxValue": 3.14, # The maximum permissible value for this feature.
"minValue": 3.14, # The minimum permissible value for this feature.
"originalMean": 3.14, # If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization.
"originalStddev": 3.14, # If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization.
},
"groupName": "A String", # Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in Attribution.feature_attributions, keyed by the group name.
"indexFeatureMapping": [ # A list of feature names for each index in the input tensor. Required when the input InputMetadata.encoding is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
"A String",
],
"indicesTensorName": "A String", # Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"inputBaselines": [ # Baseline inputs for this feature. If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in Attribution.feature_attributions. For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor. For custom images, the element of the baselines must be in the same format as the feature's input in the instance[]. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
"",
],
"inputTensorName": "A String", # Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow.
"modality": "A String", # Modality of the feature. Valid values are: numeric, image. Defaults to numeric.
"visualization": { # Visualization configurations for image explanation. # Visualization configurations for image explanation.
"clipPercentLowerbound": 3.14, # Excludes attributions below the specified percentile, from the highlighted areas. Defaults to 62.
"clipPercentUpperbound": 3.14, # Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
"colorMap": "A String", # The color scheme used for the highlighted areas. Defaults to PINK_GREEN for Integrated Gradients attribution, which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for XRAI attribution, which highlights the most influential regions in yellow and the least influential in blue.
"overlayType": "A String", # How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
"polarity": "A String", # Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE.
"type": "A String", # Type of the image visualization. Only applicable to Integrated Gradients attribution. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
},
},
},
"latentSpaceSource": "A String", # Name of the source to generate embeddings for example based explanations.
"outputs": { # Required. Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
"a_key": { # Metadata of the prediction output to be explained.
"displayNameMappingKey": "A String", # Specify a field name in the prediction to look for the display name. Use this if the prediction contains the display names for the outputs. The display names in the prediction must have the same shape of the outputs, so that it can be located by Attribution.output_index for a specific output.
"indexDisplayNameMapping": "", # Static mapping between the index and display name. Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sort the outputs by their values. The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The Attribution.output_display_name is populated by locating in the mapping with Attribution.output_index.
"outputTensorName": "A String", # Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow.
},
},
},
"parameters": { # Parameters to configure explaining for Model's predictions. # Required. Parameters that configure explaining of the Model's predictions.
"examples": { # Example-based explainability that returns the nearest neighbors from the provided dataset. # Example-based explanations that returns the nearest neighbors from the provided dataset.
"exampleGcsSource": { # The Cloud Storage input instances. # The Cloud Storage input instances.
"dataFormat": "A String", # The format in which instances are given, if not specified, assume it's JSONL format. Currently only JSONL format is supported.
"gcsSource": { # The Google Cloud Storage location for the input content. # The Cloud Storage location for the input instances.
"uris": [ # Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/wildcards.
"A String",
],
},
},
"nearestNeighborSearchConfig": "", # The full configuration for the generated index, the semantics are the same as metadata and should match [NearestNeighborSearchConfig](https://cloud.google.com/vertex-ai/docs/explainable-ai/configuring-explanations-example-based#nearest-neighbor-search-config).
"neighborCount": 42, # The number of neighbors to return when querying for examples.
"presets": { # Preset configuration for example-based explanations # Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
"modality": "A String", # The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
"query": "A String", # Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to `PRECISE`.
},
},
"integratedGradientsAttribution": { # An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365 # An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
},
"outputIndices": [ # If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape of the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e,g, multi-class Models that predict multiple classes).
"",
],
"sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
"pathCount": 42, # Required. The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
},
"topK": 42, # If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predicts more than one outputs (e,g, multi-class Models). When set to -1, returns explanations for all outputs.
"xraiAttribution": { # An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Supported only by image Models. # An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for XRAI with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is met within the desired error range. Valid range of its value is [1, 100], inclusively.
},
},
},
"fasterDeploymentConfig": { # Configuration for faster model deployment. # Configuration for faster model deployment.
"fastTryoutEnabled": True or False, # If true, enable fast tryout feature for this deployed model.
},
"gdcConnectedModel": "A String", # GDC pretrained / Gemini model name. The model name is a plain model name, e.g. gemini-1.5-flash-002.
"id": "A String", # Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID. This value should be 1-10 characters, and valid characters are `/[0-9]/`.
"model": "A String", # The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain version id or version alias to specify the version. Example: `projects/{project}/locations/{location}/models/{model}@2` or `projects/{project}/locations/{location}/models/{model}@golden` if no version is specified, the default version will be deployed.
"modelVersionId": "A String", # Output only. The version ID of the model that is deployed.
"privateEndpoints": { # PrivateEndpoints proto is used to provide paths for users to send requests privately. To send request via private service access, use predict_http_uri, explain_http_uri or health_http_uri. To send request via private service connect, use service_attachment. # Output only. Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
"explainHttpUri": "A String", # Output only. Http(s) path to send explain requests.
"healthHttpUri": "A String", # Output only. Http(s) path to send health check requests.
"predictHttpUri": "A String", # Output only. Http(s) path to send prediction requests.
"serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
},
"serviceAccount": "A String", # The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the `iam.serviceAccounts.actAs` permission on this service account.
"sharedResources": "A String", # The resource name of the shared DeploymentResourcePool to deploy on. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}`
"speculativeDecodingSpec": { # Configuration for Speculative Decoding. # Optional. Spec for configuring speculative decoding.
"draftModelSpeculation": { # Draft model speculation works by using the smaller model to generate candidate tokens for speculative decoding. # draft model speculation.
"draftModel": "A String", # Required. The resource name of the draft model.
},
"ngramSpeculation": { # N-Gram speculation works by trying to find matching tokens in the previous prompt sequence and use those as speculation for generating new tokens. # N-Gram speculation.
"ngramSize": 42, # The number of last N input tokens used as ngram to search/match against the previous prompt sequence. This is equal to the N in N-Gram. The default value is 3 if not specified.
},
"speculativeTokenCount": 42, # The number of speculative tokens to generate at each step.
},
"status": { # Runtime status of the deployed model. # Output only. Runtime status of the deployed model.
"availableReplicaCount": 42, # Output only. The number of available replicas of the deployed model.
"lastUpdateTime": "A String", # Output only. The time at which the status was last updated.
"message": "A String", # Output only. The latest deployed model's status message (if any).
},
"systemLabels": { # System labels to apply to Model Garden deployments. System labels are managed by Google for internal use only.
"a_key": "A String",
},
},
],
"description": "A String", # The description of the Endpoint.
"displayName": "A String", # Required. The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
"enablePrivateServiceConnect": True or False, # Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
"encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
"kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
},
"etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
"gdcConfig": { # Google Distributed Cloud (GDC) config. # Configures the Google Distributed Cloud (GDC) environment for online prediction. Only set this field when the Endpoint is to be deployed in a GDC environment.
"zone": "A String", # GDC zone. A cluster will be designated for the Vertex AI workload in this zone.
},
"genAiAdvancedFeaturesConfig": { # Configuration for GenAiAdvancedFeatures. # Optional. Configuration for GenAiAdvancedFeatures. If the endpoint is serving GenAI models, advanced features like native RAG integration can be configured. Currently, only Model Garden models are supported.
"ragConfig": { # Configuration for Retrieval Augmented Generation feature. # Configuration for Retrieval Augmented Generation feature.
"enableRag": True or False, # If true, enable Retrieval Augmented Generation in ChatCompletion request. Once enabled, the endpoint will be identified as GenAI endpoint and Arthedain router will be used.
},
},
"labels": { # The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
"a_key": "A String",
},
"modelDeploymentMonitoringJob": "A String", # Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: `projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}`
"name": "A String", # Output only. The resource name of the Endpoint.
"network": "A String", # Optional. The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks) to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where `{project}` is a project number, as in `12345`, and `{network}` is network name.
"predictRequestResponseLoggingConfig": { # Configuration for logging request-response to a BigQuery table. # Configures the request-response logging for online prediction.
"bigqueryDestination": { # The BigQuery location for the output content. # BigQuery table for logging. If only given a project, a new dataset will be created with name `logging__` where will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with name `request_response_logging`
"outputUri": "A String", # Required. BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table is created. When the full table reference is specified, the Dataset must exist and table must not exist. Accepted forms: * BigQuery path. For example: `bq://projectId` or `bq://projectId.bqDatasetId` or `bq://projectId.bqDatasetId.bqTableId`.
},
"enabled": True or False, # If logging is enabled or not.
"samplingRate": 3.14, # Percentage of requests to be logged, expressed as a fraction in range(0,1].
},
"privateServiceConnectConfig": { # Represents configuration for private service connect. # Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
"enablePrivateServiceConnect": True or False, # Required. If true, expose the IndexEndpoint via private service connect.
"projectAllowlist": [ # A list of Projects from which the forwarding rule will target the service attachment.
"A String",
],
"pscAutomationConfigs": [ # Optional. List of projects and networks where the PSC endpoints will be created. This field is used by Online Inference(Prediction) only.
{ # PSC config that is used to automatically create PSC endpoints in the user projects.
"errorMessage": "A String", # Output only. Error message if the PSC service automation failed.
"forwardingRule": "A String", # Output only. Forwarding rule created by the PSC service automation.
"ipAddress": "A String", # Output only. IP address rule created by the PSC service automation.
"network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/get): `projects/{project}/global/networks/{network}`.
"projectId": "A String", # Required. Project id used to create forwarding rule.
"state": "A String", # Output only. The state of the PSC service automation.
},
],
"serviceAttachment": "A String", # Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.
},
"satisfiesPzi": True or False, # Output only. Reserved for future use.
"satisfiesPzs": True or False, # Output only. Reserved for future use.
"trafficSplit": { # A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or map must be empty if the Endpoint is to not accept any traffic at a moment.
"a_key": 42,
},
"updateTime": "A String", # Output only. Timestamp when this Endpoint was last updated.
}
endpointId: string, Immutable. The ID to use for endpoint, which will become the final component of the endpoint resource name. If not provided, Vertex AI will generate a value for this ID. If the first character is a letter, this value may be up to 63 characters, and valid characters are `[a-z0-9-]`. The last character must be a letter or number. If the first character is a number, this value may be up to 9 characters, and valid characters are `[0-9]` with no leading zeros. When using HTTP/JSON, this field is populated based on a query string argument, such as `?endpoint_id=12345`. This is the fallback for fields that are not included in either the URI or the body.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
"done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
"response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
}</pre>
</div>
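<p>A minimal usage sketch for the create method above (not part of the generated reference). It assumes the google-api-python-client library; the project ID, region, and display name are placeholder assumptions, and Vertex AI calls are typically pinned to a regional endpoint.</p>
<pre>
from googleapiclient.discovery import build

# Build a discovery client pinned to an assumed region (us-central1).
aiplatform = build(
    "aiplatform",
    "v1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

# Only displayName is required in the Endpoint body.
op = aiplatform.projects().locations().endpoints().create(
    parent="projects/my-project/locations/us-central1",  # hypothetical project
    body={"displayName": "my-endpoint"},
).execute()
print(op["name"])  # resource name of the long-running operation
</pre>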
<div class="method">
<code class="details" id="delete">delete(name, x__xgafv=None)</code>
<pre>Deletes an Endpoint.
Args:
name: string, Required. The name of the Endpoint resource to be deleted. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
"done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
"response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
}</pre>
</div>
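<p>A minimal sketch (assumed resource names) of deleting an Endpoint and polling the returned long-running operation through this service's operations.get method; `aiplatform` is a discovery client built as in the create example above.</p>
<pre>
import time

op = aiplatform.projects().locations().endpoints().delete(
    name="projects/my-project/locations/us-central1/endpoints/1234567890",
).execute()

# Poll until done is true; then either error or response is set.
while not op.get("done"):
    time.sleep(5)
    op = aiplatform.projects().locations().operations().get(name=op["name"]).execute()
</pre>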
<div class="method">
<code class="details" id="deployModel">deployModel(endpoint, body=None, x__xgafv=None)</code>
<pre>Deploys a Model into this Endpoint, creating a DeployedModel within it.
Args:
endpoint: string, Required. The name of the Endpoint resource into which to deploy a Model. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for EndpointService.DeployModel.
"deployedModel": { # A deployment of a Model. Endpoints contain one or more DeployedModels. # Required. The DeployedModel to be created within the Endpoint. Note that Endpoint.traffic_split must be updated for the DeployedModel to start receiving traffic, either as part of this call, or via EndpointService.UpdateEndpoint.
"automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration.
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
"minReplicaCount": 42, # Immutable. The minimum number of replicas that will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
},
"checkpointId": "A String", # The checkpoint id of the model.
"createTime": "A String", # Output only. Timestamp when the DeployedModel was created.
"dedicatedResources": { # A description of resources that are dedicated to a DeployedModel or DeployedIndex, and that need a higher degree of manual configuration. # A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
"autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
{ # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
"metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization` * `aiplatform.googleapis.com/prediction/online/request_count`
"target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
},
],
"machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine being used.
"acceleratorCount": 42, # The number of accelerators to attach to the machine.
"acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
"machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
"reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
"key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
"reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
"values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation or reservation block.
"A String",
],
},
"tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
},
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
"minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas that will be always deployed on. This value must be greater than or equal to 1. If traffic increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
"requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial deployment/mutation is desired. If set, the deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
"spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
},
"disableContainerLogging": True or False, # For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true.
"disableExplanations": True or False, # If true, deploy the model without explainable feature, regardless the existence of Model.explanation_spec or explanation_spec.
"displayName": "A String", # The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
"enableAccessLogging": True or False, # If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option.
"explanationSpec": { # Specification of Model explanation. # Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
"metadata": { # Metadata describing the Model's input and output for explanation. # Optional. Metadata describing the Model's input and output for explanation.
"featureAttributionsSchemaUri": "A String", # Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"inputs": { # Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match with the key in instance.
"a_key": { # Metadata of the input of a feature. Fields other than InputMetadata.input_baselines are applicable only for Models that are using Vertex AI-provided images for Tensorflow.
"denseShapeTensorName": "A String", # Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"encodedBaselines": [ # A list of baselines for the encoded tensor. The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor.
"",
],
"encodedTensorName": "A String", # Encoded tensor is a transformation of the input tensor. Must be provided if choosing Integrated Gradients attribution or XRAI attribution and the input tensor is not differentiable. An encoded tensor is generated if the input tensor is encoded by a lookup table.
"encoding": "A String", # Defines how the feature is encoded into the input tensor. Defaults to IDENTITY.
"featureValueDomain": { # Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained. # The domain details of the input feature value. Like min/max, original mean or standard deviation if normalized.
"maxValue": 3.14, # The maximum permissible value for this feature.
"minValue": 3.14, # The minimum permissible value for this feature.
"originalMean": 3.14, # If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization.
"originalStddev": 3.14, # If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization.
},
"groupName": "A String", # Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in Attribution.feature_attributions, keyed by the group name.
"indexFeatureMapping": [ # A list of feature names for each index in the input tensor. Required when the input InputMetadata.encoding is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
"A String",
],
"indicesTensorName": "A String", # Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"inputBaselines": [ # Baseline inputs for this feature. If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in Attribution.feature_attributions. For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor. For custom images, the element of the baselines must be in the same format as the feature's input in the instance[]. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
"",
],
"inputTensorName": "A String", # Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow.
"modality": "A String", # Modality of the feature. Valid values are: numeric, image. Defaults to numeric.
"visualization": { # Visualization configurations for image explanation. # Visualization configurations for image explanation.
"clipPercentLowerbound": 3.14, # Excludes attributions below the specified percentile, from the highlighted areas. Defaults to 62.
"clipPercentUpperbound": 3.14, # Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
"colorMap": "A String", # The color scheme used for the highlighted areas. Defaults to PINK_GREEN for Integrated Gradients attribution, which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for XRAI attribution, which highlights the most influential regions in yellow and the least influential in blue.
"overlayType": "A String", # How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
"polarity": "A String", # Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE.
"type": "A String", # Type of the image visualization. Only applicable to Integrated Gradients attribution. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
},
},
},
"latentSpaceSource": "A String", # Name of the source to generate embeddings for example based explanations.
"outputs": { # Required. Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
"a_key": { # Metadata of the prediction output to be explained.
"displayNameMappingKey": "A String", # Specify a field name in the prediction to look for the display name. Use this if the prediction contains the display names for the outputs. The display names in the prediction must have the same shape of the outputs, so that it can be located by Attribution.output_index for a specific output.
"indexDisplayNameMapping": "", # Static mapping between the index and display name. Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sort the outputs by their values. The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The Attribution.output_display_name is populated by locating in the mapping with Attribution.output_index.
"outputTensorName": "A String", # Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow.
},
},
},
"parameters": { # Parameters to configure explaining for Model's predictions. # Required. Parameters that configure explaining of the Model's predictions.
"examples": { # Example-based explainability that returns the nearest neighbors from the provided dataset. # Example-based explanations that returns the nearest neighbors from the provided dataset.
"exampleGcsSource": { # The Cloud Storage input instances. # The Cloud Storage input instances.
"dataFormat": "A String", # The format in which instances are given, if not specified, assume it's JSONL format. Currently only JSONL format is supported.
"gcsSource": { # The Google Cloud Storage location for the input content. # The Cloud Storage location for the input instances.
"uris": [ # Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/wildcards.
"A String",
],
},
},
"nearestNeighborSearchConfig": "", # The full configuration for the generated index, the semantics are the same as metadata and should match [NearestNeighborSearchConfig](https://cloud.google.com/vertex-ai/docs/explainable-ai/configuring-explanations-example-based#nearest-neighbor-search-config).
"neighborCount": 42, # The number of neighbors to return when querying for examples.
"presets": { # Preset configuration for example-based explanations # Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
"modality": "A String", # The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
"query": "A String", # Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to `PRECISE`.
},
},
"integratedGradientsAttribution": { # An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365 # An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
},
"outputIndices": [ # If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape of the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e,g, multi-class Models that predict multiple classes).
"",
],
"sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
"pathCount": 42, # Required. The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
},
"topK": 42, # If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predicts more than one outputs (e,g, multi-class Models). When set to -1, returns explanations for all outputs.
"xraiAttribution": { # An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Supported only by image Models. # An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for XRAI with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is met within the desired error range. Valid range of its value is [1, 100], inclusively.
},
},
},
"fasterDeploymentConfig": { # Configuration for faster model deployment. # Configuration for faster model deployment.
"fastTryoutEnabled": True or False, # If true, enable fast tryout feature for this deployed model.
},
"gdcConnectedModel": "A String", # GDC pretrained / Gemini model name. The model name is a plain model name, e.g. gemini-1.5-flash-002.
"id": "A String", # Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID. This value should be 1-10 characters, and valid characters are `/[0-9]/`.
"model": "A String", # The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain version id or version alias to specify the version. Example: `projects/{project}/locations/{location}/models/{model}@2` or `projects/{project}/locations/{location}/models/{model}@golden` if no version is specified, the default version will be deployed.
"modelVersionId": "A String", # Output only. The version ID of the model that is deployed.
"privateEndpoints": { # PrivateEndpoints proto is used to provide paths for users to send requests privately. To send request via private service access, use predict_http_uri, explain_http_uri or health_http_uri. To send request via private service connect, use service_attachment. # Output only. Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
"explainHttpUri": "A String", # Output only. Http(s) path to send explain requests.
"healthHttpUri": "A String", # Output only. Http(s) path to send health check requests.
"predictHttpUri": "A String", # Output only. Http(s) path to send prediction requests.
"serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
},
"serviceAccount": "A String", # The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the `iam.serviceAccounts.actAs` permission on this service account.
"sharedResources": "A String", # The resource name of the shared DeploymentResourcePool to deploy on. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}`
"speculativeDecodingSpec": { # Configuration for Speculative Decoding. # Optional. Spec for configuring speculative decoding.
"draftModelSpeculation": { # Draft model speculation works by using the smaller model to generate candidate tokens for speculative decoding. # draft model speculation.
"draftModel": "A String", # Required. The resource name of the draft model.
},
"ngramSpeculation": { # N-Gram speculation works by trying to find matching tokens in the previous prompt sequence and use those as speculation for generating new tokens. # N-Gram speculation.
"ngramSize": 42, # The number of last N input tokens used as ngram to search/match against the previous prompt sequence. This is equal to the N in N-Gram. The default value is 3 if not specified.
},
"speculativeTokenCount": 42, # The number of speculative tokens to generate at each step.
},
"status": { # Runtime status of the deployed model. # Output only. Runtime status of the deployed model.
"availableReplicaCount": 42, # Output only. The number of available replicas of the deployed model.
"lastUpdateTime": "A String", # Output only. The time at which the status was last updated.
"message": "A String", # Output only. The latest deployed model's status message (if any).
},
"systemLabels": { # System labels to apply to Model Garden deployments. System labels are managed by Google for internal use only.
"a_key": "A String",
},
},
"trafficSplit": { # A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If this field is non-empty, then the Endpoint's traffic_split will be overwritten with it. To refer to the ID of the just being deployed Model, a "0" should be used, and the actual ID of the new DeployedModel will be filled in its place by this method. The traffic percentage values must add up to 100. If this field is empty, then the Endpoint's traffic_split is not updated.
"a_key": 42,
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
"done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
"response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
}</pre>
</div>
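<p>A minimal sketch of a deployModel request (the model name, machine type, and replica counts are assumptions). The "0" key in trafficSplit stands in for the DeployedModel being created, as described above, and the percentages must sum to 100; `aiplatform` is a discovery client built as in the create example.</p>
<pre>
body = {
    "deployedModel": {
        "model": "projects/my-project/locations/us-central1/models/123",  # hypothetical
        "displayName": "my-deployment",
        "dedicatedResources": {
            "machineSpec": {"machineType": "n1-standard-2"},
            "minReplicaCount": 1,
            "maxReplicaCount": 2,
        },
    },
    "trafficSplit": {"0": 100},  # route all traffic to the new DeployedModel
}
op = aiplatform.projects().locations().endpoints().deployModel(
    endpoint="projects/my-project/locations/us-central1/endpoints/1234567890",
    body=body,
).execute()
</pre>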
<div class="method">
<code class="details" id="directPredict">directPredict(endpoint, body=None, x__xgafv=None)</code>
<pre>Perform a unary online prediction request to a gRPC model server for Vertex first-party products and frameworks.
Args:
endpoint: string, Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for PredictionService.DirectPredict.
"inputs": [ # The prediction input.
{ # A tensor value type.
"boolVal": [ # Type specific representations that make it easy to create tensor protos in all languages. Only the representation corresponding to "dtype" can be set. The values hold the flattened representation of the tensor in row major order. BOOL
True or False,
],
"bytesVal": [ # STRING
"A String",
],
"doubleVal": [ # DOUBLE
3.14,
],
"dtype": "A String", # The data type of tensor.
"floatVal": [ # FLOAT
3.14,
],
"int64Val": [ # INT64
"A String",
],
"intVal": [ # INT_8 INT_16 INT_32
42,
],
"listVal": [ # A list of tensor values.
# Object with schema name: GoogleCloudAiplatformV1Tensor
],
"shape": [ # Shape of the tensor.
"A String",
],
"stringVal": [ # STRING
"A String",
],
"structVal": { # A map of string to tensor.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Tensor
},
"tensorVal": "A String", # Serialized raw tensor content.
"uint64Val": [ # UINT64
"A String",
],
"uintVal": [ # UINT8 UINT16 UINT32
42,
],
},
],
"parameters": { # A tensor value type. # The parameters that govern the prediction.
"boolVal": [ # Type specific representations that make it easy to create tensor protos in all languages. Only the representation corresponding to "dtype" can be set. The values hold the flattened representation of the tensor in row major order. BOOL
True or False,
],
"bytesVal": [ # STRING
"A String",
],
"doubleVal": [ # DOUBLE
3.14,
],
"dtype": "A String", # The data type of tensor.
"floatVal": [ # FLOAT
3.14,
],
"int64Val": [ # INT64
"A String",
],
"intVal": [ # INT_8 INT_16 INT_32
42,
],
"listVal": [ # A list of tensor values.
# Object with schema name: GoogleCloudAiplatformV1Tensor
],
"shape": [ # Shape of the tensor.
"A String",
],
"stringVal": [ # STRING
"A String",
],
"structVal": { # A map of string to tensor.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Tensor
},
"tensorVal": "A String", # Serialized raw tensor content.
"uint64Val": [ # UINT64
"A String",
],
"uintVal": [ # UINT8 UINT16 UINT32
42,
],
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for PredictionService.DirectPredict.
"outputs": [ # The prediction output.
{ # A tensor value type.
"boolVal": [ # Type specific representations that make it easy to create tensor protos in all languages. Only the representation corresponding to "dtype" can be set. The values hold the flattened representation of the tensor in row major order. BOOL
True or False,
],
"bytesVal": [ # STRING
"A String",
],
"doubleVal": [ # DOUBLE
3.14,
],
"dtype": "A String", # The data type of tensor.
"floatVal": [ # FLOAT
3.14,
],
"int64Val": [ # INT64
"A String",
],
"intVal": [ # INT_8 INT_16 INT_32
42,
],
"listVal": [ # A list of tensor values.
# Object with schema name: GoogleCloudAiplatformV1Tensor
],
"shape": [ # Shape of the tensor.
"A String",
],
"stringVal": [ # STRING
"A String",
],
"structVal": { # A map of string to tensor.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Tensor
},
"tensorVal": "A String", # Serialized raw tensor content.
"uint64Val": [ # UINT64
"A String",
],
"uintVal": [ # UINT8 UINT16 UINT32
42,
],
},
],
"parameters": { # A tensor value type. # The parameters that govern the prediction.
"boolVal": [ # Type specific representations that make it easy to create tensor protos in all languages. Only the representation corresponding to "dtype" can be set. The values hold the flattened representation of the tensor in row major order. BOOL
True or False,
],
"bytesVal": [ # STRING
"A String",
],
"doubleVal": [ # DOUBLE
3.14,
],
"dtype": "A String", # The data type of tensor.
"floatVal": [ # FLOAT
3.14,
],
"int64Val": [ # INT64
"A String",
],
"intVal": [ # INT_8 INT_16 INT_32
42,
],
"listVal": [ # A list of tensor values.
# Object with schema name: GoogleCloudAiplatformV1Tensor
],
"shape": [ # Shape of the tensor.
"A String",
],
"stringVal": [ # STRING
"A String",
],
"structVal": { # A map of string to tensor.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Tensor
},
"tensorVal": "A String", # Serialized raw tensor content.
"uint64Val": [ # UINT64
"A String",
],
"uintVal": [ # UINT8 UINT16 UINT32
42,
],
},
}</pre>
</div>
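<p>A minimal sketch of a directPredict call sending a single flattened float tensor; the dtype, shape, and values are illustrative assumptions. Note that int64 fields such as shape are carried as JSON strings.</p>
<pre>
resp = aiplatform.projects().locations().endpoints().directPredict(
    endpoint="projects/my-project/locations/us-central1/endpoints/1234567890",
    body={
        "inputs": [
            {
                "dtype": "FLOAT",
                "shape": ["1", "4"],  # int64 values are JSON strings
                "floatVal": [5.1, 3.5, 1.4, 0.2],  # row-major flattened tensor
            }
        ],
    },
).execute()
print(resp.get("outputs", []))
</pre>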
<div class="method">
<code class="details" id="directRawPredict">directRawPredict(endpoint, body=None, x__xgafv=None)</code>
<pre>Perform a unary online prediction request to a gRPC model server for custom containers.
Args:
endpoint: string, Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for PredictionService.DirectRawPredict.
"input": "A String", # The prediction input.
"methodName": "A String", # Fully qualified name of the API method being invoked to perform predictions. Format: `/namespace.Service/Method/` Example: `/tensorflow.serving.PredictionService/Predict`
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for PredictionService.DirectRawPredict.
"output": "A String", # The prediction output.
}</pre>
</div>
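<p>A minimal sketch of a directRawPredict call; the methodName follows the `/namespace.Service/Method/` format documented above, and the payload is a placeholder. Because `input` is a bytes field, it is base64-encoded for JSON transport.</p>
<pre>
import base64

resp = aiplatform.projects().locations().endpoints().directRawPredict(
    endpoint="projects/my-project/locations/us-central1/endpoints/1234567890",
    body={
        "methodName": "/tensorflow.serving.PredictionService/Predict",
        "input": base64.b64encode(b"...serialized request...").decode("ascii"),
    },
).execute()
</pre>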
<div class="method">
<code class="details" id="explain">explain(endpoint, body=None, x__xgafv=None)</code>
<pre>Perform an online explanation. If deployed_model_id is specified, the corresponding DeployedModel must have explanation_spec populated. If deployed_model_id is not specified, all DeployedModels must have explanation_spec populated.
Args:
endpoint: string, Required. The name of the Endpoint requested to serve the explanation. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for PredictionService.Explain.
"deployedModelId": "A String", # If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.
"explanationSpecOverride": { # The ExplanationSpec entries that can be overridden at online explanation time. # If specified, overrides the explanation_spec of the DeployedModel. Can be used for explaining prediction results with different configurations, such as: - Explaining top-5 predictions results as opposed to top-1; - Increasing path count or step count of the attribution methods to reduce approximate errors; - Using different baselines for explaining the prediction results.
"examplesOverride": { # Overrides for example-based explanations. # The example-based explanations parameter overrides.
"crowdingCount": 42, # The number of neighbors to return that have the same crowding tag.
"dataFormat": "A String", # The format of the data being provided with each call.
"neighborCount": 42, # The number of neighbors to return.
"restrictions": [ # Restrict the resulting nearest neighbors to respect these constraints.
{ # Restrictions namespace for example-based explanations overrides.
"allow": [ # The list of allowed tags.
"A String",
],
"deny": [ # The list of deny tags.
"A String",
],
"namespaceName": "A String", # The namespace name.
},
],
"returnEmbeddings": True or False, # If true, return the embeddings instead of neighbors.
},
"metadata": { # The ExplanationMetadata entries that can be overridden at online explanation time. # The metadata to be overridden. If not specified, no metadata is overridden.
"inputs": { # Required. Overrides the input metadata of the features. The key is the name of the feature to be overridden. The keys specified here must exist in the input metadata to be overridden. If a feature is not specified here, the corresponding feature's input metadata is not overridden.
"a_key": { # The input metadata entries to be overridden.
"inputBaselines": [ # Baseline inputs for this feature. This overrides the `input_baseline` field of the ExplanationMetadata.InputMetadata object of the corresponding feature's input metadata. If it's not specified, the original baselines are not overridden.
"",
],
},
},
},
"parameters": { # Parameters to configure explaining for Model's predictions. # The parameters to be overridden. Note that the attribution method cannot be changed. If not specified, no parameter is overridden.
"examples": { # Example-based explainability that returns the nearest neighbors from the provided dataset. # Example-based explanations that returns the nearest neighbors from the provided dataset.
"exampleGcsSource": { # The Cloud Storage input instances. # The Cloud Storage input instances.
"dataFormat": "A String", # The format in which instances are given, if not specified, assume it's JSONL format. Currently only JSONL format is supported.
"gcsSource": { # The Google Cloud Storage location for the input content. # The Cloud Storage location for the input instances.
"uris": [ # Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/wildcards.
"A String",
],
},
},
"nearestNeighborSearchConfig": "", # The full configuration for the generated index, the semantics are the same as metadata and should match [NearestNeighborSearchConfig](https://cloud.google.com/vertex-ai/docs/explainable-ai/configuring-explanations-example-based#nearest-neighbor-search-config).
"neighborCount": 42, # The number of neighbors to return when querying for examples.
"presets": { # Preset configuration for example-based explanations # Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
"modality": "A String", # The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
"query": "A String", # Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to `PRECISE`.
},
},
"integratedGradientsAttribution": { # An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365 # An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
},
"outputIndices": [ # If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape of the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e,g, multi-class Models that predict multiple classes).
"",
],
"sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
"pathCount": 42, # Required. The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
},
"topK": 42, # If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predicts more than one outputs (e,g, multi-class Models). When set to -1, returns explanations for all outputs.
"xraiAttribution": { # An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Supported only by image Models. # An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for XRAI with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is met within the desired error range. Valid range of its value is [1, 100], inclusively.
},
},
},
"instances": [ # Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request, and when it is exceeded the explanation call errors in case of AutoML Models, or, in case of customer created Models, the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
"",
],
"parameters": "", # The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for PredictionService.Explain.
"deployedModelId": "A String", # ID of the Endpoint's DeployedModel that served this explanation.
"explanations": [ # The explanations of the Model's PredictResponse.predictions. It has the same number of elements as instances to be explained.
{ # Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance.
"attributions": [ # Output only. Feature attributions grouped by predicted outputs. For Models that predict only one output, such as regression Models that predict only one score, there is only one attibution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining. By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of `0.4` for approving a loan application, the model's decision is to reject the application since `p(reject) = 0.6 > p(approve) = 0.4`, and the default Shapley values would be computed for rejection decision and not approval, even though the latter might be the positive class. If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
{ # Attribution that explains a particular prediction output.
"approximationError": 3.14, # Output only. Error of feature_attributions caused by approximation used in the explanation method. Lower value means more precise attributions. * For Sampled Shapley attribution, increasing path_count might reduce the error. * For Integrated Gradients attribution, increasing step_count might reduce the error. * For XRAI attribution, increasing step_count might reduce the error. See [this introduction](/vertex-ai/docs/explainable-ai/overview) for more information.
"baselineOutputValue": 3.14, # Output only. Model predicted output if the input instance is constructed from the baselines of all the features defined in ExplanationMetadata.inputs. The field name of the output is determined by the key in ExplanationMetadata.outputs. If the Model's predicted output has multiple dimensions (rank > 1), this is the value in the output located by output_index. If there are multiple baselines, their output values are averaged.
"featureAttributions": "", # Output only. Attributions of each explained feature. Features are extracted from the prediction instances according to explanation metadata for inputs. The value is a struct, whose keys are the name of the feature. The values are how much the feature in the instance contributed to the predicted result. The format of the value is determined by the feature's input format: * If the feature is a scalar value, the attribution value is a floating number. * If the feature is an array of scalar values, the attribution value is an array. * If the feature is a struct, the attribution value is a struct. The keys in the attribution value struct are the same as the keys in the feature struct. The formats of the values in the attribution struct are determined by the formats of the values in the feature struct. The ExplanationMetadata.feature_attributions_schema_uri field, pointed to by the ExplanationSpec field of the Endpoint.deployed_models object, points to the schema file that describes the features and their attribution values (if it is populated).
"instanceOutputValue": 3.14, # Output only. Model predicted output on the corresponding explanation instance. The field name of the output is determined by the key in ExplanationMetadata.outputs. If the Model predicted output has multiple dimensions, this is the value in the output located by output_index.
"outputDisplayName": "A String", # Output only. The display name of the output identified by output_index. For example, the predicted class name by a multi-classification Model. This field is only populated iff the Model predicts display names as a separate field along with the explained output. The predicted display name must has the same shape of the explained output, and can be located using output_index.
"outputIndex": [ # Output only. The index that locates the explained prediction output. If the prediction output is a scalar value, output_index is not populated. If the prediction output has multiple dimensions, the length of the output_index list is the same as the number of dimensions of the output. The i-th element in output_index is the element index of the i-th dimension of the output vector. Indices start from 0.
42,
],
"outputName": "A String", # Output only. Name of the explain output. Specified as the key in ExplanationMetadata.outputs.
},
],
"neighbors": [ # Output only. List of the nearest neighbors for example-based explanations. For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
{ # Neighbors for example-based explanations.
"neighborDistance": 3.14, # Output only. The neighbor distance.
"neighborId": "A String", # Output only. The neighbor id.
},
],
},
],
"predictions": [ # The predictions that are the output of the predictions call. Same as PredictResponse.predictions.
"",
],
}</pre>
</div>
<div class="method">
<code class="details" id="fetchPredictOperation">fetchPredictOperation(endpoint, body=None, x__xgafv=None)</code>
<pre>Fetch an asynchronous online prediction operation.
Args:
endpoint: string, Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` or `projects/{project}/locations/{location}/publishers/{publisher}/models/{model}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for PredictionService.FetchPredictOperation.
"operationName": "A String", # Required. The server-assigned name for the operation.
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
"done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
"response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
}</pre>
</div>
<div class="method">
<code class="details" id="generateContent">generateContent(model, body=None, x__xgafv=None)</code>
<pre>Generate content with multimodal inputs.
Args:
model: string, Required. The fully qualified name of the publisher model or tuned model endpoint to use. Publisher model format: `projects/{project}/locations/{location}/publishers/*/models/*` Tuned model endpoint format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for [PredictionService.GenerateContent].
"cachedContent": "A String", # Optional. The name of the cached content used as context to serve the prediction. Note: only used in explicit caching, where users can have control over caching (e.g. what content to cache) and enjoy guaranteed cost savings. Format: `projects/{project}/locations/{location}/cachedContents/{cachedContent}`
"contents": [ # Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.
{ # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
],
"generationConfig": { # Generation config. # Optional. Generation config.
"audioTimestamp": True or False, # Optional. If enabled, audio timestamp will be included in the request to the model.
"candidateCount": 42, # Optional. Number of candidates to generate.
"enableAffectiveDialog": True or False, # Optional. If enabled, the model will detect emotions and adapt its responses accordingly.
"frequencyPenalty": 3.14, # Optional. Frequency penalties.
"logprobs": 42, # Optional. Logit probabilities.
"maxOutputTokens": 42, # Optional. The maximum number of output tokens to generate per message.
"mediaResolution": "A String", # Optional. If specified, the media resolution specified will be used.
"presencePenalty": 3.14, # Optional. Positive penalties.
"responseJsonSchema": "", # Optional. Output schema of the generated response. This is an alternative to `response_schema` that accepts [JSON Schema](https://json-schema.org/). If set, `response_schema` must be omitted, but `response_mime_type` is required. While the full JSON Schema may be sent, not all features are supported. Specifically, only the following properties are supported: - `$id` - `$defs` - `$ref` - `$anchor` - `type` - `format` - `title` - `description` - `enum` (for strings and numbers) - `items` - `prefixItems` - `minItems` - `maxItems` - `minimum` - `maximum` - `anyOf` - `oneOf` (interpreted the same as `anyOf`) - `properties` - `additionalProperties` - `required` The non-standard `propertyOrdering` property may also be set. Cyclic references are unrolled to a limited degree and, as such, may only be used within non-required properties. (Nullable properties are not sufficient.) If `$ref` is set on a sub-schema, no other properties, except for than those starting as a `$`, may be set.
"responseLogprobs": True or False, # Optional. If true, export the logprobs results in response.
"responseMimeType": "A String", # Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature.
"responseModalities": [ # Optional. The modalities of the response.
"A String",
],
"responseSchema": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema-object). More fields may be added in the future as needed. # Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response.
"additionalProperties": "", # Optional. Can either be a boolean or an object; controls the presence of additional properties.
"anyOf": [ # Optional. The value should be validated against any (one or more) of the subschemas in the list.
# Object with schema name: GoogleCloudAiplatformV1Schema
],
"default": "", # Optional. Default value of the data.
"defs": { # Optional. A map of definitions for use by `ref` Only allowed at the root of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"description": "A String", # Optional. The description of the data.
"enum": [ # Optional. Possible values of the element of primitive type with enum format. Examples: 1. We can define direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} 2. We can define apartment number as : {type:INTEGER, format:enum, enum:["101", "201", "301"]}
"A String",
],
"example": "", # Optional. Example of the object. Will only populated when the object is the root.
"format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
"items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
"maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
"maxLength": "A String", # Optional. Maximum length of the Type.STRING
"maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
"maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
"minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
"minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
"minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
"minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
"nullable": True or False, # Optional. Indicates if the value may be null.
"pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
"properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"propertyOrdering": [ # Optional. The order of the properties. Not a standard field in open api spec. Only used to support the order of the properties.
"A String",
],
"ref": "A String", # Optional. Allows indirect references between schema nodes. The value should be a valid reference to a child of the root `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. Required properties of Type.OBJECT.
"A String",
],
"title": "A String", # Optional. The title of the Schema.
"type": "A String", # Optional. The type of the data.
},
"routingConfig": { # The configuration for routing the request to a specific model. # Optional. Routing configuration.
"autoMode": { # When automated routing is specified, the routing will be determined by the pretrained routing model and customer provided model routing preference. # Automated routing.
"modelRoutingPreference": "A String", # The model routing preference.
},
"manualMode": { # When manual routing is set, the specified model will be used directly. # Manual routing.
"modelName": "A String", # The model name to use. Only the public LLM models are accepted. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
},
"seed": 42, # Optional. Seed.
"speechConfig": { # The speech generation config. # Optional. The speech generation config.
"languageCode": "A String", # Optional. Language code (ISO 639. e.g. en-US) for the speech synthesization.
"voiceConfig": { # The configuration for the voice to use. # The configuration for the speaker to use.
"prebuiltVoiceConfig": { # The configuration for the prebuilt speaker to use. # The configuration for the prebuilt voice to use.
"voiceName": "A String", # The name of the preset voice to use.
},
},
},
"stopSequences": [ # Optional. Stop sequences.
"A String",
],
"temperature": 3.14, # Optional. Controls the randomness of predictions.
"thinkingConfig": { # Config for thinking features. # Optional. Config for thinking features. An error will be returned if this field is set for models that don't support thinking.
"includeThoughts": True or False, # Optional. Indicates whether to include thoughts in the response. If true, thoughts are returned only when available.
"thinkingBudget": 42, # Optional. Indicates the thinking budget in tokens.
},
"topK": 3.14, # Optional. If specified, top-k sampling will be used.
"topP": 3.14, # Optional. If specified, nucleus sampling will be used.
},
"labels": { # Optional. The labels with user-defined metadata for the request. It is used for billing and reporting only. Label keys and values can be no longer than 63 characters (Unicode codepoints) and can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter.
"a_key": "A String",
},
"modelArmorConfig": { # Configuration for Model Armor integrations of prompt and responses. # Optional. Settings for prompt and response sanitization using the Model Armor service. If supplied, safety_settings must not be supplied.
"promptTemplateName": "A String", # Optional. The name of the Model Armor template to use for prompt sanitization.
"responseTemplateName": "A String", # Optional. The name of the Model Armor template to use for response sanitization.
},
"safetySettings": [ # Optional. Per request settings for blocking unsafe content. Enforced on GenerateContentResponse.candidates.
{ # Safety settings.
"category": "A String", # Required. Harm category.
"method": "A String", # Optional. Specify if the threshold is used for probability or severity score. If not specified, the threshold is used for probability score.
"threshold": "A String", # Required. The harm block threshold.
},
],
"systemInstruction": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Optional. The user provided system instructions for the model. Note: only text should be used in parts and content in each part will be in a separate paragraph.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
"toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Tool config. This config is shared for all tools provided in the request.
"functionCallingConfig": { # Function calling config. # Optional. Function calling config.
"allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
"A String",
],
"mode": "A String", # Optional. Function calling mode.
},
"retrievalConfig": { # Retrieval config. # Optional. Retrieval config.
"languageCode": "A String", # The language code of the user.
"latLng": { # An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges. # The location of the user.
"latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
"longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
},
},
},
"tools": [ # Optional. A list of `Tools` the model may use to generate the next response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64.
"parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema-object). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. Can either be a boolean or an object; controls the presence of additional properties.
"anyOf": [ # Optional. The value should be validated against any (one or more) of the subschemas in the list.
# Object with schema name: GoogleCloudAiplatformV1Schema
],
"default": "", # Optional. Default value of the data.
"defs": { # Optional. A map of definitions for use by `ref` Only allowed at the root of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"description": "A String", # Optional. The description of the data.
"enum": [ # Optional. Possible values of the element of primitive type with enum format. Examples: 1. We can define direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} 2. We can define apartment number as : {type:INTEGER, format:enum, enum:["101", "201", "301"]}
"A String",
],
"example": "", # Optional. Example of the object. Will only populated when the object is the root.
"format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
"items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
"maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
"maxLength": "A String", # Optional. Maximum length of the Type.STRING
"maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
"maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
"minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
"minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
"minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
"minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
"nullable": True or False, # Optional. Indicates if the value may be null.
"pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
"properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"propertyOrdering": [ # Optional. The order of the properties. Not a standard field in open api spec. Only used to support the order of the properties.
"A String",
],
"ref": "A String", # Optional. Allows indirect references between schema nodes. The value should be a valid reference to a child of the root `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. Required properties of Type.OBJECT.
"A String",
],
"title": "A String", # Optional. The title of the Schema.
"type": "A String", # Optional. The type of the data.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema-object). More fields may be added in the future as needed. # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. Can either be a boolean or an object; controls the presence of additional properties.
"anyOf": [ # Optional. The value should be validated against any (one or more) of the subschemas in the list.
# Object with schema name: GoogleCloudAiplatformV1Schema
],
"default": "", # Optional. Default value of the data.
"defs": { # Optional. A map of definitions for use by `ref` Only allowed at the root of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"description": "A String", # Optional. The description of the data.
"enum": [ # Optional. Possible values of the element of primitive type with enum format. Examples: 1. We can define direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} 2. We can define apartment number as : {type:INTEGER, format:enum, enum:["101", "201", "301"]}
"A String",
],
"example": "", # Optional. Example of the object. Will only populated when the object is the root.
"format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
"items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
"maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
"maxLength": "A String", # Optional. Maximum length of the Type.STRING
"maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
"maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
"minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
"minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
"minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
"minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
"nullable": True or False, # Optional. Indicates if the value may be null.
"pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
"properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"propertyOrdering": [ # Optional. The order of the properties. Not a standard field in open api spec. Only used to support the order of the properties.
"A String",
],
"ref": "A String", # Optional. Allows indirect references between schema nodes. The value should be a valid reference to a child of the root `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. Required properties of Type.OBJECT.
"A String",
],
"title": "A String", # Optional. The title of the Schema.
"type": "A String", # Optional. The type of the data.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
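
Example (a minimal sketch, not part of the generated reference: it assumes a service object built with googleapiclient.discovery, a regional API endpoint, and placeholder project/location/endpoint values; the tool payload shown is illustrative):

  from googleapiclient import discovery

  # The aiplatform API is regional; the api_endpoint below is an assumption and
  # must match the location used in the resource name.
  service = discovery.build(
      "aiplatform",
      "v1",
      client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
  )
  body = {
      "contents": [{"role": "user", "parts": [{"text": "What is Vertex AI?"}]}],
      # Enable grounding with the Google Search tool (see the `tools` field above).
      "tools": [{"googleSearch": {}}],
  }
  response = service.projects().locations().endpoints().generateContent(
      model="projects/my-project/locations/us-central1/endpoints/my-endpoint",
      body=body,
  ).execute()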
Returns:
An object of the form:
{ # Response message for [PredictionService.GenerateContent].
"candidates": [ # Output only. Generated candidates.
{ # A response candidate generated from the model.
"avgLogprobs": 3.14, # Output only. Average log probability score of the candidate.
"citationMetadata": { # A collection of source attributions for a piece of content. # Output only. Source attribution of the generated content.
"citations": [ # Output only. List of citations.
{ # Source attributions for content.
"endIndex": 42, # Output only. End index into the content.
"license": "A String", # Output only. License of the attribution.
"publicationDate": { # Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values. * A month and day, with a zero year (for example, an anniversary). * A year on its own, with a zero month and a zero day. * A year and month, with a zero day (for example, a credit card expiration date). Related types: * google.type.TimeOfDay * google.type.DateTime * google.protobuf.Timestamp # Output only. Publication date of the attribution.
"day": 42, # Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
"month": 42, # Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
"year": 42, # Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
},
"startIndex": 42, # Output only. Start index into the content.
"title": "A String", # Output only. Title of the attribution.
"uri": "A String", # Output only. Url reference of the attribution.
},
],
},
"content": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Output only. Content parts of the candidate.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
"finishMessage": "A String", # Output only. Describes the reason the mode stopped generating tokens in more detail. This is only filled when `finish_reason` is set.
"finishReason": "A String", # Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens.
"groundingMetadata": { # Metadata returned to client when grounding is enabled. # Output only. Metadata specifies sources used to ground generated content.
"googleMapsWidgetContextToken": "A String", # Optional. Output only. Resource name of the Google Maps widget context token to be used with the PlacesContextElement widget to render contextual data. This is populated only for Google Maps grounding.
"groundingChunks": [ # List of supporting references retrieved from specified grounding source.
{ # Grounding chunk.
"maps": { # Chunk from Google Maps. # Grounding chunk from Google Maps.
"placeAnswerSources": { # Sources used to generate the place answer. # Sources used to generate the place answer. This includes review snippets and photos that were used to generate the answer, as well as uris to flag content.
"flagContentUri": "A String", # A link where users can flag a problem with the generated answer.
"reviewSnippets": [ # Snippets of reviews that are used to generate the answer.
{ # Encapsulates a review snippet.
"authorAttribution": { # Author attribution for a photo or review. # This review's author.
"displayName": "A String", # Name of the author of the Photo or Review.
"photoUri": "A String", # Profile photo URI of the author of the Photo or Review.
"uri": "A String", # URI of the author of the Photo or Review.
},
"flagContentUri": "A String", # A link where users can flag a problem with the review.
"googleMapsUri": "A String", # A link to show the review on Google Maps.
"relativePublishTimeDescription": "A String", # A string of formatted recent time, expressing the review time relative to the current time in a form appropriate for the language and country.
"review": "A String", # A reference representing this place review which may be used to look up this place review again.
},
],
},
"placeId": "A String", # This Place's resource name, in `places/{place_id}` format. Can be used to look up the Place.
"text": "A String", # Text of the chunk.
"title": "A String", # Title of the chunk.
"uri": "A String", # URI reference of the chunk.
},
"retrievedContext": { # Chunk from context retrieved by the retrieval tools. # Grounding chunk from context retrieved by the retrieval tools.
"documentName": "A String", # Output only. The full document name for the referenced Vertex AI Search document.
"ragChunk": { # A RagChunk includes the content of a chunk of a RagFile, and associated metadata. # Additional context for the RAG retrieval result. This is only populated when using the RAG retrieval tool.
"pageSpan": { # Represents where the chunk starts and ends in the document. # If populated, represents where the chunk starts and ends in the document.
"firstPage": 42, # Page where chunk starts in the document. Inclusive. 1-indexed.
"lastPage": 42, # Page where chunk ends in the document. Inclusive. 1-indexed.
},
"text": "A String", # The content of the chunk.
},
"text": "A String", # Text of the attribution.
"title": "A String", # Title of the attribution.
"uri": "A String", # URI reference of the attribution.
},
"web": { # Chunk from the web. # Grounding chunk from the web.
"domain": "A String", # Domain of the (original) URI.
"title": "A String", # Title of the chunk.
"uri": "A String", # URI reference of the chunk.
},
},
],
"groundingSupports": [ # Optional. List of grounding support.
{ # Grounding support.
"confidenceScores": [ # Confidence score of the support references. Ranges from 0 to 1. 1 is the most confident. For Gemini 2.0 and before, this list must have the same size as the grounding_chunk_indices. For Gemini 2.5 and after, this list will be empty and should be ignored.
3.14,
],
"groundingChunkIndices": [ # A list of indices (into 'grounding_chunk') specifying the citations associated with the claim. For instance [1,3,4] means that grounding_chunk[1], grounding_chunk[3], grounding_chunk[4] are the retrieved content attributed to the claim.
42,
],
"segment": { # Segment of the content. # Segment of the content this support belongs to.
"endIndex": 42, # Output only. End index in the given Part, measured in bytes. Offset from the start of the Part, exclusive, starting at zero.
"partIndex": 42, # Output only. The index of a Part object within its parent Content object.
"startIndex": 42, # Output only. Start index in the given Part, measured in bytes. Offset from the start of the Part, inclusive, starting at zero.
"text": "A String", # Output only. The text corresponding to the segment from the response.
},
},
],
"retrievalMetadata": { # Metadata related to retrieval in the grounding flow. # Optional. Output only. Retrieval metadata.
"googleSearchDynamicRetrievalScore": 3.14, # Optional. Score indicating how likely information from Google Search could help answer the prompt. The score is in the range `[0, 1]`, where 0 is the least likely and 1 is the most likely. This score is only populated when Google Search grounding and dynamic retrieval is enabled. It will be compared to the threshold to determine whether to trigger Google Search.
},
"searchEntryPoint": { # Google search entry point. # Optional. Google search entry for the following-up web searches.
"renderedContent": "A String", # Optional. Web content snippet that can be embedded in a web page or an app webview.
"sdkBlob": "A String", # Optional. Base64 encoded JSON representing array of tuple.
},
"webSearchQueries": [ # Optional. Web search queries for the following-up web search.
"A String",
],
},
"index": 42, # Output only. Index of the candidate.
"logprobsResult": { # Logprobs Result # Output only. Log-likelihood scores for the response tokens and top tokens
"chosenCandidates": [ # Length = total number of decoding steps. The chosen candidates may or may not be in top_candidates.
{ # Candidate for the logprobs token and score.
"logProbability": 3.14, # The candidate's log probability.
"token": "A String", # The candidate's token string value.
"tokenId": 42, # The candidate's token id value.
},
],
"topCandidates": [ # Length = total number of decoding steps.
{ # Candidates with top log probabilities at each decoding step.
"candidates": [ # Sorted by log probability in descending order.
{ # Candidate for the logprobs token and score.
"logProbability": 3.14, # The candidate's log probability.
"token": "A String", # The candidate's token string value.
"tokenId": 42, # The candidate's token id value.
},
],
},
],
},
"safetyRatings": [ # Output only. List of ratings for the safety of a response candidate. There is at most one rating per category.
{ # Safety rating corresponding to the generated content.
"blocked": True or False, # Output only. Indicates whether the content was filtered out because of this rating.
"category": "A String", # Output only. Harm category.
"overwrittenThreshold": "A String", # Output only. The overwritten threshold for the safety category of Gemini 2.0 image out. If minors are detected in the output image, the threshold of each safety category will be overwritten if user sets a lower threshold.
"probability": "A String", # Output only. Harm probability levels in the content.
"probabilityScore": 3.14, # Output only. Harm probability score.
"severity": "A String", # Output only. Harm severity levels in the content.
"severityScore": 3.14, # Output only. Harm severity score.
},
],
"urlContextMetadata": { # Metadata related to url context retrieval tool. # Output only. Metadata related to url context retrieval tool.
"urlMetadata": [ # Output only. List of url context.
{ # Context of the a single url retrieval.
"retrievedUrl": "A String", # Retrieved url by the tool.
"urlRetrievalStatus": "A String", # Status of the url retrieval.
},
],
},
},
],
"createTime": "A String", # Output only. Timestamp when the request is made to the server.
"modelVersion": "A String", # Output only. The model version used to generate the response.
"promptFeedback": { # Content filter results for a prompt sent in the request. # Output only. Content filter results for a prompt sent in the request. Note: Sent only in the first stream chunk. Only happens when no candidates were generated due to content violations.
"blockReason": "A String", # Output only. Blocked reason.
"blockReasonMessage": "A String", # Output only. A readable block reason message.
"safetyRatings": [ # Output only. Safety ratings.
{ # Safety rating corresponding to the generated content.
"blocked": True or False, # Output only. Indicates whether the content was filtered out because of this rating.
"category": "A String", # Output only. Harm category.
"overwrittenThreshold": "A String", # Output only. The overwritten threshold for the safety category of Gemini 2.0 image out. If minors are detected in the output image, the threshold of each safety category will be overwritten if user sets a lower threshold.
"probability": "A String", # Output only. Harm probability levels in the content.
"probabilityScore": 3.14, # Output only. Harm probability score.
"severity": "A String", # Output only. Harm severity levels in the content.
"severityScore": 3.14, # Output only. Harm severity score.
},
],
},
"responseId": "A String", # Output only. response_id is used to identify each response. It is the encoding of the event_id.
"usageMetadata": { # Usage metadata about response(s). # Usage metadata about the response(s).
"cacheTokensDetails": [ # Output only. List of modalities of the cached content in the request input.
{ # Represents token counting info for a single modality.
"modality": "A String", # The modality associated with this token count.
"tokenCount": 42, # Number of tokens.
},
],
"cachedContentTokenCount": 42, # Output only. Number of tokens in the cached part in the input (the cached content).
"candidatesTokenCount": 42, # Number of tokens in the response(s).
"candidatesTokensDetails": [ # Output only. List of modalities that were returned in the response.
{ # Represents token counting info for a single modality.
"modality": "A String", # The modality associated with this token count.
"tokenCount": 42, # Number of tokens.
},
],
"promptTokenCount": 42, # Number of tokens in the request. When `cached_content` is set, this is still the total effective prompt size meaning this includes the number of tokens in the cached content.
"promptTokensDetails": [ # Output only. List of modalities that were processed in the request input.
{ # Represents token counting info for a single modality.
"modality": "A String", # The modality associated with this token count.
"tokenCount": 42, # Number of tokens.
},
],
"thoughtsTokenCount": 42, # Output only. Number of tokens present in thoughts output.
"toolUsePromptTokenCount": 42, # Output only. Number of tokens present in tool-use prompt(s).
"toolUsePromptTokensDetails": [ # Output only. List of modalities that were processed for tool-use request inputs.
{ # Represents token counting info for a single modality.
"modality": "A String", # The modality associated with this token count.
"tokenCount": 42, # Number of tokens.
},
],
"totalTokenCount": 42, # Total token count for prompt, response candidates, and tool-use prompts (if present).
"trafficType": "A String", # Output only. Traffic type. This shows whether a request consumes Pay-As-You-Go or Provisioned Throughput quota.
},
}</pre>
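<pre>Example (a hedged sketch of consuming the response documented above; `response` is assumed to be the dict returned by generateContent(...).execute(), and which fields are present depends on the request):

  # Print the text parts of each candidate, any web-search queries used for
  # grounding, and the total token usage.
  for candidate in response.get("candidates", []):
      for part in candidate.get("content", {}).get("parts", []):
          if "text" in part:
              print(part["text"])
      grounding = candidate.get("groundingMetadata", {})
      for query in grounding.get("webSearchQueries", []):
          print("web search query:", query)
  usage = response.get("usageMetadata", {})
  print("total tokens:", usage.get("totalTokenCount"))
</pre>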
</div>
<div class="method">
<code class="details" id="get">get(name, x__xgafv=None)</code>
<pre>Gets an Endpoint.
Args:
name: string, Required. The name of the Endpoint resource. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
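
Example (a minimal sketch, assuming the same hypothetical `service` object and placeholder resource names as in the generateContent example above):

  endpoint = service.projects().locations().endpoints().get(
      name="projects/my-project/locations/us-central1/endpoints/my-endpoint",
  ).execute()
  # Inspect a couple of the documented fields.
  print(endpoint.get("createTime"))
  print("deployed models:", len(endpoint.get("deployedModels", [])))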
Returns:
An object of the form:
{ # An Endpoint into which Models are deployed; afterwards the Endpoint is called to obtain predictions and explanations.
"clientConnectionConfig": { # Configurations (e.g. inference timeout) that are applied on your endpoints. # Configurations that are applied to the endpoint for online prediction.
"inferenceTimeout": "A String", # Customizable online prediction request timeout.
},
"createTime": "A String", # Output only. Timestamp when this Endpoint was created.
"dedicatedEndpointDns": "A String", # Output only. DNS of the dedicated endpoint. Will only be populated if dedicated_endpoint_enabled is true. Depending on the features enabled, uid might be a random number or a string. For example, if fast_tryout is enabled, uid will be fasttryout. Format: `https://{endpoint_id}.{region}-{uid}.prediction.vertexai.goog`.
"dedicatedEndpointEnabled": True or False, # If true, the endpoint will be exposed through a dedicated DNS [Endpoint.dedicated_endpoint_dns]. Your request to the dedicated DNS will be isolated from other users' traffic and will have better performance and reliability. Note: Once you enabled dedicated endpoint, you won't be able to send request to the shared DNS {region}-aiplatform.googleapis.com. The limitation will be removed soon.
"deployedModels": [ # Output only. The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.
{ # A deployment of a Model. Endpoints contain one or more DeployedModels.
"automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration.
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
"minReplicaCount": 42, # Immutable. The minimum number of replicas that will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
},
"checkpointId": "A String", # The checkpoint id of the model.
"createTime": "A String", # Output only. Timestamp when the DeployedModel was created.
"dedicatedResources": { # A description of resources that are dedicated to a DeployedModel or DeployedIndex, and that need a higher degree of manual configuration. # A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
"autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
{ # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
"metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization` * `aiplatform.googleapis.com/prediction/online/request_count`
"target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
},
],
"machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine being used.
"acceleratorCount": 42, # The number of accelerators to attach to the machine.
"acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
"machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
"reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
"key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
"reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
"values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation or reservation block.
"A String",
],
},
"tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
},
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
"minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas that will be always deployed on. This value must be greater than or equal to 1. If traffic increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
"requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial deployment/mutation is desired. If set, the deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
"spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
},
"disableContainerLogging": True or False, # For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true.
"disableExplanations": True or False, # If true, deploy the model without explainable feature, regardless the existence of Model.explanation_spec or explanation_spec.
"displayName": "A String", # The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
"enableAccessLogging": True or False, # If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option.
"explanationSpec": { # Specification of Model explanation. # Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
"metadata": { # Metadata describing the Model's input and output for explanation. # Optional. Metadata describing the Model's input and output for explanation.
"featureAttributionsSchemaUri": "A String", # Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"inputs": { # Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match with the key in instance.
"a_key": { # Metadata of the input of a feature. Fields other than InputMetadata.input_baselines are applicable only for Models that are using Vertex AI-provided images for Tensorflow.
"denseShapeTensorName": "A String", # Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"encodedBaselines": [ # A list of baselines for the encoded tensor. The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor.
"",
],
"encodedTensorName": "A String", # Encoded tensor is a transformation of the input tensor. Must be provided if choosing Integrated Gradients attribution or XRAI attribution and the input tensor is not differentiable. An encoded tensor is generated if the input tensor is encoded by a lookup table.
"encoding": "A String", # Defines how the feature is encoded into the input tensor. Defaults to IDENTITY.
"featureValueDomain": { # Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained. # The domain details of the input feature value. Like min/max, original mean or standard deviation if normalized.
"maxValue": 3.14, # The maximum permissible value for this feature.
"minValue": 3.14, # The minimum permissible value for this feature.
"originalMean": 3.14, # If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization.
"originalStddev": 3.14, # If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization.
},
"groupName": "A String", # Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in Attribution.feature_attributions, keyed by the group name.
"indexFeatureMapping": [ # A list of feature names for each index in the input tensor. Required when the input InputMetadata.encoding is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
"A String",
],
"indicesTensorName": "A String", # Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"inputBaselines": [ # Baseline inputs for this feature. If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in Attribution.feature_attributions. For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor. For custom images, the element of the baselines must be in the same format as the feature's input in the instance[]. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
"",
],
"inputTensorName": "A String", # Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow.
"modality": "A String", # Modality of the feature. Valid values are: numeric, image. Defaults to numeric.
"visualization": { # Visualization configurations for image explanation. # Visualization configurations for image explanation.
"clipPercentLowerbound": 3.14, # Excludes attributions below the specified percentile, from the highlighted areas. Defaults to 62.
"clipPercentUpperbound": 3.14, # Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
"colorMap": "A String", # The color scheme used for the highlighted areas. Defaults to PINK_GREEN for Integrated Gradients attribution, which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for XRAI attribution, which highlights the most influential regions in yellow and the least influential in blue.
"overlayType": "A String", # How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
"polarity": "A String", # Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE.
"type": "A String", # Type of the image visualization. Only applicable to Integrated Gradients attribution. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
},
},
},
"latentSpaceSource": "A String", # Name of the source to generate embeddings for example based explanations.
"outputs": { # Required. Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
"a_key": { # Metadata of the prediction output to be explained.
"displayNameMappingKey": "A String", # Specify a field name in the prediction to look for the display name. Use this if the prediction contains the display names for the outputs. The display names in the prediction must have the same shape of the outputs, so that it can be located by Attribution.output_index for a specific output.
"indexDisplayNameMapping": "", # Static mapping between the index and display name. Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sort the outputs by their values. The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The Attribution.output_display_name is populated by locating in the mapping with Attribution.output_index.
"outputTensorName": "A String", # Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow.
},
},
},
"parameters": { # Parameters to configure explaining for Model's predictions. # Required. Parameters that configure explaining of the Model's predictions.
"examples": { # Example-based explainability that returns the nearest neighbors from the provided dataset. # Example-based explanations that returns the nearest neighbors from the provided dataset.
"exampleGcsSource": { # The Cloud Storage input instances. # The Cloud Storage input instances.
"dataFormat": "A String", # The format in which instances are given, if not specified, assume it's JSONL format. Currently only JSONL format is supported.
"gcsSource": { # The Google Cloud Storage location for the input content. # The Cloud Storage location for the input instances.
"uris": [ # Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/wildcards.
"A String",
],
},
},
"nearestNeighborSearchConfig": "", # The full configuration for the generated index, the semantics are the same as metadata and should match [NearestNeighborSearchConfig](https://cloud.google.com/vertex-ai/docs/explainable-ai/configuring-explanations-example-based#nearest-neighbor-search-config).
"neighborCount": 42, # The number of neighbors to return when querying for examples.
"presets": { # Preset configuration for example-based explanations # Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
"modality": "A String", # The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
"query": "A String", # Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to `PRECISE`.
},
},
"integratedGradientsAttribution": { # An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365 # An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
},
"outputIndices": [ # If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape of the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e,g, multi-class Models that predict multiple classes).
"",
],
"sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
"pathCount": 42, # Required. The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
},
"topK": 42, # If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predicts more than one outputs (e,g, multi-class Models). When set to -1, returns explanations for all outputs.
"xraiAttribution": { # An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Supported only by image Models. # An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for XRAI with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is met within the desired error range. Valid range of its value is [1, 100], inclusively.
},
},
},
"fasterDeploymentConfig": { # Configuration for faster model deployment. # Configuration for faster model deployment.
"fastTryoutEnabled": True or False, # If true, enable fast tryout feature for this deployed model.
},
"gdcConnectedModel": "A String", # GDC pretrained / Gemini model name. The model name is a plain model name, e.g. gemini-1.5-flash-002.
"id": "A String", # Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID. This value should be 1-10 characters, and valid characters are `/[0-9]/`.
"model": "A String", # The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain version id or version alias to specify the version. Example: `projects/{project}/locations/{location}/models/{model}@2` or `projects/{project}/locations/{location}/models/{model}@golden` if no version is specified, the default version will be deployed.
"modelVersionId": "A String", # Output only. The version ID of the model that is deployed.
"privateEndpoints": { # PrivateEndpoints proto is used to provide paths for users to send requests privately. To send request via private service access, use predict_http_uri, explain_http_uri or health_http_uri. To send request via private service connect, use service_attachment. # Output only. Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
"explainHttpUri": "A String", # Output only. Http(s) path to send explain requests.
"healthHttpUri": "A String", # Output only. Http(s) path to send health check requests.
"predictHttpUri": "A String", # Output only. Http(s) path to send prediction requests.
"serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
},
"serviceAccount": "A String", # The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the `iam.serviceAccounts.actAs` permission on this service account.
"sharedResources": "A String", # The resource name of the shared DeploymentResourcePool to deploy on. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}`
"speculativeDecodingSpec": { # Configuration for Speculative Decoding. # Optional. Spec for configuring speculative decoding.
"draftModelSpeculation": { # Draft model speculation works by using the smaller model to generate candidate tokens for speculative decoding. # draft model speculation.
"draftModel": "A String", # Required. The resource name of the draft model.
},
"ngramSpeculation": { # N-Gram speculation works by trying to find matching tokens in the previous prompt sequence and use those as speculation for generating new tokens. # N-Gram speculation.
"ngramSize": 42, # The number of last N input tokens used as ngram to search/match against the previous prompt sequence. This is equal to the N in N-Gram. The default value is 3 if not specified.
},
"speculativeTokenCount": 42, # The number of speculative tokens to generate at each step.
},
"status": { # Runtime status of the deployed model. # Output only. Runtime status of the deployed model.
"availableReplicaCount": 42, # Output only. The number of available replicas of the deployed model.
"lastUpdateTime": "A String", # Output only. The time at which the status was last updated.
"message": "A String", # Output only. The latest deployed model's status message (if any).
},
"systemLabels": { # System labels to apply to Model Garden deployments. System labels are managed by Google for internal use only.
"a_key": "A String",
},
},
],
"description": "A String", # The description of the Endpoint.
"displayName": "A String", # Required. The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
"enablePrivateServiceConnect": True or False, # Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
"encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
"kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
},
"etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
"gdcConfig": { # Google Distributed Cloud (GDC) config. # Configures the Google Distributed Cloud (GDC) environment for online prediction. Only set this field when the Endpoint is to be deployed in a GDC environment.
"zone": "A String", # GDC zone. A cluster will be designated for the Vertex AI workload in this zone.
},
"genAiAdvancedFeaturesConfig": { # Configuration for GenAiAdvancedFeatures. # Optional. Configuration for GenAiAdvancedFeatures. If the endpoint is serving GenAI models, advanced features like native RAG integration can be configured. Currently, only Model Garden models are supported.
"ragConfig": { # Configuration for Retrieval Augmented Generation feature. # Configuration for Retrieval Augmented Generation feature.
"enableRag": True or False, # If true, enable Retrieval Augmented Generation in ChatCompletion request. Once enabled, the endpoint will be identified as GenAI endpoint and Arthedain router will be used.
},
},
"labels": { # The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
"a_key": "A String",
},
"modelDeploymentMonitoringJob": "A String", # Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: `projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}`
"name": "A String", # Output only. The resource name of the Endpoint.
"network": "A String", # Optional. The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks) to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where `{project}` is a project number, as in `12345`, and `{network}` is network name.
"predictRequestResponseLoggingConfig": { # Configuration for logging request-response to a BigQuery table. # Configures the request-response logging for online prediction.
"bigqueryDestination": { # The BigQuery location for the output content. # BigQuery table for logging. If only given a project, a new dataset will be created with name `logging__` where will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with name `request_response_logging`
"outputUri": "A String", # Required. BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table is created. When the full table reference is specified, the Dataset must exist and table must not exist. Accepted forms: * BigQuery path. For example: `bq://projectId` or `bq://projectId.bqDatasetId` or `bq://projectId.bqDatasetId.bqTableId`.
},
"enabled": True or False, # If logging is enabled or not.
"samplingRate": 3.14, # Percentage of requests to be logged, expressed as a fraction in range(0,1].
},
"privateServiceConnectConfig": { # Represents configuration for private service connect. # Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
"enablePrivateServiceConnect": True or False, # Required. If true, expose the IndexEndpoint via private service connect.
"projectAllowlist": [ # A list of Projects from which the forwarding rule will target the service attachment.
"A String",
],
"pscAutomationConfigs": [ # Optional. List of projects and networks where the PSC endpoints will be created. This field is used by Online Inference(Prediction) only.
{ # PSC config that is used to automatically create PSC endpoints in the user projects.
"errorMessage": "A String", # Output only. Error message if the PSC service automation failed.
"forwardingRule": "A String", # Output only. Forwarding rule created by the PSC service automation.
"ipAddress": "A String", # Output only. IP address rule created by the PSC service automation.
"network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/get): `projects/{project}/global/networks/{network}`.
"projectId": "A String", # Required. Project id used to create forwarding rule.
"state": "A String", # Output only. The state of the PSC service automation.
},
],
"serviceAttachment": "A String", # Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.
},
"satisfiesPzi": True or False, # Output only. Reserved for future use.
"satisfiesPzs": True or False, # Output only. Reserved for future use.
"trafficSplit": { # A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or map must be empty if the Endpoint is to not accept any traffic at a moment.
"a_key": 42,
},
"updateTime": "A String", # Output only. Timestamp when this Endpoint was last updated.
}</pre>
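<pre>Illustration: the `trafficSplit` map shown above must either be empty or contain values that sum to exactly 100. A minimal sketch of checking this invariant client-side, in plain Python (the DeployedModel IDs below are hypothetical placeholders, not real IDs):

# Hedged sketch: validate a trafficSplit map before sending it in an update.
traffic_split = {"1234567890": 80, "2345678901": 20}  # hypothetical DeployedModel IDs

def validate_traffic_split(split):
    # An empty map is allowed (the Endpoint accepts no traffic);
    # otherwise the percentages must add up to exactly 100.
    if split and sum(split.values()) != 100:
        raise ValueError(f"trafficSplit must sum to 100, got {sum(split.values())}")

validate_traffic_split(traffic_split)
</pre>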
</div>
<div class="method">
<code class="details" id="list">list(parent, filter=None, gdcZone=None, orderBy=None, pageSize=None, pageToken=None, readMask=None, x__xgafv=None)</code>
<pre>Lists Endpoints in a Location.
Args:
parent: string, Required. The resource name of the Location from which to list the Endpoints. Format: `projects/{project}/locations/{location}` (required)
filter: string, Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported. * `endpoint` supports `=` and `!=`. `endpoint` represents the Endpoint ID, i.e. the last segment of the Endpoint's resource name. * `display_name` supports `=` and `!=`. * `labels` supports general map functions that is: * `labels.key=value` - key:value equality * `labels.key:*` or `labels:key` - key existence * A key including a space must be quoted. `labels."a key"`. * `base_model_name` only supports `=`. Some examples: * `endpoint=1` * `displayName="myDisplayName"` * `labels.myKey="myValue"` * `baseModelName="text-bison"`
gdcZone: string, Optional. Configures the Google Distributed Cloud (GDC) environment for online prediction. Only set this field when the Endpoint is to be deployed in a GDC environment.
orderBy: string, A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields: * `display_name` * `create_time` * `update_time` Example: `display_name, create_time desc`.
pageSize: integer, Optional. The standard list page size.
pageToken: string, Optional. The standard list page token. Typically obtained via ListEndpointsResponse.next_page_token of the previous EndpointService.ListEndpoints call.
readMask: string, Optional. Mask specifying which fields to read.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for EndpointService.ListEndpoints.
"endpoints": [ # List of Endpoints in the requested page.
{ # Models are deployed into it, and afterwards the Endpoint is called to obtain predictions and explanations.
"clientConnectionConfig": { # Configurations (e.g. inference timeout) that are applied on your endpoints. # Configurations that are applied to the endpoint for online prediction.
"inferenceTimeout": "A String", # Customizable online prediction request timeout.
},
"createTime": "A String", # Output only. Timestamp when this Endpoint was created.
"dedicatedEndpointDns": "A String", # Output only. DNS of the dedicated endpoint. Will only be populated if dedicated_endpoint_enabled is true. Depending on the features enabled, uid might be a random number or a string. For example, if fast_tryout is enabled, uid will be fasttryout. Format: `https://{endpoint_id}.{region}-{uid}.prediction.vertexai.goog`.
"dedicatedEndpointEnabled": True or False, # If true, the endpoint will be exposed through a dedicated DNS [Endpoint.dedicated_endpoint_dns]. Your request to the dedicated DNS will be isolated from other users' traffic and will have better performance and reliability. Note: Once you enabled dedicated endpoint, you won't be able to send request to the shared DNS {region}-aiplatform.googleapis.com. The limitation will be removed soon.
"deployedModels": [ # Output only. The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.
{ # A deployment of a Model. Endpoints contain one or more DeployedModels.
"automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration.
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
"minReplicaCount": 42, # Immutable. The minimum number of replicas that will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
},
"checkpointId": "A String", # The checkpoint id of the model.
"createTime": "A String", # Output only. Timestamp when the DeployedModel was created.
"dedicatedResources": { # A description of resources that are dedicated to a DeployedModel or DeployedIndex, and that need a higher degree of manual configuration. # A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
"autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
{ # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
"metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization` * `aiplatform.googleapis.com/prediction/online/request_count`
"target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
},
],
"machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine being used.
"acceleratorCount": 42, # The number of accelerators to attach to the machine.
"acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
"machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
"reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
"key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
"reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
"values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation or reservation block.
"A String",
],
},
"tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
},
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
"minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas that will be always deployed on. This value must be greater than or equal to 1. If traffic increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
"requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial deployment/mutation is desired. If set, the deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
"spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
},
"disableContainerLogging": True or False, # For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true.
"disableExplanations": True or False, # If true, deploy the model without explainable feature, regardless the existence of Model.explanation_spec or explanation_spec.
"displayName": "A String", # The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
"enableAccessLogging": True or False, # If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option.
"explanationSpec": { # Specification of Model explanation. # Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
"metadata": { # Metadata describing the Model's input and output for explanation. # Optional. Metadata describing the Model's input and output for explanation.
"featureAttributionsSchemaUri": "A String", # Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"inputs": { # Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match with the key in instance.
"a_key": { # Metadata of the input of a feature. Fields other than InputMetadata.input_baselines are applicable only for Models that are using Vertex AI-provided images for Tensorflow.
"denseShapeTensorName": "A String", # Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"encodedBaselines": [ # A list of baselines for the encoded tensor. The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor.
"",
],
"encodedTensorName": "A String", # Encoded tensor is a transformation of the input tensor. Must be provided if choosing Integrated Gradients attribution or XRAI attribution and the input tensor is not differentiable. An encoded tensor is generated if the input tensor is encoded by a lookup table.
"encoding": "A String", # Defines how the feature is encoded into the input tensor. Defaults to IDENTITY.
"featureValueDomain": { # Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained. # The domain details of the input feature value. Like min/max, original mean or standard deviation if normalized.
"maxValue": 3.14, # The maximum permissible value for this feature.
"minValue": 3.14, # The minimum permissible value for this feature.
"originalMean": 3.14, # If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization.
"originalStddev": 3.14, # If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization.
},
"groupName": "A String", # Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in Attribution.feature_attributions, keyed by the group name.
"indexFeatureMapping": [ # A list of feature names for each index in the input tensor. Required when the input InputMetadata.encoding is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
"A String",
],
"indicesTensorName": "A String", # Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"inputBaselines": [ # Baseline inputs for this feature. If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in Attribution.feature_attributions. For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor. For custom images, the element of the baselines must be in the same format as the feature's input in the instance[]. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
"",
],
"inputTensorName": "A String", # Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow.
"modality": "A String", # Modality of the feature. Valid values are: numeric, image. Defaults to numeric.
"visualization": { # Visualization configurations for image explanation. # Visualization configurations for image explanation.
"clipPercentLowerbound": 3.14, # Excludes attributions below the specified percentile, from the highlighted areas. Defaults to 62.
"clipPercentUpperbound": 3.14, # Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
"colorMap": "A String", # The color scheme used for the highlighted areas. Defaults to PINK_GREEN for Integrated Gradients attribution, which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for XRAI attribution, which highlights the most influential regions in yellow and the least influential in blue.
"overlayType": "A String", # How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
"polarity": "A String", # Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE.
"type": "A String", # Type of the image visualization. Only applicable to Integrated Gradients attribution. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
},
},
},
"latentSpaceSource": "A String", # Name of the source to generate embeddings for example based explanations.
"outputs": { # Required. Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
"a_key": { # Metadata of the prediction output to be explained.
"displayNameMappingKey": "A String", # Specify a field name in the prediction to look for the display name. Use this if the prediction contains the display names for the outputs. The display names in the prediction must have the same shape of the outputs, so that it can be located by Attribution.output_index for a specific output.
"indexDisplayNameMapping": "", # Static mapping between the index and display name. Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sort the outputs by their values. The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The Attribution.output_display_name is populated by locating in the mapping with Attribution.output_index.
"outputTensorName": "A String", # Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow.
},
},
},
"parameters": { # Parameters to configure explaining for Model's predictions. # Required. Parameters that configure explaining of the Model's predictions.
"examples": { # Example-based explainability that returns the nearest neighbors from the provided dataset. # Example-based explanations that returns the nearest neighbors from the provided dataset.
"exampleGcsSource": { # The Cloud Storage input instances. # The Cloud Storage input instances.
"dataFormat": "A String", # The format in which instances are given, if not specified, assume it's JSONL format. Currently only JSONL format is supported.
"gcsSource": { # The Google Cloud Storage location for the input content. # The Cloud Storage location for the input instances.
"uris": [ # Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/wildcards.
"A String",
],
},
},
"nearestNeighborSearchConfig": "", # The full configuration for the generated index, the semantics are the same as metadata and should match [NearestNeighborSearchConfig](https://cloud.google.com/vertex-ai/docs/explainable-ai/configuring-explanations-example-based#nearest-neighbor-search-config).
"neighborCount": 42, # The number of neighbors to return when querying for examples.
"presets": { # Preset configuration for example-based explanations # Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
"modality": "A String", # The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
"query": "A String", # Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to `PRECISE`.
},
},
"integratedGradientsAttribution": { # An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365 # An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
},
"outputIndices": [ # If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape of the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e,g, multi-class Models that predict multiple classes).
"",
],
"sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
"pathCount": 42, # Required. The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
},
"topK": 42, # If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predicts more than one outputs (e,g, multi-class Models). When set to -1, returns explanations for all outputs.
"xraiAttribution": { # An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Supported only by image Models. # An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for XRAI with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is met within the desired error range. Valid range of its value is [1, 100], inclusively.
},
},
},
"fasterDeploymentConfig": { # Configuration for faster model deployment. # Configuration for faster model deployment.
"fastTryoutEnabled": True or False, # If true, enable fast tryout feature for this deployed model.
},
"gdcConnectedModel": "A String", # GDC pretrained / Gemini model name. The model name is a plain model name, e.g. gemini-1.5-flash-002.
"id": "A String", # Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID. This value should be 1-10 characters, and valid characters are `/[0-9]/`.
"model": "A String", # The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain version id or version alias to specify the version. Example: `projects/{project}/locations/{location}/models/{model}@2` or `projects/{project}/locations/{location}/models/{model}@golden` if no version is specified, the default version will be deployed.
"modelVersionId": "A String", # Output only. The version ID of the model that is deployed.
"privateEndpoints": { # PrivateEndpoints proto is used to provide paths for users to send requests privately. To send request via private service access, use predict_http_uri, explain_http_uri or health_http_uri. To send request via private service connect, use service_attachment. # Output only. Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
"explainHttpUri": "A String", # Output only. Http(s) path to send explain requests.
"healthHttpUri": "A String", # Output only. Http(s) path to send health check requests.
"predictHttpUri": "A String", # Output only. Http(s) path to send prediction requests.
"serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
},
"serviceAccount": "A String", # The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the `iam.serviceAccounts.actAs` permission on this service account.
"sharedResources": "A String", # The resource name of the shared DeploymentResourcePool to deploy on. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}`
"speculativeDecodingSpec": { # Configuration for Speculative Decoding. # Optional. Spec for configuring speculative decoding.
"draftModelSpeculation": { # Draft model speculation works by using the smaller model to generate candidate tokens for speculative decoding. # draft model speculation.
"draftModel": "A String", # Required. The resource name of the draft model.
},
"ngramSpeculation": { # N-Gram speculation works by trying to find matching tokens in the previous prompt sequence and use those as speculation for generating new tokens. # N-Gram speculation.
"ngramSize": 42, # The number of last N input tokens used as ngram to search/match against the previous prompt sequence. This is equal to the N in N-Gram. The default value is 3 if not specified.
},
"speculativeTokenCount": 42, # The number of speculative tokens to generate at each step.
},
"status": { # Runtime status of the deployed model. # Output only. Runtime status of the deployed model.
"availableReplicaCount": 42, # Output only. The number of available replicas of the deployed model.
"lastUpdateTime": "A String", # Output only. The time at which the status was last updated.
"message": "A String", # Output only. The latest deployed model's status message (if any).
},
"systemLabels": { # System labels to apply to Model Garden deployments. System labels are managed by Google for internal use only.
"a_key": "A String",
},
},
],
"description": "A String", # The description of the Endpoint.
"displayName": "A String", # Required. The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
"enablePrivateServiceConnect": True or False, # Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
"encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
"kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
},
"etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
"gdcConfig": { # Google Distributed Cloud (GDC) config. # Configures the Google Distributed Cloud (GDC) environment for online prediction. Only set this field when the Endpoint is to be deployed in a GDC environment.
"zone": "A String", # GDC zone. A cluster will be designated for the Vertex AI workload in this zone.
},
"genAiAdvancedFeaturesConfig": { # Configuration for GenAiAdvancedFeatures. # Optional. Configuration for GenAiAdvancedFeatures. If the endpoint is serving GenAI models, advanced features like native RAG integration can be configured. Currently, only Model Garden models are supported.
"ragConfig": { # Configuration for Retrieval Augmented Generation feature. # Configuration for Retrieval Augmented Generation feature.
"enableRag": True or False, # If true, enable Retrieval Augmented Generation in ChatCompletion request. Once enabled, the endpoint will be identified as GenAI endpoint and Arthedain router will be used.
},
},
"labels": { # The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
"a_key": "A String",
},
"modelDeploymentMonitoringJob": "A String", # Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: `projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}`
"name": "A String", # Output only. The resource name of the Endpoint.
"network": "A String", # Optional. The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks) to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where `{project}` is a project number, as in `12345`, and `{network}` is network name.
"predictRequestResponseLoggingConfig": { # Configuration for logging request-response to a BigQuery table. # Configures the request-response logging for online prediction.
"bigqueryDestination": { # The BigQuery location for the output content. # BigQuery table for logging. If only given a project, a new dataset will be created with name `logging__` where will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with name `request_response_logging`
"outputUri": "A String", # Required. BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table is created. When the full table reference is specified, the Dataset must exist and table must not exist. Accepted forms: * BigQuery path. For example: `bq://projectId` or `bq://projectId.bqDatasetId` or `bq://projectId.bqDatasetId.bqTableId`.
},
"enabled": True or False, # If logging is enabled or not.
"samplingRate": 3.14, # Percentage of requests to be logged, expressed as a fraction in range(0,1].
},
"privateServiceConnectConfig": { # Represents configuration for private service connect. # Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
"enablePrivateServiceConnect": True or False, # Required. If true, expose the IndexEndpoint via private service connect.
"projectAllowlist": [ # A list of Projects from which the forwarding rule will target the service attachment.
"A String",
],
"pscAutomationConfigs": [ # Optional. List of projects and networks where the PSC endpoints will be created. This field is used by Online Inference(Prediction) only.
{ # PSC config that is used to automatically create PSC endpoints in the user projects.
"errorMessage": "A String", # Output only. Error message if the PSC service automation failed.
"forwardingRule": "A String", # Output only. Forwarding rule created by the PSC service automation.
"ipAddress": "A String", # Output only. IP address rule created by the PSC service automation.
"network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/get): `projects/{project}/global/networks/{network}`.
"projectId": "A String", # Required. Project id used to create forwarding rule.
"state": "A String", # Output only. The state of the PSC service automation.
},
],
"serviceAttachment": "A String", # Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.
},
"satisfiesPzi": True or False, # Output only. Reserved for future use.
"satisfiesPzs": True or False, # Output only. Reserved for future use.
"trafficSplit": { # A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or map must be empty if the Endpoint is to not accept any traffic at a moment.
"a_key": 42,
},
"updateTime": "A String", # Output only. Timestamp when this Endpoint was last updated.
},
],
"nextPageToken": "A String", # A token to retrieve the next page of results. Pass to ListEndpointsRequest.page_token to obtain that page.
}</pre>
</div>
<div class="method">
<code class="details" id="list_next">list_next()</code>
<pre>Retrieves the next page of results.
Args:
previous_request: The request for the previous page. (required)
previous_response: The response from the request for the previous page. (required)
Returns:
A request object that you can call 'execute()' on to request the next
page. Returns None if there are no more items in the collection.
</pre>
</div>
<div class="method">
<code class="details" id="mutateDeployedModel">mutateDeployedModel(endpoint, body=None, x__xgafv=None)</code>
<pre>Updates an existing deployed model. Updatable fields include `min_replica_count`, `max_replica_count`, `required_replica_count`, `autoscaling_metric_specs`, `disable_container_logging` (v1 only), and `enable_container_logging` (v1beta1 only).
Args:
endpoint: string, Required. The name of the Endpoint resource into which to mutate a DeployedModel. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for EndpointService.MutateDeployedModel.
"deployedModel": { # A deployment of a Model. Endpoints contain one or more DeployedModels. # Required. The DeployedModel to be mutated within the Endpoint. Only the following fields can be mutated: * `min_replica_count` in either DedicatedResources or AutomaticResources * `max_replica_count` in either DedicatedResources or AutomaticResources * `required_replica_count` in DedicatedResources * autoscaling_metric_specs * `disable_container_logging` (v1 only) * `enable_container_logging` (v1beta1 only)
"automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration.
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
"minReplicaCount": 42, # Immutable. The minimum number of replicas that will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
},
"checkpointId": "A String", # The checkpoint id of the model.
"createTime": "A String", # Output only. Timestamp when the DeployedModel was created.
"dedicatedResources": { # A description of resources that are dedicated to a DeployedModel or DeployedIndex, and that need a higher degree of manual configuration. # A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
"autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
{ # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
"metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization` * `aiplatform.googleapis.com/prediction/online/request_count`
"target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
},
],
"machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine being used.
"acceleratorCount": 42, # The number of accelerators to attach to the machine.
"acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
"machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
"reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
"key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
"reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
"values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation or reservation block.
"A String",
],
},
"tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
},
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
"minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas that will be always deployed on. This value must be greater than or equal to 1. If traffic increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
"requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial deployment/mutation is desired. If set, the deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
"spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
},
"disableContainerLogging": True or False, # For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true.
"disableExplanations": True or False, # If true, deploy the model without explainable feature, regardless the existence of Model.explanation_spec or explanation_spec.
"displayName": "A String", # The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
"enableAccessLogging": True or False, # If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option.
"explanationSpec": { # Specification of Model explanation. # Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
"metadata": { # Metadata describing the Model's input and output for explanation. # Optional. Metadata describing the Model's input and output for explanation.
"featureAttributionsSchemaUri": "A String", # Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"inputs": { # Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match with the key in instance.
"a_key": { # Metadata of the input of a feature. Fields other than InputMetadata.input_baselines are applicable only for Models that are using Vertex AI-provided images for Tensorflow.
"denseShapeTensorName": "A String", # Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"encodedBaselines": [ # A list of baselines for the encoded tensor. The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor.
"",
],
"encodedTensorName": "A String", # Encoded tensor is a transformation of the input tensor. Must be provided if choosing Integrated Gradients attribution or XRAI attribution and the input tensor is not differentiable. An encoded tensor is generated if the input tensor is encoded by a lookup table.
"encoding": "A String", # Defines how the feature is encoded into the input tensor. Defaults to IDENTITY.
"featureValueDomain": { # Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained. # The domain details of the input feature value. Like min/max, original mean or standard deviation if normalized.
"maxValue": 3.14, # The maximum permissible value for this feature.
"minValue": 3.14, # The minimum permissible value for this feature.
"originalMean": 3.14, # If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization.
"originalStddev": 3.14, # If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization.
},
"groupName": "A String", # Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in Attribution.feature_attributions, keyed by the group name.
"indexFeatureMapping": [ # A list of feature names for each index in the input tensor. Required when the input InputMetadata.encoding is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
"A String",
],
"indicesTensorName": "A String", # Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"inputBaselines": [ # Baseline inputs for this feature. If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in Attribution.feature_attributions. For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor. For custom images, the element of the baselines must be in the same format as the feature's input in the instance[]. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
"",
],
"inputTensorName": "A String", # Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow.
"modality": "A String", # Modality of the feature. Valid values are: numeric, image. Defaults to numeric.
"visualization": { # Visualization configurations for image explanation. # Visualization configurations for image explanation.
"clipPercentLowerbound": 3.14, # Excludes attributions below the specified percentile, from the highlighted areas. Defaults to 62.
"clipPercentUpperbound": 3.14, # Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
"colorMap": "A String", # The color scheme used for the highlighted areas. Defaults to PINK_GREEN for Integrated Gradients attribution, which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for XRAI attribution, which highlights the most influential regions in yellow and the least influential in blue.
"overlayType": "A String", # How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
"polarity": "A String", # Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE.
"type": "A String", # Type of the image visualization. Only applicable to Integrated Gradients attribution. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
},
},
},
"latentSpaceSource": "A String", # Name of the source to generate embeddings for example based explanations.
"outputs": { # Required. Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
"a_key": { # Metadata of the prediction output to be explained.
"displayNameMappingKey": "A String", # Specify a field name in the prediction to look for the display name. Use this if the prediction contains the display names for the outputs. The display names in the prediction must have the same shape of the outputs, so that it can be located by Attribution.output_index for a specific output.
"indexDisplayNameMapping": "", # Static mapping between the index and display name. Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sort the outputs by their values. The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The Attribution.output_display_name is populated by locating in the mapping with Attribution.output_index.
"outputTensorName": "A String", # Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow.
},
},
},
"parameters": { # Parameters to configure explaining for Model's predictions. # Required. Parameters that configure explaining of the Model's predictions.
"examples": { # Example-based explainability that returns the nearest neighbors from the provided dataset. # Example-based explanations that returns the nearest neighbors from the provided dataset.
"exampleGcsSource": { # The Cloud Storage input instances. # The Cloud Storage input instances.
"dataFormat": "A String", # The format in which instances are given, if not specified, assume it's JSONL format. Currently only JSONL format is supported.
"gcsSource": { # The Google Cloud Storage location for the input content. # The Cloud Storage location for the input instances.
"uris": [ # Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/wildcards.
"A String",
],
},
},
"nearestNeighborSearchConfig": "", # The full configuration for the generated index, the semantics are the same as metadata and should match [NearestNeighborSearchConfig](https://cloud.google.com/vertex-ai/docs/explainable-ai/configuring-explanations-example-based#nearest-neighbor-search-config).
"neighborCount": 42, # The number of neighbors to return when querying for examples.
"presets": { # Preset configuration for example-based explanations # Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
"modality": "A String", # The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
"query": "A String", # Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to `PRECISE`.
},
},
"integratedGradientsAttribution": { # An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365 # An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
},
"outputIndices": [ # If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape of the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e,g, multi-class Models that predict multiple classes).
"",
],
"sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
"pathCount": 42, # Required. The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
},
"topK": 42, # If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predicts more than one outputs (e,g, multi-class Models). When set to -1, returns explanations for all outputs.
"xraiAttribution": { # An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Supported only by image Models. # An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for XRAI with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is met within the desired error range. Valid range of its value is [1, 100], inclusively.
},
},
},
"fasterDeploymentConfig": { # Configuration for faster model deployment. # Configuration for faster model deployment.
"fastTryoutEnabled": True or False, # If true, enable fast tryout feature for this deployed model.
},
"gdcConnectedModel": "A String", # GDC pretrained / Gemini model name. The model name is a plain model name, e.g. gemini-1.5-flash-002.
"id": "A String", # Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID. This value should be 1-10 characters, and valid characters are `/[0-9]/`.
"model": "A String", # The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain version id or version alias to specify the version. Example: `projects/{project}/locations/{location}/models/{model}@2` or `projects/{project}/locations/{location}/models/{model}@golden` if no version is specified, the default version will be deployed.
"modelVersionId": "A String", # Output only. The version ID of the model that is deployed.
"privateEndpoints": { # PrivateEndpoints proto is used to provide paths for users to send requests privately. To send request via private service access, use predict_http_uri, explain_http_uri or health_http_uri. To send request via private service connect, use service_attachment. # Output only. Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
"explainHttpUri": "A String", # Output only. Http(s) path to send explain requests.
"healthHttpUri": "A String", # Output only. Http(s) path to send health check requests.
"predictHttpUri": "A String", # Output only. Http(s) path to send prediction requests.
"serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
},
"serviceAccount": "A String", # The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the `iam.serviceAccounts.actAs` permission on this service account.
"sharedResources": "A String", # The resource name of the shared DeploymentResourcePool to deploy on. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}`
"speculativeDecodingSpec": { # Configuration for Speculative Decoding. # Optional. Spec for configuring speculative decoding.
"draftModelSpeculation": { # Draft model speculation works by using the smaller model to generate candidate tokens for speculative decoding. # draft model speculation.
"draftModel": "A String", # Required. The resource name of the draft model.
},
"ngramSpeculation": { # N-Gram speculation works by trying to find matching tokens in the previous prompt sequence and use those as speculation for generating new tokens. # N-Gram speculation.
"ngramSize": 42, # The number of last N input tokens used as ngram to search/match against the previous prompt sequence. This is equal to the N in N-Gram. The default value is 3 if not specified.
},
"speculativeTokenCount": 42, # The number of speculative tokens to generate at each step.
},
"status": { # Runtime status of the deployed model. # Output only. Runtime status of the deployed model.
"availableReplicaCount": 42, # Output only. The number of available replicas of the deployed model.
"lastUpdateTime": "A String", # Output only. The time at which the status was last updated.
"message": "A String", # Output only. The latest deployed model's status message (if any).
},
"systemLabels": { # System labels to apply to Model Garden deployments. System labels are managed by Google for internal use only.
"a_key": "A String",
},
},
"updateMask": "A String", # Required. The update mask applies to the resource. See google.protobuf.FieldMask.
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
"done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
"response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
}</pre>
</div>
<div class="method">
<code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code>
<pre>Updates an Endpoint.
Args:
name: string, Output only. The resource name of the Endpoint. (required)
body: object, The request body.
The object takes the form of:
{ # Models are deployed into it, and afterwards Endpoint is called to obtain predictions and explanations.
"clientConnectionConfig": { # Configurations (e.g. inference timeout) that are applied on your endpoints. # Configurations that are applied to the endpoint for online prediction.
"inferenceTimeout": "A String", # Customizable online prediction request timeout.
},
"createTime": "A String", # Output only. Timestamp when this Endpoint was created.
"dedicatedEndpointDns": "A String", # Output only. DNS of the dedicated endpoint. Will only be populated if dedicated_endpoint_enabled is true. Depending on the features enabled, uid might be a random number or a string. For example, if fast_tryout is enabled, uid will be fasttryout. Format: `https://{endpoint_id}.{region}-{uid}.prediction.vertexai.goog`.
"dedicatedEndpointEnabled": True or False, # If true, the endpoint will be exposed through a dedicated DNS [Endpoint.dedicated_endpoint_dns]. Your request to the dedicated DNS will be isolated from other users' traffic and will have better performance and reliability. Note: Once you enabled dedicated endpoint, you won't be able to send request to the shared DNS {region}-aiplatform.googleapis.com. The limitation will be removed soon.
"deployedModels": [ # Output only. The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.
{ # A deployment of a Model. Endpoints contain one or more DeployedModels.
"automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration.
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
"minReplicaCount": 42, # Immutable. The minimum number of replicas that will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
},
"checkpointId": "A String", # The checkpoint id of the model.
"createTime": "A String", # Output only. Timestamp when the DeployedModel was created.
"dedicatedResources": { # A description of resources that are dedicated to a DeployedModel or DeployedIndex, and that need a higher degree of manual configuration. # A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
"autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
{ # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
"metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization` * `aiplatform.googleapis.com/prediction/online/request_count`
"target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
},
],
"machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine being used.
"acceleratorCount": 42, # The number of accelerators to attach to the machine.
"acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
"machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
"reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
"key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
"reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
"values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation or reservation block.
"A String",
],
},
"tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
},
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
"minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas that will be always deployed on. This value must be greater than or equal to 1. If traffic increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
"requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial deployment/mutation is desired. If set, the deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
"spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
},
"disableContainerLogging": True or False, # For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true.
"disableExplanations": True or False, # If true, deploy the model without explainable feature, regardless the existence of Model.explanation_spec or explanation_spec.
"displayName": "A String", # The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
"enableAccessLogging": True or False, # If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option.
"explanationSpec": { # Specification of Model explanation. # Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
"metadata": { # Metadata describing the Model's input and output for explanation. # Optional. Metadata describing the Model's input and output for explanation.
"featureAttributionsSchemaUri": "A String", # Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"inputs": { # Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match with the key in instance.
"a_key": { # Metadata of the input of a feature. Fields other than InputMetadata.input_baselines are applicable only for Models that are using Vertex AI-provided images for Tensorflow.
"denseShapeTensorName": "A String", # Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"encodedBaselines": [ # A list of baselines for the encoded tensor. The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor.
"",
],
"encodedTensorName": "A String", # Encoded tensor is a transformation of the input tensor. Must be provided if choosing Integrated Gradients attribution or XRAI attribution and the input tensor is not differentiable. An encoded tensor is generated if the input tensor is encoded by a lookup table.
"encoding": "A String", # Defines how the feature is encoded into the input tensor. Defaults to IDENTITY.
"featureValueDomain": { # Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained. # The domain details of the input feature value. Like min/max, original mean or standard deviation if normalized.
"maxValue": 3.14, # The maximum permissible value for this feature.
"minValue": 3.14, # The minimum permissible value for this feature.
"originalMean": 3.14, # If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization.
"originalStddev": 3.14, # If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization.
},
"groupName": "A String", # Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in Attribution.feature_attributions, keyed by the group name.
"indexFeatureMapping": [ # A list of feature names for each index in the input tensor. Required when the input InputMetadata.encoding is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
"A String",
],
"indicesTensorName": "A String", # Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"inputBaselines": [ # Baseline inputs for this feature. If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in Attribution.feature_attributions. For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor. For custom images, the element of the baselines must be in the same format as the feature's input in the instance[]. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
"",
],
"inputTensorName": "A String", # Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow.
"modality": "A String", # Modality of the feature. Valid values are: numeric, image. Defaults to numeric.
"visualization": { # Visualization configurations for image explanation. # Visualization configurations for image explanation.
"clipPercentLowerbound": 3.14, # Excludes attributions below the specified percentile, from the highlighted areas. Defaults to 62.
"clipPercentUpperbound": 3.14, # Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
"colorMap": "A String", # The color scheme used for the highlighted areas. Defaults to PINK_GREEN for Integrated Gradients attribution, which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for XRAI attribution, which highlights the most influential regions in yellow and the least influential in blue.
"overlayType": "A String", # How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
"polarity": "A String", # Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE.
"type": "A String", # Type of the image visualization. Only applicable to Integrated Gradients attribution. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
},
},
},
"latentSpaceSource": "A String", # Name of the source to generate embeddings for example based explanations.
"outputs": { # Required. Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
"a_key": { # Metadata of the prediction output to be explained.
"displayNameMappingKey": "A String", # Specify a field name in the prediction to look for the display name. Use this if the prediction contains the display names for the outputs. The display names in the prediction must have the same shape of the outputs, so that it can be located by Attribution.output_index for a specific output.
"indexDisplayNameMapping": "", # Static mapping between the index and display name. Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sort the outputs by their values. The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The Attribution.output_display_name is populated by locating in the mapping with Attribution.output_index.
"outputTensorName": "A String", # Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow.
},
},
},
"parameters": { # Parameters to configure explaining for Model's predictions. # Required. Parameters that configure explaining of the Model's predictions.
"examples": { # Example-based explainability that returns the nearest neighbors from the provided dataset. # Example-based explanations that returns the nearest neighbors from the provided dataset.
"exampleGcsSource": { # The Cloud Storage input instances. # The Cloud Storage input instances.
"dataFormat": "A String", # The format in which instances are given, if not specified, assume it's JSONL format. Currently only JSONL format is supported.
"gcsSource": { # The Google Cloud Storage location for the input content. # The Cloud Storage location for the input instances.
"uris": [ # Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/wildcards.
"A String",
],
},
},
"nearestNeighborSearchConfig": "", # The full configuration for the generated index, the semantics are the same as metadata and should match [NearestNeighborSearchConfig](https://cloud.google.com/vertex-ai/docs/explainable-ai/configuring-explanations-example-based#nearest-neighbor-search-config).
"neighborCount": 42, # The number of neighbors to return when querying for examples.
"presets": { # Preset configuration for example-based explanations # Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
"modality": "A String", # The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
"query": "A String", # Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to `PRECISE`.
},
},
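# For illustration, a minimal example-based configuration might look like the
# following sketch (the bucket path and values are hypothetical):
#   "examples": {
#     "exampleGcsSource": {"gcsSource": {"uris": ["gs://my-bucket/instances/*.jsonl"]}},
#     "neighborCount": 10,
#     "presets": {"modality": "IMAGE", "query": "FAST"},
#   },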
"integratedGradientsAttribution": { # An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365 # An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
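# For illustration, per-feature noise for a single input feature might be
# configured as follows (the feature name is hypothetical and must match a key
# defined in the explanation metadata inputs):
#   "featureNoiseSigma": {"noiseSigma": [{"name": "my_feature", "sigma": 0.1}]},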
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
},
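# For illustration, an Integrated Gradients configuration with SmoothGrad might
# look like the following sketch (the values are hypothetical starting points):
#   "integratedGradientsAttribution": {
#     "stepCount": 50,
#     "smoothGradConfig": {"noiseSigma": 0.1, "noisySampleCount": 25},
#   },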
"outputIndices": [ # If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape of the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e,g, multi-class Models that predict multiple classes).
"",
],
"sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
"pathCount": 42, # Required. The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
},
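# For illustration: "sampledShapleyAttribution": {"pathCount": 10} (a
# hypothetical value within the [1, 50] range).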
"topK": 42, # If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predicts more than one outputs (e,g, multi-class Models). When set to -1, returns explanations for all outputs.
"xraiAttribution": { # An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Supported only by image Models. # An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for XRAI with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is met within the desired error range. Valid range of its value is [1, 100], inclusively.
},
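# For illustration, a minimal XRAI configuration might be:
#   "xraiAttribution": {"stepCount": 50},
# where 50 is the suggested starting step count, not a required value.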
},
},
"fasterDeploymentConfig": { # Configuration for faster model deployment. # Configuration for faster model deployment.
"fastTryoutEnabled": True or False, # If true, enable fast tryout feature for this deployed model.
},
"gdcConnectedModel": "A String", # GDC pretrained / Gemini model name. The model name is a plain model name, e.g. gemini-1.5-flash-002.
"id": "A String", # Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID. This value should be 1-10 characters, and valid characters are `/[0-9]/`.
"model": "A String", # The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain version id or version alias to specify the version. Example: `projects/{project}/locations/{location}/models/{model}@2` or `projects/{project}/locations/{location}/models/{model}@golden` if no version is specified, the default version will be deployed.
"modelVersionId": "A String", # Output only. The version ID of the model that is deployed.
"privateEndpoints": { # PrivateEndpoints proto is used to provide paths for users to send requests privately. To send request via private service access, use predict_http_uri, explain_http_uri or health_http_uri. To send request via private service connect, use service_attachment. # Output only. Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
"explainHttpUri": "A String", # Output only. Http(s) path to send explain requests.
"healthHttpUri": "A String", # Output only. Http(s) path to send health check requests.
"predictHttpUri": "A String", # Output only. Http(s) path to send prediction requests.
"serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
},
"serviceAccount": "A String", # The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the `iam.serviceAccounts.actAs` permission on this service account.
"sharedResources": "A String", # The resource name of the shared DeploymentResourcePool to deploy on. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}`
"speculativeDecodingSpec": { # Configuration for Speculative Decoding. # Optional. Spec for configuring speculative decoding.
"draftModelSpeculation": { # Draft model speculation works by using the smaller model to generate candidate tokens for speculative decoding. # draft model speculation.
"draftModel": "A String", # Required. The resource name of the draft model.
},
"ngramSpeculation": { # N-Gram speculation works by trying to find matching tokens in the previous prompt sequence and use those as speculation for generating new tokens. # N-Gram speculation.
"ngramSize": 42, # The number of last N input tokens used as ngram to search/match against the previous prompt sequence. This is equal to the N in N-Gram. The default value is 3 if not specified.
},
"speculativeTokenCount": 42, # The number of speculative tokens to generate at each step.
},
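# For illustration, draft-model speculation generating five tokens per step
# might look like the following sketch (the draft model resource name is
# hypothetical):
#   "speculativeDecodingSpec": {
#     "draftModelSpeculation": {"draftModel": "projects/my-project/locations/us-central1/models/my-draft-model"},
#     "speculativeTokenCount": 5,
#   },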
"status": { # Runtime status of the deployed model. # Output only. Runtime status of the deployed model.
"availableReplicaCount": 42, # Output only. The number of available replicas of the deployed model.
"lastUpdateTime": "A String", # Output only. The time at which the status was last updated.
"message": "A String", # Output only. The latest deployed model's status message (if any).
},
"systemLabels": { # System labels to apply to Model Garden deployments. System labels are managed by Google for internal use only.
"a_key": "A String",
},
},
],
"description": "A String", # The description of the Endpoint.
"displayName": "A String", # Required. The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
"enablePrivateServiceConnect": True or False, # Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
"encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
"kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
},
"etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
"gdcConfig": { # Google Distributed Cloud (GDC) config. # Configures the Google Distributed Cloud (GDC) environment for online prediction. Only set this field when the Endpoint is to be deployed in a GDC environment.
"zone": "A String", # GDC zone. A cluster will be designated for the Vertex AI workload in this zone.
},
"genAiAdvancedFeaturesConfig": { # Configuration for GenAiAdvancedFeatures. # Optional. Configuration for GenAiAdvancedFeatures. If the endpoint is serving GenAI models, advanced features like native RAG integration can be configured. Currently, only Model Garden models are supported.
"ragConfig": { # Configuration for Retrieval Augmented Generation feature. # Configuration for Retrieval Augmented Generation feature.
"enableRag": True or False, # If true, enable Retrieval Augmented Generation in ChatCompletion request. Once enabled, the endpoint will be identified as GenAI endpoint and Arthedain router will be used.
},
},
"labels": { # The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
"a_key": "A String",
},
"modelDeploymentMonitoringJob": "A String", # Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: `projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}`
"name": "A String", # Output only. The resource name of the Endpoint.
"network": "A String", # Optional. The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks) to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where `{project}` is a project number, as in `12345`, and `{network}` is network name.
"predictRequestResponseLoggingConfig": { # Configuration for logging request-response to a BigQuery table. # Configures the request-response logging for online prediction.
"bigqueryDestination": { # The BigQuery location for the output content. # BigQuery table for logging. If only given a project, a new dataset will be created with name `logging__` where will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with name `request_response_logging`
"outputUri": "A String", # Required. BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table is created. When the full table reference is specified, the Dataset must exist and table must not exist. Accepted forms: * BigQuery path. For example: `bq://projectId` or `bq://projectId.bqDatasetId` or `bq://projectId.bqDatasetId.bqTableId`.
},
"enabled": True or False, # If logging is enabled or not.
"samplingRate": 3.14, # Percentage of requests to be logged, expressed as a fraction in range(0,1].
},
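# For illustration, logging 10% of requests to an existing BigQuery dataset
# might look like the following sketch (the project and dataset IDs are
# hypothetical):
#   "predictRequestResponseLoggingConfig": {
#     "enabled": True,
#     "samplingRate": 0.1,
#     "bigqueryDestination": {"outputUri": "bq://my-project.my_dataset"},
#   },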
"privateServiceConnectConfig": { # Represents configuration for private service connect. # Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
"enablePrivateServiceConnect": True or False, # Required. If true, expose the IndexEndpoint via private service connect.
"projectAllowlist": [ # A list of Projects from which the forwarding rule will target the service attachment.
"A String",
],
"pscAutomationConfigs": [ # Optional. List of projects and networks where the PSC endpoints will be created. This field is used by Online Inference(Prediction) only.
{ # PSC config that is used to automatically create PSC endpoints in the user projects.
"errorMessage": "A String", # Output only. Error message if the PSC service automation failed.
"forwardingRule": "A String", # Output only. Forwarding rule created by the PSC service automation.
"ipAddress": "A String", # Output only. IP address rule created by the PSC service automation.
"network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/get): `projects/{project}/global/networks/{network}`.
"projectId": "A String", # Required. Project id used to create forwarding rule.
"state": "A String", # Output only. The state of the PSC service automation.
},
],
"serviceAttachment": "A String", # Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.
},
"satisfiesPzi": True or False, # Output only. Reserved for future use.
"satisfiesPzs": True or False, # Output only. Reserved for future use.
"trafficSplit": { # A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or map must be empty if the Endpoint is to not accept any traffic at a moment.
"a_key": 42,
},
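# For illustration, a 90/10 canary split between two deployed models might look
# like the following (the DeployedModel IDs are hypothetical):
#   "trafficSplit": {"1234567890": 90, "2345678901": 10},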
"updateTime": "A String", # Output only. Timestamp when this Endpoint was last updated.
}
updateMask: string, Required. The update mask applies to the resource. See google.protobuf.FieldMask.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
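  For example, a minimal patch call that updates only the display name might
  look like the following sketch (the project, location, endpoint ID, and
  display name are placeholders, and `service` is assumed to be an
  already-built aiplatform client):

    request = service.projects().locations().endpoints().patch(
        name="projects/my-project/locations/us-central1/endpoints/1234567890",
        body={"displayName": "my-endpoint-v2"},
        updateMask="displayName",
    )
    response = request.execute()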
Returns:
An object of the form:
{ # Models are deployed into it, and afterwards the Endpoint is called to obtain predictions and explanations.
"clientConnectionConfig": { # Configurations (e.g. inference timeout) that are applied on your endpoints. # Configurations that are applied to the endpoint for online prediction.
"inferenceTimeout": "A String", # Customizable online prediction request timeout.
},
"createTime": "A String", # Output only. Timestamp when this Endpoint was created.
"dedicatedEndpointDns": "A String", # Output only. DNS of the dedicated endpoint. Will only be populated if dedicated_endpoint_enabled is true. Depending on the features enabled, uid might be a random number or a string. For example, if fast_tryout is enabled, uid will be fasttryout. Format: `https://{endpoint_id}.{region}-{uid}.prediction.vertexai.goog`.
"dedicatedEndpointEnabled": True or False, # If true, the endpoint will be exposed through a dedicated DNS [Endpoint.dedicated_endpoint_dns]. Your request to the dedicated DNS will be isolated from other users' traffic and will have better performance and reliability. Note: Once you enabled dedicated endpoint, you won't be able to send request to the shared DNS {region}-aiplatform.googleapis.com. The limitation will be removed soon.
"deployedModels": [ # Output only. The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.
{ # A deployment of a Model. Endpoints contain one or more DeployedModels.
"automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration.
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
"minReplicaCount": 42, # Immutable. The minimum number of replicas that will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
},
"checkpointId": "A String", # The checkpoint id of the model.
"createTime": "A String", # Output only. Timestamp when the DeployedModel was created.
"dedicatedResources": { # A description of resources that are dedicated to a DeployedModel or DeployedIndex, and that need a higher degree of manual configuration. # A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
"autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
{ # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
"metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization` * `aiplatform.googleapis.com/prediction/online/request_count`
"target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
},
],
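# For illustration, scaling on CPU utilization at an 80% target (the metric
# name is the one documented above):
#   "autoscalingMetricSpecs": [
#     {"metricName": "aiplatform.googleapis.com/prediction/online/cpu/utilization", "target": 80},
#   ],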
"machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine being used.
"acceleratorCount": 42, # The number of accelerators to attach to the machine.
"acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
"machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
"reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
"key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
"reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
"values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation or reservation block.
"A String",
],
},
"tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
},
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
"minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas that will be always deployed on. This value must be greater than or equal to 1. If traffic increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
"requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial deployment/mutation is desired. If set, the deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
"spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
},
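# For illustration, a single-GPU deployment with autoscaling bounds might look
# like the following sketch (the machine and accelerator types are hypothetical
# choices):
#   "dedicatedResources": {
#     "machineSpec": {"machineType": "n1-standard-8", "acceleratorType": "NVIDIA_TESLA_T4", "acceleratorCount": 1},
#     "minReplicaCount": 1,
#     "maxReplicaCount": 3,
#   },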
"disableContainerLogging": True or False, # For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true.
"disableExplanations": True or False, # If true, deploy the model without explainable feature, regardless the existence of Model.explanation_spec or explanation_spec.
"displayName": "A String", # The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
"enableAccessLogging": True or False, # If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option.
"explanationSpec": { # Specification of Model explanation. # Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
"metadata": { # Metadata describing the Model's input and output for explanation. # Optional. Metadata describing the Model's input and output for explanation.
"featureAttributionsSchemaUri": "A String", # Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"inputs": { # Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match with the key in instance.
"a_key": { # Metadata of the input of a feature. Fields other than InputMetadata.input_baselines are applicable only for Models that are using Vertex AI-provided images for Tensorflow.
"denseShapeTensorName": "A String", # Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"encodedBaselines": [ # A list of baselines for the encoded tensor. The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor.
"",
],
"encodedTensorName": "A String", # Encoded tensor is a transformation of the input tensor. Must be provided if choosing Integrated Gradients attribution or XRAI attribution and the input tensor is not differentiable. An encoded tensor is generated if the input tensor is encoded by a lookup table.
"encoding": "A String", # Defines how the feature is encoded into the input tensor. Defaults to IDENTITY.
"featureValueDomain": { # Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained. # The domain details of the input feature value. Like min/max, original mean or standard deviation if normalized.
"maxValue": 3.14, # The maximum permissible value for this feature.
"minValue": 3.14, # The minimum permissible value for this feature.
"originalMean": 3.14, # If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization.
"originalStddev": 3.14, # If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization.
},
"groupName": "A String", # Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in Attribution.feature_attributions, keyed by the group name.
"indexFeatureMapping": [ # A list of feature names for each index in the input tensor. Required when the input InputMetadata.encoding is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
"A String",
],
"indicesTensorName": "A String", # Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"inputBaselines": [ # Baseline inputs for this feature. If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in Attribution.feature_attributions. For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor. For custom images, the element of the baselines must be in the same format as the feature's input in the instance[]. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
"",
],
"inputTensorName": "A String", # Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow.
"modality": "A String", # Modality of the feature. Valid values are: numeric, image. Defaults to numeric.
"visualization": { # Visualization configurations for image explanation. # Visualization configurations for image explanation.
"clipPercentLowerbound": 3.14, # Excludes attributions below the specified percentile, from the highlighted areas. Defaults to 62.
"clipPercentUpperbound": 3.14, # Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
"colorMap": "A String", # The color scheme used for the highlighted areas. Defaults to PINK_GREEN for Integrated Gradients attribution, which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for XRAI attribution, which highlights the most influential regions in yellow and the least influential in blue.
"overlayType": "A String", # How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
"polarity": "A String", # Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE.
"type": "A String", # Type of the image visualization. Only applicable to Integrated Gradients attribution. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
},
},
},
"latentSpaceSource": "A String", # Name of the source to generate embeddings for example based explanations.
"outputs": { # Required. Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
"a_key": { # Metadata of the prediction output to be explained.
"displayNameMappingKey": "A String", # Specify a field name in the prediction to look for the display name. Use this if the prediction contains the display names for the outputs. The display names in the prediction must have the same shape of the outputs, so that it can be located by Attribution.output_index for a specific output.
"indexDisplayNameMapping": "", # Static mapping between the index and display name. Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sort the outputs by their values. The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The Attribution.output_display_name is populated by locating in the mapping with Attribution.output_index.
"outputTensorName": "A String", # Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow.
},
},
},
"parameters": { # Parameters to configure explaining for Model's predictions. # Required. Parameters that configure explaining of the Model's predictions.
"examples": { # Example-based explainability that returns the nearest neighbors from the provided dataset. # Example-based explanations that returns the nearest neighbors from the provided dataset.
"exampleGcsSource": { # The Cloud Storage input instances. # The Cloud Storage input instances.
"dataFormat": "A String", # The format in which instances are given, if not specified, assume it's JSONL format. Currently only JSONL format is supported.
"gcsSource": { # The Google Cloud Storage location for the input content. # The Cloud Storage location for the input instances.
"uris": [ # Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/wildcards.
"A String",
],
},
},
"nearestNeighborSearchConfig": "", # The full configuration for the generated index, the semantics are the same as metadata and should match [NearestNeighborSearchConfig](https://cloud.google.com/vertex-ai/docs/explainable-ai/configuring-explanations-example-based#nearest-neighbor-search-config).
"neighborCount": 42, # The number of neighbors to return when querying for examples.
"presets": { # Preset configuration for example-based explanations # Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
"modality": "A String", # The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
"query": "A String", # Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to `PRECISE`.
},
},
"integratedGradientsAttribution": { # An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365 # An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
},
"outputIndices": [ # If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape of the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e,g, multi-class Models that predict multiple classes).
"",
],
"sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
"pathCount": 42, # Required. The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
},
"topK": 42, # If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predicts more than one outputs (e,g, multi-class Models). When set to -1, returns explanations for all outputs.
"xraiAttribution": { # An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Supported only by image Models. # An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for XRAI with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is met within the desired error range. Valid range of its value is [1, 100], inclusively.
},
},
},
"fasterDeploymentConfig": { # Configuration for faster model deployment. # Configuration for faster model deployment.
"fastTryoutEnabled": True or False, # If true, enable fast tryout feature for this deployed model.
},
"gdcConnectedModel": "A String", # GDC pretrained / Gemini model name. The model name is a plain model name, e.g. gemini-1.5-flash-002.
"id": "A String", # Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID. This value should be 1-10 characters, and valid characters are `/[0-9]/`.
"model": "A String", # The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain version id or version alias to specify the version. Example: `projects/{project}/locations/{location}/models/{model}@2` or `projects/{project}/locations/{location}/models/{model}@golden` if no version is specified, the default version will be deployed.
"modelVersionId": "A String", # Output only. The version ID of the model that is deployed.
"privateEndpoints": { # PrivateEndpoints proto is used to provide paths for users to send requests privately. To send request via private service access, use predict_http_uri, explain_http_uri or health_http_uri. To send request via private service connect, use service_attachment. # Output only. Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
"explainHttpUri": "A String", # Output only. Http(s) path to send explain requests.
"healthHttpUri": "A String", # Output only. Http(s) path to send health check requests.
"predictHttpUri": "A String", # Output only. Http(s) path to send prediction requests.
"serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
},
"serviceAccount": "A String", # The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the `iam.serviceAccounts.actAs` permission on this service account.
"sharedResources": "A String", # The resource name of the shared DeploymentResourcePool to deploy on. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}`
"speculativeDecodingSpec": { # Configuration for Speculative Decoding. # Optional. Spec for configuring speculative decoding.
"draftModelSpeculation": { # Draft model speculation works by using the smaller model to generate candidate tokens for speculative decoding. # draft model speculation.
"draftModel": "A String", # Required. The resource name of the draft model.
},
"ngramSpeculation": { # N-Gram speculation works by trying to find matching tokens in the previous prompt sequence and use those as speculation for generating new tokens. # N-Gram speculation.
"ngramSize": 42, # The number of last N input tokens used as ngram to search/match against the previous prompt sequence. This is equal to the N in N-Gram. The default value is 3 if not specified.
},
"speculativeTokenCount": 42, # The number of speculative tokens to generate at each step.
},
"status": { # Runtime status of the deployed model. # Output only. Runtime status of the deployed model.
"availableReplicaCount": 42, # Output only. The number of available replicas of the deployed model.
"lastUpdateTime": "A String", # Output only. The time at which the status was last updated.
"message": "A String", # Output only. The latest deployed model's status message (if any).
},
"systemLabels": { # System labels to apply to Model Garden deployments. System labels are managed by Google for internal use only.
"a_key": "A String",
},
},
],
"description": "A String", # The description of the Endpoint.
"displayName": "A String", # Required. The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
"enablePrivateServiceConnect": True or False, # Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
"encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
"kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
},
"etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
"gdcConfig": { # Google Distributed Cloud (GDC) config. # Configures the Google Distributed Cloud (GDC) environment for online prediction. Only set this field when the Endpoint is to be deployed in a GDC environment.
"zone": "A String", # GDC zone. A cluster will be designated for the Vertex AI workload in this zone.
},
"genAiAdvancedFeaturesConfig": { # Configuration for GenAiAdvancedFeatures. # Optional. Configuration for GenAiAdvancedFeatures. If the endpoint is serving GenAI models, advanced features like native RAG integration can be configured. Currently, only Model Garden models are supported.
"ragConfig": { # Configuration for Retrieval Augmented Generation feature. # Configuration for Retrieval Augmented Generation feature.
"enableRag": True or False, # If true, enable Retrieval Augmented Generation in ChatCompletion request. Once enabled, the endpoint will be identified as GenAI endpoint and Arthedain router will be used.
},
},
"labels": { # The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
"a_key": "A String",
},
"modelDeploymentMonitoringJob": "A String", # Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: `projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}`
"name": "A String", # Output only. The resource name of the Endpoint.
"network": "A String", # Optional. The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks) to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where `{project}` is a project number, as in `12345`, and `{network}` is network name.
"predictRequestResponseLoggingConfig": { # Configuration for logging request-response to a BigQuery table. # Configures the request-response logging for online prediction.
"bigqueryDestination": { # The BigQuery location for the output content. # BigQuery table for logging. If only given a project, a new dataset will be created with name `logging__` where will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with name `request_response_logging`
"outputUri": "A String", # Required. BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table is created. When the full table reference is specified, the Dataset must exist and table must not exist. Accepted forms: * BigQuery path. For example: `bq://projectId` or `bq://projectId.bqDatasetId` or `bq://projectId.bqDatasetId.bqTableId`.
},
"enabled": True or False, # If logging is enabled or not.
"samplingRate": 3.14, # Percentage of requests to be logged, expressed as a fraction in range(0,1].
},
"privateServiceConnectConfig": { # Represents configuration for private service connect. # Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
"enablePrivateServiceConnect": True or False, # Required. If true, expose the IndexEndpoint via private service connect.
"projectAllowlist": [ # A list of Projects from which the forwarding rule will target the service attachment.
"A String",
],
"pscAutomationConfigs": [ # Optional. List of projects and networks where the PSC endpoints will be created. This field is used by Online Inference(Prediction) only.
{ # PSC config that is used to automatically create PSC endpoints in the user projects.
"errorMessage": "A String", # Output only. Error message if the PSC service automation failed.
"forwardingRule": "A String", # Output only. Forwarding rule created by the PSC service automation.
"ipAddress": "A String", # Output only. IP address rule created by the PSC service automation.
"network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/get): `projects/{project}/global/networks/{network}`.
"projectId": "A String", # Required. Project id used to create forwarding rule.
"state": "A String", # Output only. The state of the PSC service automation.
},
],
"serviceAttachment": "A String", # Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.
},
"satisfiesPzi": True or False, # Output only. Reserved for future use.
"satisfiesPzs": True or False, # Output only. Reserved for future use.
"trafficSplit": { # A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or map must be empty if the Endpoint is to not accept any traffic at a moment.
"a_key": 42,
},
"updateTime": "A String", # Output only. Timestamp when this Endpoint was last updated.
}</pre>
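<p>Illustrative only: a minimal sketch of reading fields of the Endpoint object described above, assuming it was fetched with this resource's get method; the project, location, endpoint ID, and regional API endpoint are placeholder assumptions.</p>
<pre>
from googleapiclient import discovery

# Vertex AI uses regional service endpoints; us-central1 is an assumption here.
service = discovery.build(
    "aiplatform", "v1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

name = "projects/my-project/locations/us-central1/endpoints/1234567890"  # placeholder
endpoint = service.projects().locations().endpoints().get(name=name).execute()

# Walk the deployed models and the traffic split, both described in the schema above.
for dm in endpoint.get("deployedModels", []):
    share = endpoint.get("trafficSplit", {}).get(dm["id"], 0)
    print(dm["id"], dm.get("model"), f"{share}% of traffic")
</pre>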
</div>
<div class="method">
<code class="details" id="predict">predict(endpoint, body=None, x__xgafv=None)</code>
<pre>Perform an online prediction.
Args:
endpoint: string, Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for PredictionService.Predict.
"instances": [ # Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request, and when it is exceeded the prediction call errors in case of AutoML Models, or, in case of customer created Models, the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
"",
],
"parameters": "", # The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for PredictionService.Predict.
"deployedModelId": "A String", # ID of the Endpoint's DeployedModel that served this prediction.
"metadata": "", # Output only. Request-level metadata returned by the model. The metadata type will be dependent upon the model implementation.
"model": "A String", # Output only. The resource name of the Model which is deployed as the DeployedModel that this prediction hits.
"modelDisplayName": "A String", # Output only. The display name of the Model which is deployed as the DeployedModel that this prediction hits.
"modelVersionId": "A String", # Output only. The version ID of the Model which is deployed as the DeployedModel that this prediction hits.
"predictions": [ # The predictions that are the output of the predictions call. The schema of any single prediction may be specified via Endpoint's DeployedModels' Model's PredictSchemata's prediction_schema_uri.
"",
],
}</pre>
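<p>Illustrative only: a minimal sketch of calling predict, assuming application-default credentials, a regional API endpoint, and a model whose instance schema accepts plain JSON feature objects; all resource names and field names are placeholders.</p>
<pre>
from googleapiclient import discovery

service = discovery.build(
    "aiplatform", "v1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"  # placeholder
body = {
    # The instance shape is defined by the model's instance_schema_uri; this is a placeholder.
    "instances": [{"feature_a": 1.0, "feature_b": "x"}],
}

response = (
    service.projects().locations().endpoints()
    .predict(endpoint=endpoint, body=body)
    .execute()
)
print(response["deployedModelId"], response.get("predictions"))
</pre>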
</div>
<div class="method">
<code class="details" id="predictLongRunning">predictLongRunning(endpoint, body=None, x__xgafv=None)</code>
<pre>Perform an online prediction request and return a long-running operation.
Args:
endpoint: string, Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` or `projects/{project}/locations/{location}/publishers/{publisher}/models/{model}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for PredictionService.PredictLongRunning.
"instances": [ # Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request, and when it is exceeded the prediction call errors in case of AutoML Models, or, in case of customer created Models, the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
"",
],
"parameters": "", # Optional. The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
"done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
"response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
}</pre>
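<p>Illustrative only: a minimal sketch of calling predictLongRunning and polling the returned operation with the standard operations.get method; the instance payload and all resource names are placeholder assumptions.</p>
<pre>
import time

from googleapiclient import discovery

service = discovery.build(
    "aiplatform", "v1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"  # placeholder
body = {"instances": [{"prompt": "a short clip of a sunrise"}]}  # schema depends on the model

operation = (
    service.projects().locations().endpoints()
    .predictLongRunning(endpoint=endpoint, body=body)
    .execute()
)

# Poll the long-running operation until done, then inspect response or error.
while not operation.get("done"):
    time.sleep(10)
    operation = (
        service.projects().locations().operations()
        .get(name=operation["name"])
        .execute()
    )
print(operation.get("response") or operation.get("error"))
</pre>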
</div>
<div class="method">
<code class="details" id="rawPredict">rawPredict(endpoint, body=None, x__xgafv=None)</code>
<pre>Perform an online prediction with an arbitrary HTTP payload. The response includes the following HTTP headers: * `X-Vertex-AI-Endpoint-Id`: ID of the Endpoint that served this prediction. * `X-Vertex-AI-Deployed-Model-Id`: ID of the Endpoint's DeployedModel that served this prediction.
Args:
endpoint: string, Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for PredictionService.RawPredict.
"httpBody": { # Message that represents an arbitrary HTTP body. It should only be used for payload formats that can't be represented as JSON, such as raw binary or an HTML page. This message can be used both in streaming and non-streaming API methods in the request as well as the response. It can be used as a top-level request field, which is convenient if one wants to extract parameters from either the URL or HTTP template into the request fields and also want access to the raw HTTP body. Example: message GetResourceRequest { // A unique request id. string request_id = 1; // The raw HTTP body is bound to this field. google.api.HttpBody http_body = 2; } service ResourceService { rpc GetResource(GetResourceRequest) returns (google.api.HttpBody); rpc UpdateResource(google.api.HttpBody) returns (google.protobuf.Empty); } Example with streaming methods: service CaldavService { rpc GetCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); rpc UpdateCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); } Use of this type only changes how the request and response bodies are handled, all other features will continue to work unchanged. # The prediction input. Supports HTTP headers and arbitrary data payload. A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit it is exceeded for an AutoML model, the RawPredict method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model. You can specify the schema for each instance in the predict_schemata.instance_schema_uri field when you create a Model. This schema applies when you deploy the `Model` as a `DeployedModel` to an Endpoint and use the `RawPredict` method.
"contentType": "A String", # The HTTP Content-Type header value specifying the content type of the body.
"data": "A String", # The HTTP request/response body as raw binary.
"extensions": [ # Application specific response metadata. Must be set in the first response for streaming APIs.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Message that represents an arbitrary HTTP body. It should only be used for payload formats that can't be represented as JSON, such as raw binary or an HTML page. This message can be used both in streaming and non-streaming API methods in the request as well as the response. It can be used as a top-level request field, which is convenient if one wants to extract parameters from either the URL or HTTP template into the request fields and also wants access to the raw HTTP body. Example: message GetResourceRequest { // A unique request id. string request_id = 1; // The raw HTTP body is bound to this field. google.api.HttpBody http_body = 2; } service ResourceService { rpc GetResource(GetResourceRequest) returns (google.api.HttpBody); rpc UpdateResource(google.api.HttpBody) returns (google.protobuf.Empty); } Example with streaming methods: service CaldavService { rpc GetCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); rpc UpdateCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); } Use of this type only changes how the request and response bodies are handled; all other features will continue to work unchanged.
"contentType": "A String", # The HTTP Content-Type header value specifying the content type of the body.
"data": "A String", # The HTTP request/response body as raw binary.
"extensions": [ # Application specific response metadata. Must be set in the first response for streaming APIs.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
}</pre>
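<p>Illustrative only: a minimal sketch of calling rawPredict with an arbitrary JSON payload. Since the HttpBody `data` field carries raw binary, the JSON transport expects it base64-encoded; resource names and the payload shape are placeholder assumptions.</p>
<pre>
import base64
import json

from googleapiclient import discovery

service = discovery.build(
    "aiplatform", "v1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"  # placeholder
payload = json.dumps({"instances": [[1.0, 2.0, 3.0]]}).encode("utf-8")

body = {
    "httpBody": {
        "contentType": "application/json",
        # Raw binary is represented as base64 in the JSON request body.
        "data": base64.b64encode(payload).decode("ascii"),
    }
}

response = (
    service.projects().locations().endpoints()
    .rawPredict(endpoint=endpoint, body=body)
    .execute()
)
print(response)
</pre>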
</div>
<div class="method">
<code class="details" id="serverStreamingPredict">serverStreamingPredict(endpoint, body=None, x__xgafv=None)</code>
<pre>Perform a server-side streaming online prediction request for Vertex LLM streaming.
Args:
endpoint: string, Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for PredictionService.StreamingPredict. The first message must contain endpoint field and optionally input. The subsequent messages must contain input.
"inputs": [ # The prediction input.
{ # A tensor value type.
"boolVal": [ # Type specific representations that make it easy to create tensor protos in all languages. Only the representation corresponding to "dtype" can be set. The values hold the flattened representation of the tensor in row major order. BOOL
True or False,
],
"bytesVal": [ # STRING
"A String",
],
"doubleVal": [ # DOUBLE
3.14,
],
"dtype": "A String", # The data type of tensor.
"floatVal": [ # FLOAT
3.14,
],
"int64Val": [ # INT64
"A String",
],
"intVal": [ # INT_8 INT_16 INT_32
42,
],
"listVal": [ # A list of tensor values.
# Object with schema name: GoogleCloudAiplatformV1Tensor
],
"shape": [ # Shape of the tensor.
"A String",
],
"stringVal": [ # STRING
"A String",
],
"structVal": { # A map of string to tensor.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Tensor
},
"tensorVal": "A String", # Serialized raw tensor content.
"uint64Val": [ # UINT64
"A String",
],
"uintVal": [ # UINT8 UINT16 UINT32
42,
],
},
],
"parameters": { # A tensor value type. # The parameters that govern the prediction.
"boolVal": [ # Type specific representations that make it easy to create tensor protos in all languages. Only the representation corresponding to "dtype" can be set. The values hold the flattened representation of the tensor in row major order. BOOL
True or False,
],
"bytesVal": [ # STRING
"A String",
],
"doubleVal": [ # DOUBLE
3.14,
],
"dtype": "A String", # The data type of tensor.
"floatVal": [ # FLOAT
3.14,
],
"int64Val": [ # INT64
"A String",
],
"intVal": [ # INT_8 INT_16 INT_32
42,
],
"listVal": [ # A list of tensor values.
# Object with schema name: GoogleCloudAiplatformV1Tensor
],
"shape": [ # Shape of the tensor.
"A String",
],
"stringVal": [ # STRING
"A String",
],
"structVal": { # A map of string to tensor.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Tensor
},
"tensorVal": "A String", # Serialized raw tensor content.
"uint64Val": [ # UINT64
"A String",
],
"uintVal": [ # UINT8 UINT16 UINT32
42,
],
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for PredictionService.StreamingPredict.
"outputs": [ # The prediction output.
{ # A tensor value type.
"boolVal": [ # Type specific representations that make it easy to create tensor protos in all languages. Only the representation corresponding to "dtype" can be set. The values hold the flattened representation of the tensor in row major order. BOOL
True or False,
],
"bytesVal": [ # STRING
"A String",
],
"doubleVal": [ # DOUBLE
3.14,
],
"dtype": "A String", # The data type of tensor.
"floatVal": [ # FLOAT
3.14,
],
"int64Val": [ # INT64
"A String",
],
"intVal": [ # INT_8 INT_16 INT_32
42,
],
"listVal": [ # A list of tensor values.
# Object with schema name: GoogleCloudAiplatformV1Tensor
],
"shape": [ # Shape of the tensor.
"A String",
],
"stringVal": [ # STRING
"A String",
],
"structVal": { # A map of string to tensor.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Tensor
},
"tensorVal": "A String", # Serialized raw tensor content.
"uint64Val": [ # UINT64
"A String",
],
"uintVal": [ # UINT8 UINT16 UINT32
42,
],
},
],
"parameters": { # A tensor value type. # The parameters that govern the prediction.
"boolVal": [ # Type specific representations that make it easy to create tensor protos in all languages. Only the representation corresponding to "dtype" can be set. The values hold the flattened representation of the tensor in row major order. BOOL
True or False,
],
"bytesVal": [ # STRING
"A String",
],
"doubleVal": [ # DOUBLE
3.14,
],
"dtype": "A String", # The data type of tensor.
"floatVal": [ # FLOAT
3.14,
],
"int64Val": [ # INT64
"A String",
],
"intVal": [ # INT_8 INT_16 INT_32
42,
],
"listVal": [ # A list of tensor values.
# Object with schema name: GoogleCloudAiplatformV1Tensor
],
"shape": [ # Shape of the tensor.
"A String",
],
"stringVal": [ # STRING
"A String",
],
"structVal": { # A map of string to tensor.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Tensor
},
"tensorVal": "A String", # Serialized raw tensor content.
"uint64Val": [ # UINT64
"A String",
],
"uintVal": [ # UINT8 UINT16 UINT32
42,
],
},
}</pre>
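<p>Illustrative only: a minimal sketch of a serverStreamingPredict request built from the Tensor value type shown above; the tensor names, dtypes, and parameter keys are placeholder assumptions, and the exact tensors a model expects vary per model.</p>
<pre>
from googleapiclient import discovery

service = discovery.build(
    "aiplatform", "v1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"  # placeholder
body = {
    # A single STRING tensor of shape [1]; only the representation matching dtype is set.
    "inputs": [{"dtype": "STRING", "shape": ["1"], "stringVal": ["Tell me a story"]}],
    # Parameters are also a Tensor; a struct of scalar tensors is one plausible shape.
    "parameters": {
        "structVal": {
            "temperature": {"dtype": "FLOAT", "floatVal": [0.7]},
        }
    },
}

result = (
    service.projects().locations().endpoints()
    .serverStreamingPredict(endpoint=endpoint, body=body)
    .execute()
)

# Server streaming over REST yields a sequence of StreamingPredictResponse messages;
# depending on transport, the parsed result may be a list of such chunks.
for chunk in result if isinstance(result, list) else [result]:
    for tensor in chunk.get("outputs", []):
        print(tensor.get("stringVal"))
</pre>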
</div>
<div class="method">
<code class="details" id="streamGenerateContent">streamGenerateContent(model, body=None, x__xgafv=None)</code>
<pre>Generate content from multimodal inputs, with streaming support.
Args:
model: string, Required. The fully qualified name of the publisher model or tuned model endpoint to use. Publisher model format: `projects/{project}/locations/{location}/publishers/*/models/*` Tuned model endpoint format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for [PredictionService.GenerateContent].
"cachedContent": "A String", # Optional. The name of the cached content used as context to serve the prediction. Note: only used in explicit caching, where users can have control over caching (e.g. what content to cache) and enjoy guaranteed cost savings. Format: `projects/{project}/locations/{location}/cachedContents/{cachedContent}`
"contents": [ # Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.
{ # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
],
"generationConfig": { # Generation config. # Optional. Generation config.
"audioTimestamp": True or False, # Optional. If enabled, audio timestamp will be included in the request to the model.
"candidateCount": 42, # Optional. Number of candidates to generate.
"enableAffectiveDialog": True or False, # Optional. If enabled, the model will detect emotions and adapt its responses accordingly.
"frequencyPenalty": 3.14, # Optional. Frequency penalties.
"logprobs": 42, # Optional. Logit probabilities.
"maxOutputTokens": 42, # Optional. The maximum number of output tokens to generate per message.
"mediaResolution": "A String", # Optional. If specified, the media resolution specified will be used.
"presencePenalty": 3.14, # Optional. Positive penalties.
"responseJsonSchema": "", # Optional. Output schema of the generated response. This is an alternative to `response_schema` that accepts [JSON Schema](https://json-schema.org/). If set, `response_schema` must be omitted, but `response_mime_type` is required. While the full JSON Schema may be sent, not all features are supported. Specifically, only the following properties are supported: - `$id` - `$defs` - `$ref` - `$anchor` - `type` - `format` - `title` - `description` - `enum` (for strings and numbers) - `items` - `prefixItems` - `minItems` - `maxItems` - `minimum` - `maximum` - `anyOf` - `oneOf` (interpreted the same as `anyOf`) - `properties` - `additionalProperties` - `required` The non-standard `propertyOrdering` property may also be set. Cyclic references are unrolled to a limited degree and, as such, may only be used within non-required properties. (Nullable properties are not sufficient.) If `$ref` is set on a sub-schema, no other properties, except for than those starting as a `$`, may be set.
"responseLogprobs": True or False, # Optional. If true, export the logprobs results in response.
"responseMimeType": "A String", # Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature.
"responseModalities": [ # Optional. The modalities of the response.
"A String",
],
"responseSchema": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema-object). More fields may be added in the future as needed. # Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response.
"additionalProperties": "", # Optional. Can either be a boolean or an object; controls the presence of additional properties.
"anyOf": [ # Optional. The value should be validated against any (one or more) of the subschemas in the list.
# Object with schema name: GoogleCloudAiplatformV1Schema
],
"default": "", # Optional. Default value of the data.
"defs": { # Optional. A map of definitions for use by `ref` Only allowed at the root of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"description": "A String", # Optional. The description of the data.
"enum": [ # Optional. Possible values of the element of primitive type with enum format. Examples: 1. We can define direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} 2. We can define apartment number as : {type:INTEGER, format:enum, enum:["101", "201", "301"]}
"A String",
],
"example": "", # Optional. Example of the object. Will only populated when the object is the root.
"format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
"items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
"maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
"maxLength": "A String", # Optional. Maximum length of the Type.STRING
"maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
"maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
"minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
"minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
"minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
"minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
"nullable": True or False, # Optional. Indicates if the value may be null.
"pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
"properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"propertyOrdering": [ # Optional. The order of the properties. Not a standard field in open api spec. Only used to support the order of the properties.
"A String",
],
"ref": "A String", # Optional. Allows indirect references between schema nodes. The value should be a valid reference to a child of the root `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. Required properties of Type.OBJECT.
"A String",
],
"title": "A String", # Optional. The title of the Schema.
"type": "A String", # Optional. The type of the data.
},
"routingConfig": { # The configuration for routing the request to a specific model. # Optional. Routing configuration.
"autoMode": { # When automated routing is specified, the routing will be determined by the pretrained routing model and customer provided model routing preference. # Automated routing.
"modelRoutingPreference": "A String", # The model routing preference.
},
"manualMode": { # When manual routing is set, the specified model will be used directly. # Manual routing.
"modelName": "A String", # The model name to use. Only the public LLM models are accepted. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
},
"seed": 42, # Optional. Seed.
"speechConfig": { # The speech generation config. # Optional. The speech generation config.
"languageCode": "A String", # Optional. Language code (ISO 639. e.g. en-US) for the speech synthesization.
"voiceConfig": { # The configuration for the voice to use. # The configuration for the speaker to use.
"prebuiltVoiceConfig": { # The configuration for the prebuilt speaker to use. # The configuration for the prebuilt voice to use.
"voiceName": "A String", # The name of the preset voice to use.
},
},
},
"stopSequences": [ # Optional. Stop sequences.
"A String",
],
"temperature": 3.14, # Optional. Controls the randomness of predictions.
"thinkingConfig": { # Config for thinking features. # Optional. Config for thinking features. An error will be returned if this field is set for models that don't support thinking.
"includeThoughts": True or False, # Optional. Indicates whether to include thoughts in the response. If true, thoughts are returned only when available.
"thinkingBudget": 42, # Optional. Indicates the thinking budget in tokens.
},
"topK": 3.14, # Optional. If specified, top-k sampling will be used.
"topP": 3.14, # Optional. If specified, nucleus sampling will be used.
},
"labels": { # Optional. The labels with user-defined metadata for the request. It is used for billing and reporting only. Label keys and values can be no longer than 63 characters (Unicode codepoints) and can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter.
"a_key": "A String",
},
"modelArmorConfig": { # Configuration for Model Armor integrations of prompt and responses. # Optional. Settings for prompt and response sanitization using the Model Armor service. If supplied, safety_settings must not be supplied.
"promptTemplateName": "A String", # Optional. The name of the Model Armor template to use for prompt sanitization.
"responseTemplateName": "A String", # Optional. The name of the Model Armor template to use for response sanitization.
},
"safetySettings": [ # Optional. Per request settings for blocking unsafe content. Enforced on GenerateContentResponse.candidates.
{ # Safety settings.
"category": "A String", # Required. Harm category.
"method": "A String", # Optional. Specify if the threshold is used for probability or severity score. If not specified, the threshold is used for probability score.
"threshold": "A String", # Required. The harm block threshold.
},
],
"systemInstruction": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Optional. The user provided system instructions for the model. Note: only text should be used in parts and content in each part will be in a separate paragraph.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
"toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Tool config. This config is shared for all tools provided in the request.
"functionCallingConfig": { # Function calling config. # Optional. Function calling config.
"allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
"A String",
],
"mode": "A String", # Optional. Function calling mode.
},
"retrievalConfig": { # Retrieval config. # Optional. Retrieval config.
"languageCode": "A String", # The language code of the user.
"latLng": { # An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges. # The location of the user.
"latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
"longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
},
},
},
"tools": [ # Optional. A list of `Tools` the model may use to generate the next response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.
{ # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
"codeExecution": { # Tool that executes code generated by the model, and automatically returns the result to the model. See also [ExecutableCode]and [CodeExecutionResult] which are input and output to this tool. # Optional. CodeExecution tool type. Enables the model to execute code as part of generation.
},
"enterpriseWebSearch": { # Tool to search public web data, powered by Vertex AI Search and Sec4 compliance. # Optional. Tool to support searching public web data, powered by Vertex AI Search and Sec4 compliance.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains.
"A String",
],
},
"functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 512 function declarations can be provided.
{ # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
"name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64.
"parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema-object). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
"additionalProperties": "", # Optional. Can either be a boolean or an object; controls the presence of additional properties.
"anyOf": [ # Optional. The value should be validated against any (one or more) of the subschemas in the list.
# Object with schema name: GoogleCloudAiplatformV1Schema
],
"default": "", # Optional. Default value of the data.
"defs": { # Optional. A map of definitions for use by `ref` Only allowed at the root of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"description": "A String", # Optional. The description of the data.
"enum": [ # Optional. Possible values of the element of primitive type with enum format. Examples: 1. We can define direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} 2. We can define apartment number as : {type:INTEGER, format:enum, enum:["101", "201", "301"]}
"A String",
],
"example": "", # Optional. Example of the object. Will only populated when the object is the root.
"format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
"items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
"maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
"maxLength": "A String", # Optional. Maximum length of the Type.STRING
"maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
"maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
"minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
"minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
"minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
"minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
"nullable": True or False, # Optional. Indicates if the value may be null.
"pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
"properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"propertyOrdering": [ # Optional. The order of the properties. Not a standard field in open api spec. Only used to support the order of the properties.
"A String",
],
"ref": "A String", # Optional. Allows indirect references between schema nodes. The value should be a valid reference to a child of the root `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. Required properties of Type.OBJECT.
"A String",
],
"title": "A String", # Optional. The title of the Schema.
"type": "A String", # Optional. The type of the data.
},
"parametersJsonSchema": "", # Optional. Describes the parameters to the function in JSON Schema format. The schema must describe an object where the properties are the parameters to the function. For example: ``` { "type": "object", "properties": { "name": { "type": "string" }, "age": { "type": "integer" } }, "additionalProperties": false, "required": ["name", "age"], "propertyOrdering": ["name", "age"] } ``` This field is mutually exclusive with `parameters`.
"response": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema-object). More fields may be added in the future as needed. # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
"additionalProperties": "", # Optional. Can either be a boolean or an object; controls the presence of additional properties.
"anyOf": [ # Optional. The value should be validated against any (one or more) of the subschemas in the list.
# Object with schema name: GoogleCloudAiplatformV1Schema
],
"default": "", # Optional. Default value of the data.
"defs": { # Optional. A map of definitions for use by `ref` Only allowed at the root of the schema.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"description": "A String", # Optional. The description of the data.
"enum": [ # Optional. Possible values of the element of primitive type with enum format. Examples: 1. We can define direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} 2. We can define apartment number as : {type:INTEGER, format:enum, enum:["101", "201", "301"]}
"A String",
],
"example": "", # Optional. Example of the object. Will only populated when the object is the root.
"format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
"items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
"maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
"maxLength": "A String", # Optional. Maximum length of the Type.STRING
"maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
"maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
"minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
"minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
"minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
"minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
"nullable": True or False, # Optional. Indicates if the value may be null.
"pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
"properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
"a_key": # Object with schema name: GoogleCloudAiplatformV1Schema
},
"propertyOrdering": [ # Optional. The order of the properties. Not a standard field in open api spec. Only used to support the order of the properties.
"A String",
],
"ref": "A String", # Optional. Allows indirect references between schema nodes. The value should be a valid reference to a child of the root `defs`. For example, the following schema defines a reference to a schema node named "Pet": type: object properties: pet: ref: #/defs/Pet defs: Pet: type: object properties: name: type: string The value of the "pet" property is a reference to the schema node named "Pet". See details in https://json-schema.org/understanding-json-schema/structuring
"required": [ # Optional. Required properties of Type.OBJECT.
"A String",
],
"title": "A String", # Optional. The title of the Schema.
"type": "A String", # Optional. The type of the data.
},
"responseJsonSchema": "", # Optional. Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. This field is mutually exclusive with `response`.
},
],
"googleMaps": { # Tool to retrieve public maps data for grounding, powered by Google. # Optional. GoogleMaps tool type. Tool to support Google Maps in Model.
},
"googleSearch": { # GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google. # Optional. GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
"excludeDomains": [ # Optional. List of domains to be excluded from the search results. The default limit is 2000 domains. Example: ["amazon.com", "facebook.com"].
"A String",
],
},
"googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
"dynamicRetrievalConfig": { # Describes the options to customize dynamic retrieval. # Specifies the dynamic retrieval configuration for the given source.
"dynamicThreshold": 3.14, # Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.
"mode": "A String", # The mode of the predictor to be used in dynamic retrieval.
},
},
"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
"disableAttribution": True or False, # Optional. Deprecated. This option is no longer supported.
"externalApi": { # Retrieve from data source powered by external API for grounding. The external API is not owned by Google, but need to follow the pre-defined API spec. # Use data source powered by external API for grounding.
"apiAuth": { # The generic reusable api auth config. Deprecated. Please use AuthConfig (google/cloud/aiplatform/master/auth.proto) instead. # The authentication config to access the API. Deprecated. Please use auth_config instead.
"apiKeyConfig": { # The API secret. # The API secret.
"apiKeySecretVersion": "A String", # Required. The SecretManager secret version resource name storing API key. e.g. projects/{project}/secrets/{secret}/versions/{version}
"apiKeyString": "A String", # The API key string. Either this or `api_key_secret_version` must be set.
},
},
"apiSpec": "A String", # The API spec that the external API implements.
"authConfig": { # Auth configuration to run the extension. # The authentication config to access the API.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
"apiKeySecret": "A String", # Optional. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If both `api_key_secret` and `api_key_string` are specified, this field takes precedence over `api_key_string`. - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"apiKeyString": "A String", # Optional. The API key to be used in the request directly.
"httpElementLocation": "A String", # Optional. The location of the API key.
"name": "A String", # Optional. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
"serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
"credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
"accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
"idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"elasticSearchParams": { # The search parameters to use for the ELASTIC_SEARCH spec. # Parameters for the elastic search API.
"index": "A String", # The ElasticSearch index to use.
"numHits": 42, # Optional. Number of hits (chunks) to request. When specified, it is passed to Elasticsearch as the `num_hits` param.
"searchTemplate": "A String", # The ElasticSearch search template to use.
},
"endpoint": "A String", # The endpoint of the external API. The system will call the API at this endpoint to retrieve the data for grounding. Example: https://acme.com:443/search
"simpleSearchParams": { # The search parameters to use for SIMPLE_SEARCH spec. # Parameters for the simple search API.
},
},
"vertexAiSearch": { # Retrieve from Vertex AI Search datastore or engine for grounding. datastore and engine are mutually exclusive. See https://cloud.google.com/products/agent-builder # Set to use data source powered by Vertex AI Search.
"dataStoreSpecs": [ # Specifications that define the specific DataStores to be searched, along with configurations for those data stores. This is only considered for Engines with multiple data stores. It should only be set if engine is used.
{ # Define data stores within engine to filter on in a search call and configurations for those data stores. For more information, see https://cloud.google.com/generative-ai-app-builder/docs/reference/rpc/google.cloud.discoveryengine.v1#datastorespec
"dataStore": "A String", # Full resource name of DataStore, such as Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"filter": "A String", # Optional. Filter specification to filter documents in the data store specified by data_store field. For more information on filtering, see [Filtering](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
},
],
"datastore": "A String", # Optional. Fully-qualified Vertex AI Search data store resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
"engine": "A String", # Optional. Fully-qualified Vertex AI Search engine resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`
"filter": "A String", # Optional. Filter strings to be passed to the search API.
"maxResults": 42, # Optional. Number of search results to return per query. The default value is 10. The maximumm allowed value is 10.
},
"vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
"ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
{ # The definition of the Rag resource.
"ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
"ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
"A String",
],
},
],
"ragRetrievalConfig": { # Specifies the context retrieval config. # Optional. The retrieval config for the Rag query.
"filter": { # Config for filters. # Optional. Config for filters.
"metadataFilter": "A String", # Optional. String for metadata filtering.
"vectorDistanceThreshold": 3.14, # Optional. Only returns contexts with vector distance smaller than the threshold.
"vectorSimilarityThreshold": 3.14, # Optional. Only returns contexts with vector similarity larger than the threshold.
},
"ranking": { # Config for ranking and reranking. # Optional. Config for ranking and reranking.
"llmRanker": { # Config for LlmRanker. # Optional. Config for LlmRanker.
"modelName": "A String", # Optional. The model name used for ranking. See [Supported models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#supported-models).
},
"rankService": { # Config for Rank Service. # Optional. Config for Rank Service.
"modelName": "A String", # Optional. The model name of the rank service. Format: `semantic-ranker-512@latest`
},
},
"topK": 42, # Optional. The number of contexts to retrieve.
},
"similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
"vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
},
},
"urlContext": { # Tool to support URL context. # Optional. Tool to support URL context retrieval.
},
},
],
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for [PredictionService.GenerateContent].
"candidates": [ # Output only. Generated candidates.
{ # A response candidate generated from the model.
"avgLogprobs": 3.14, # Output only. Average log probability score of the candidate.
"citationMetadata": { # A collection of source attributions for a piece of content. # Output only. Source attribution of the generated content.
"citations": [ # Output only. List of citations.
{ # Source attributions for content.
"endIndex": 42, # Output only. End index into the content.
"license": "A String", # Output only. License of the attribution.
"publicationDate": { # Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values. * A month and day, with a zero year (for example, an anniversary). * A year on its own, with a zero month and a zero day. * A year and month, with a zero day (for example, a credit card expiration date). Related types: * google.type.TimeOfDay * google.type.DateTime * google.protobuf.Timestamp # Output only. Publication date of the attribution.
"day": 42, # Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
"month": 42, # Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
"year": 42, # Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
},
"startIndex": 42, # Output only. Start index into the content.
"title": "A String", # Output only. Title of the attribution.
"uri": "A String", # Output only. Url reference of the attribution.
},
],
},
"content": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Output only. Content parts of the candidate.
"parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
{ # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
"codeExecutionResult": { # Result of executing the [ExecutableCode]. Only generated when using the [CodeExecution] tool, and always follows a `part` containing the [ExecutableCode]. # Optional. Result of executing the [ExecutableCode].
"outcome": "A String", # Required. Outcome of the code execution.
"output": "A String", # Optional. Contains stdout when code execution is successful, stderr or other description otherwise.
},
"executableCode": { # Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [CodeExecution] tool, in which the code will be automatically executed, and a corresponding [CodeExecutionResult] will also be generated. # Optional. Code generated by the model that is meant to be executed.
"code": "A String", # Required. The code to be executed.
"language": "A String", # Required. Programming language of the `code`.
},
"fileData": { # URI based data. # Optional. URI based data.
"displayName": "A String", # Optional. Display name of the file data. Used to provide a label or filename to distinguish file datas. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"fileUri": "A String", # Required. URI.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
"args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
"a_key": "", # Properties of the object.
},
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
},
"functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
"name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
"response": { # Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.
"a_key": "", # Properties of the object.
},
},
"inlineData": { # Content blob. # Optional. Inlined bytes data.
"data": "A String", # Required. Raw bytes.
"displayName": "A String", # Optional. Display name of the blob. Used to provide a label or filename to distinguish blobs. This field is only returned in PromptMessage for prompt management. It is currently used in the Gemini GenerateContent calls only when server side tools (code_execution, google_search, and url_context) are enabled.
"mimeType": "A String", # Required. The IANA standard MIME type of the source data.
},
"text": "A String", # Optional. Text part (can be code).
"thought": True or False, # Optional. Indicates if the part is thought from the model.
"thoughtSignature": "A String", # Optional. An opaque signature for the thought so it can be reused in subsequent requests.
"videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
"endOffset": "A String", # Optional. The end offset of the video.
"fps": 3.14, # Optional. The frame rate of the video sent to the model. If not specified, the default value will be 1.0. The fps range is (0.0, 24.0].
"startOffset": "A String", # Optional. The start offset of the video.
},
},
],
"role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
},
"finishMessage": "A String", # Output only. Describes the reason the mode stopped generating tokens in more detail. This is only filled when `finish_reason` is set.
"finishReason": "A String", # Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens.
"groundingMetadata": { # Metadata returned to client when grounding is enabled. # Output only. Metadata specifies sources used to ground generated content.
"googleMapsWidgetContextToken": "A String", # Optional. Output only. Resource name of the Google Maps widget context token to be used with the PlacesContextElement widget to render contextual data. This is populated only for Google Maps grounding.
"groundingChunks": [ # List of supporting references retrieved from specified grounding source.
{ # Grounding chunk.
"maps": { # Chunk from Google Maps. # Grounding chunk from Google Maps.
"placeAnswerSources": { # Sources used to generate the place answer. # Sources used to generate the place answer. This includes review snippets and photos that were used to generate the answer, as well as uris to flag content.
"flagContentUri": "A String", # A link where users can flag a problem with the generated answer.
"reviewSnippets": [ # Snippets of reviews that are used to generate the answer.
{ # Encapsulates a review snippet.
"authorAttribution": { # Author attribution for a photo or review. # This review's author.
"displayName": "A String", # Name of the author of the Photo or Review.
"photoUri": "A String", # Profile photo URI of the author of the Photo or Review.
"uri": "A String", # URI of the author of the Photo or Review.
},
"flagContentUri": "A String", # A link where users can flag a problem with the review.
"googleMapsUri": "A String", # A link to show the review on Google Maps.
"relativePublishTimeDescription": "A String", # A string of formatted recent time, expressing the review time relative to the current time in a form appropriate for the language and country.
"review": "A String", # A reference representing this place review which may be used to look up this place review again.
},
],
},
"placeId": "A String", # This Place's resource name, in `places/{place_id}` format. Can be used to look up the Place.
"text": "A String", # Text of the chunk.
"title": "A String", # Title of the chunk.
"uri": "A String", # URI reference of the chunk.
},
"retrievedContext": { # Chunk from context retrieved by the retrieval tools. # Grounding chunk from context retrieved by the retrieval tools.
"documentName": "A String", # Output only. The full document name for the referenced Vertex AI Search document.
"ragChunk": { # A RagChunk includes the content of a chunk of a RagFile, and associated metadata. # Additional context for the RAG retrieval result. This is only populated when using the RAG retrieval tool.
"pageSpan": { # Represents where the chunk starts and ends in the document. # If populated, represents where the chunk starts and ends in the document.
"firstPage": 42, # Page where chunk starts in the document. Inclusive. 1-indexed.
"lastPage": 42, # Page where chunk ends in the document. Inclusive. 1-indexed.
},
"text": "A String", # The content of the chunk.
},
"text": "A String", # Text of the attribution.
"title": "A String", # Title of the attribution.
"uri": "A String", # URI reference of the attribution.
},
"web": { # Chunk from the web. # Grounding chunk from the web.
"domain": "A String", # Domain of the (original) URI.
"title": "A String", # Title of the chunk.
"uri": "A String", # URI reference of the chunk.
},
},
],
"groundingSupports": [ # Optional. List of grounding support.
{ # Grounding support.
"confidenceScores": [ # Confidence score of the support references. Ranges from 0 to 1. 1 is the most confident. For Gemini 2.0 and before, this list must have the same size as the grounding_chunk_indices. For Gemini 2.5 and after, this list will be empty and should be ignored.
3.14,
],
"groundingChunkIndices": [ # A list of indices (into 'grounding_chunk') specifying the citations associated with the claim. For instance [1,3,4] means that grounding_chunk[1], grounding_chunk[3], grounding_chunk[4] are the retrieved content attributed to the claim.
42,
],
"segment": { # Segment of the content. # Segment of the content this support belongs to.
"endIndex": 42, # Output only. End index in the given Part, measured in bytes. Offset from the start of the Part, exclusive, starting at zero.
"partIndex": 42, # Output only. The index of a Part object within its parent Content object.
"startIndex": 42, # Output only. Start index in the given Part, measured in bytes. Offset from the start of the Part, inclusive, starting at zero.
"text": "A String", # Output only. The text corresponding to the segment from the response.
},
},
],
"retrievalMetadata": { # Metadata related to retrieval in the grounding flow. # Optional. Output only. Retrieval metadata.
"googleSearchDynamicRetrievalScore": 3.14, # Optional. Score indicating how likely information from Google Search could help answer the prompt. The score is in the range `[0, 1]`, where 0 is the least likely and 1 is the most likely. This score is only populated when Google Search grounding and dynamic retrieval is enabled. It will be compared to the threshold to determine whether to trigger Google Search.
},
"searchEntryPoint": { # Google search entry point. # Optional. Google search entry for the following-up web searches.
"renderedContent": "A String", # Optional. Web content snippet that can be embedded in a web page or an app webview.
"sdkBlob": "A String", # Optional. Base64 encoded JSON representing array of tuple.
},
"webSearchQueries": [ # Optional. Web search queries for the following-up web search.
"A String",
],
},
"index": 42, # Output only. Index of the candidate.
"logprobsResult": { # Logprobs Result # Output only. Log-likelihood scores for the response tokens and top tokens
"chosenCandidates": [ # Length = total number of decoding steps. The chosen candidates may or may not be in top_candidates.
{ # Candidate for the logprobs token and score.
"logProbability": 3.14, # The candidate's log probability.
"token": "A String", # The candidate's token string value.
"tokenId": 42, # The candidate's token id value.
},
],
"topCandidates": [ # Length = total number of decoding steps.
{ # Candidates with top log probabilities at each decoding step.
"candidates": [ # Sorted by log probability in descending order.
{ # Candidate for the logprobs token and score.
"logProbability": 3.14, # The candidate's log probability.
"token": "A String", # The candidate's token string value.
"tokenId": 42, # The candidate's token id value.
},
],
},
],
},
"safetyRatings": [ # Output only. List of ratings for the safety of a response candidate. There is at most one rating per category.
{ # Safety rating corresponding to the generated content.
"blocked": True or False, # Output only. Indicates whether the content was filtered out because of this rating.
"category": "A String", # Output only. Harm category.
"overwrittenThreshold": "A String", # Output only. The overwritten threshold for the safety category of Gemini 2.0 image out. If minors are detected in the output image, the threshold of each safety category will be overwritten if user sets a lower threshold.
"probability": "A String", # Output only. Harm probability levels in the content.
"probabilityScore": 3.14, # Output only. Harm probability score.
"severity": "A String", # Output only. Harm severity levels in the content.
"severityScore": 3.14, # Output only. Harm severity score.
},
],
"urlContextMetadata": { # Metadata related to url context retrieval tool. # Output only. Metadata related to url context retrieval tool.
"urlMetadata": [ # Output only. List of url context.
{ # Context of a single URL retrieval.
"retrievedUrl": "A String", # Retrieved url by the tool.
"urlRetrievalStatus": "A String", # Status of the url retrieval.
},
],
},
},
],
"createTime": "A String", # Output only. Timestamp when the request is made to the server.
"modelVersion": "A String", # Output only. The model version used to generate the response.
"promptFeedback": { # Content filter results for a prompt sent in the request. # Output only. Content filter results for a prompt sent in the request. Note: Sent only in the first stream chunk. Only happens when no candidates were generated due to content violations.
"blockReason": "A String", # Output only. Blocked reason.
"blockReasonMessage": "A String", # Output only. A readable block reason message.
"safetyRatings": [ # Output only. Safety ratings.
{ # Safety rating corresponding to the generated content.
"blocked": True or False, # Output only. Indicates whether the content was filtered out because of this rating.
"category": "A String", # Output only. Harm category.
"overwrittenThreshold": "A String", # Output only. The overwritten threshold for the safety category of Gemini 2.0 image out. If minors are detected in the output image, the threshold of each safety category will be overwritten if user sets a lower threshold.
"probability": "A String", # Output only. Harm probability levels in the content.
"probabilityScore": 3.14, # Output only. Harm probability score.
"severity": "A String", # Output only. Harm severity levels in the content.
"severityScore": 3.14, # Output only. Harm severity score.
},
],
},
"responseId": "A String", # Output only. response_id is used to identify each response. It is the encoding of the event_id.
"usageMetadata": { # Usage metadata about response(s). # Usage metadata about the response(s).
"cacheTokensDetails": [ # Output only. List of modalities of the cached content in the request input.
{ # Represents token counting info for a single modality.
"modality": "A String", # The modality associated with this token count.
"tokenCount": 42, # Number of tokens.
},
],
"cachedContentTokenCount": 42, # Output only. Number of tokens in the cached part in the input (the cached content).
"candidatesTokenCount": 42, # Number of tokens in the response(s).
"candidatesTokensDetails": [ # Output only. List of modalities that were returned in the response.
{ # Represents token counting info for a single modality.
"modality": "A String", # The modality associated with this token count.
"tokenCount": 42, # Number of tokens.
},
],
"promptTokenCount": 42, # Number of tokens in the request. When `cached_content` is set, this is still the total effective prompt size meaning this includes the number of tokens in the cached content.
"promptTokensDetails": [ # Output only. List of modalities that were processed in the request input.
{ # Represents token counting info for a single modality.
"modality": "A String", # The modality associated with this token count.
"tokenCount": 42, # Number of tokens.
},
],
"thoughtsTokenCount": 42, # Output only. Number of tokens present in thoughts output.
"toolUsePromptTokenCount": 42, # Output only. Number of tokens present in tool-use prompt(s).
"toolUsePromptTokensDetails": [ # Output only. List of modalities that were processed for tool-use request inputs.
{ # Represents token counting info for a single modality.
"modality": "A String", # The modality associated with this token count.
"tokenCount": 42, # Number of tokens.
},
],
"totalTokenCount": 42, # Total token count for prompt, response candidates, and tool-use prompts (if present).
"trafficType": "A String", # Output only. Traffic type. This shows whether a request consumes Pay-As-You-Go or Provisioned Throughput quota.
},
}</pre>
</div>
<div class="method">
<code class="details" id="streamRawPredict">streamRawPredict(endpoint, body=None, x__xgafv=None)</code>
<pre>Perform a streaming online prediction with an arbitrary HTTP payload.
Args:
endpoint: string, Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for PredictionService.StreamRawPredict.
"httpBody": { # Message that represents an arbitrary HTTP body. It should only be used for payload formats that can't be represented as JSON, such as raw binary or an HTML page. This message can be used both in streaming and non-streaming API methods in the request as well as the response. It can be used as a top-level request field, which is convenient if one wants to extract parameters from either the URL or HTTP template into the request fields and also want access to the raw HTTP body. Example: message GetResourceRequest { // A unique request id. string request_id = 1; // The raw HTTP body is bound to this field. google.api.HttpBody http_body = 2; } service ResourceService { rpc GetResource(GetResourceRequest) returns (google.api.HttpBody); rpc UpdateResource(google.api.HttpBody) returns (google.protobuf.Empty); } Example with streaming methods: service CaldavService { rpc GetCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); rpc UpdateCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); } Use of this type only changes how the request and response bodies are handled, all other features will continue to work unchanged. # The prediction input. Supports HTTP headers and arbitrary data payload.
"contentType": "A String", # The HTTP Content-Type header value specifying the content type of the body.
"data": "A String", # The HTTP request/response body as raw binary.
"extensions": [ # Application specific response metadata. Must be set in the first response for streaming APIs.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Message that represents an arbitrary HTTP body. It should only be used for payload formats that can't be represented as JSON, such as raw binary or an HTML page. This message can be used both in streaming and non-streaming API methods in the request as well as the response. It can be used as a top-level request field, which is convenient if one wants to extract parameters from either the URL or HTTP template into the request fields and also want access to the raw HTTP body. Example: message GetResourceRequest { // A unique request id. string request_id = 1; // The raw HTTP body is bound to this field. google.api.HttpBody http_body = 2; } service ResourceService { rpc GetResource(GetResourceRequest) returns (google.api.HttpBody); rpc UpdateResource(google.api.HttpBody) returns (google.protobuf.Empty); } Example with streaming methods: service CaldavService { rpc GetCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); rpc UpdateCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); } Use of this type only changes how the request and response bodies are handled, all other features will continue to work unchanged.
"contentType": "A String", # The HTTP Content-Type header value specifying the content type of the body.
"data": "A String", # The HTTP request/response body as raw binary.
"extensions": [ # Application specific response metadata. Must be set in the first response for streaming APIs.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
}</pre>
</div>
<div class="method">
<code class="details" id="undeployModel">undeployModel(endpoint, body=None, x__xgafv=None)</code>
<pre>Undeploys a Model from an Endpoint, removing a DeployedModel from it, and freeing all resources it's using.
Args:
endpoint: string, Required. The name of the Endpoint resource from which to undeploy a Model. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` (required)
body: object, The request body.
The object takes the form of:
{ # Request message for EndpointService.UndeployModel.
"deployedModelId": "A String", # Required. The ID of the DeployedModel to be undeployed from the Endpoint.
"trafficSplit": { # If this field is provided, then the Endpoint's traffic_split will be overwritten with it. If last DeployedModel is being undeployed from the Endpoint, the [Endpoint.traffic_split] will always end up empty when this call returns. A DeployedModel will be successfully undeployed only if it doesn't have any traffic assigned to it when this method executes, or if this field unassigns any traffic to it.
"a_key": 42,
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
"done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
"response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
}</pre>
</div>
<div class="method">
<code class="details" id="update">update(name, body=None, x__xgafv=None)</code>
<pre>Updates an Endpoint with a long running operation.
Args:
name: string, Output only. The resource name of the Endpoint. (required)
body: object, The request body.
The object takes the form of:
{ # Request message for EndpointService.UpdateEndpointLongRunning.
"endpoint": { # Models are deployed into it, and afterwards Endpoint is called to obtain predictions and explanations. # Required. The Endpoint which replaces the resource on the server. Currently we only support updating the `client_connection_config` field, all the other fields' update will be blocked.
"clientConnectionConfig": { # Configurations (e.g. inference timeout) that are applied on your endpoints. # Configurations that are applied to the endpoint for online prediction.
"inferenceTimeout": "A String", # Customizable online prediction request timeout.
},
"createTime": "A String", # Output only. Timestamp when this Endpoint was created.
"dedicatedEndpointDns": "A String", # Output only. DNS of the dedicated endpoint. Will only be populated if dedicated_endpoint_enabled is true. Depending on the features enabled, uid might be a random number or a string. For example, if fast_tryout is enabled, uid will be fasttryout. Format: `https://{endpoint_id}.{region}-{uid}.prediction.vertexai.goog`.
"dedicatedEndpointEnabled": True or False, # If true, the endpoint will be exposed through a dedicated DNS [Endpoint.dedicated_endpoint_dns]. Your request to the dedicated DNS will be isolated from other users' traffic and will have better performance and reliability. Note: Once you enabled dedicated endpoint, you won't be able to send request to the shared DNS {region}-aiplatform.googleapis.com. The limitation will be removed soon.
"deployedModels": [ # Output only. The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.
{ # A deployment of a Model. Endpoints contain one or more DeployedModels.
"automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration.
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number.
"minReplicaCount": 42, # Immutable. The minimum number of replicas that will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
},
"checkpointId": "A String", # The checkpoint id of the model.
"createTime": "A String", # Output only. Timestamp when the DeployedModel was created.
"dedicatedResources": { # A description of resources that are dedicated to a DeployedModel or DeployedIndex, and that need a higher degree of manual configuration. # A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
"autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
{ # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
"metricName": "A String", # Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization` * `aiplatform.googleapis.com/prediction/online/request_count`
"target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
},
],
"machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine being used.
"acceleratorCount": 42, # The number of accelerators to attach to the machine.
"acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
"machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
"reservationAffinity": { # A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity. # Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
"key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
"reservationAffinityType": "A String", # Required. Specifies the reservation affinity type.
"values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation or reservation block.
"A String",
],
},
"tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
},
"maxReplicaCount": 42, # Immutable. The maximum number of replicas that may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
"minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas that will be always deployed on. This value must be greater than or equal to 1. If traffic increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
"requiredReplicaCount": 42, # Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial deployment/mutation is desired. If set, the deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
"spot": True or False, # Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
},
"disableContainerLogging": True or False, # For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true.
"disableExplanations": True or False, # If true, deploy the model without explainable feature, regardless the existence of Model.explanation_spec or explanation_spec.
"displayName": "A String", # The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
"enableAccessLogging": True or False, # If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option.
"explanationSpec": { # Specification of Model explanation. # Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
"metadata": { # Metadata describing the Model's input and output for explanation. # Optional. Metadata describing the Model's input and output for explanation.
"featureAttributionsSchemaUri": "A String", # Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.
"inputs": { # Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match with the key in instance.
"a_key": { # Metadata of the input of a feature. Fields other than InputMetadata.input_baselines are applicable only for Models that are using Vertex AI-provided images for Tensorflow.
"denseShapeTensorName": "A String", # Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"encodedBaselines": [ # A list of baselines for the encoded tensor. The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor.
"",
],
"encodedTensorName": "A String", # Encoded tensor is a transformation of the input tensor. Must be provided if choosing Integrated Gradients attribution or XRAI attribution and the input tensor is not differentiable. An encoded tensor is generated if the input tensor is encoded by a lookup table.
"encoding": "A String", # Defines how the feature is encoded into the input tensor. Defaults to IDENTITY.
"featureValueDomain": { # Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained. # The domain details of the input feature value. Like min/max, original mean or standard deviation if normalized.
"maxValue": 3.14, # The maximum permissible value for this feature.
"minValue": 3.14, # The minimum permissible value for this feature.
"originalMean": 3.14, # If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization.
"originalStddev": 3.14, # If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization.
},
"groupName": "A String", # Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in Attribution.feature_attributions, keyed by the group name.
"indexFeatureMapping": [ # A list of feature names for each index in the input tensor. Required when the input InputMetadata.encoding is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
"A String",
],
"indicesTensorName": "A String", # Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
"inputBaselines": [ # Baseline inputs for this feature. If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in Attribution.feature_attributions. For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor. For custom images, the element of the baselines must be in the same format as the feature's input in the instance[]. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
"",
],
"inputTensorName": "A String", # Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow.
"modality": "A String", # Modality of the feature. Valid values are: numeric, image. Defaults to numeric.
"visualization": { # Visualization configurations for image explanation. # Visualization configurations for image explanation.
"clipPercentLowerbound": 3.14, # Excludes attributions below the specified percentile, from the highlighted areas. Defaults to 62.
"clipPercentUpperbound": 3.14, # Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
"colorMap": "A String", # The color scheme used for the highlighted areas. Defaults to PINK_GREEN for Integrated Gradients attribution, which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for XRAI attribution, which highlights the most influential regions in yellow and the least influential in blue.
"overlayType": "A String", # How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
"polarity": "A String", # Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE.
"type": "A String", # Type of the image visualization. Only applicable to Integrated Gradients attribution. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
},
},
},
"latentSpaceSource": "A String", # Name of the source to generate embeddings for example based explanations.
"outputs": { # Required. Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
"a_key": { # Metadata of the prediction output to be explained.
"displayNameMappingKey": "A String", # Specify a field name in the prediction to look for the display name. Use this if the prediction contains the display names for the outputs. The display names in the prediction must have the same shape of the outputs, so that it can be located by Attribution.output_index for a specific output.
"indexDisplayNameMapping": "", # Static mapping between the index and display name. Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sort the outputs by their values. The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The Attribution.output_display_name is populated by locating in the mapping with Attribution.output_index.
"outputTensorName": "A String", # Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow.
},
},
},
"parameters": { # Parameters to configure explaining for Model's predictions. # Required. Parameters that configure explaining of the Model's predictions.
"examples": { # Example-based explainability that returns the nearest neighbors from the provided dataset. # Example-based explanations that returns the nearest neighbors from the provided dataset.
"exampleGcsSource": { # The Cloud Storage input instances. # The Cloud Storage input instances.
"dataFormat": "A String", # The format in which instances are given, if not specified, assume it's JSONL format. Currently only JSONL format is supported.
"gcsSource": { # The Google Cloud Storage location for the input content. # The Cloud Storage location for the input instances.
"uris": [ # Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/wildcards.
"A String",
],
},
},
"nearestNeighborSearchConfig": "", # The full configuration for the generated index, the semantics are the same as metadata and should match [NearestNeighborSearchConfig](https://cloud.google.com/vertex-ai/docs/explainable-ai/configuring-explanations-example-based#nearest-neighbor-search-config).
"neighborCount": 42, # The number of neighbors to return when querying for examples.
"presets": { # Preset configuration for example-based explanations # Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
"modality": "A String", # The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
"query": "A String", # Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to `PRECISE`.
},
},
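# A minimal sketch of an example-based config, assuming a hypothetical Cloud
# Storage bucket; the dataFormat, neighborCount, and presets values follow the
# field descriptions above:
#
#   "examples": {
#     "exampleGcsSource": {
#       "dataFormat": "JSONL",
#       "gcsSource": {"uris": ["gs://my-bucket/instances/*.jsonl"]},
#     },
#     "neighborCount": 10,
#     "presets": {"modality": "IMAGE", "query": "PRECISE"},
#   },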
"integratedGradientsAttribution": { # An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365 # An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
},
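# A hedged sketch of an Integrated Gradients config built only from the defaults
# and suggested values documented above (stepCount 50, SmoothGrad noiseSigma 0.1,
# noisySampleCount 3):
#
#   "integratedGradientsAttribution": {
#     "stepCount": 50,
#     "smoothGradConfig": {"noiseSigma": 0.1, "noisySampleCount": 3},
#   },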
"outputIndices": [ # If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape of the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e,g, multi-class Models that predict multiple classes).
"",
],
"sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. # An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
"pathCount": 42, # Required. The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
},
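# Sampled Shapley does not require a differentiable model, so it is the usual
# choice for tabular models; a minimal sketch with an assumed pathCount of 10
# (any value in [1, 50] is valid):
#
#   "sampledShapleyAttribution": {"pathCount": 10},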
"topK": 42, # If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predicts more than one outputs (e,g, multi-class Models). When set to -1, returns explanations for all outputs.
"xraiAttribution": { # An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Supported only by image Models. # An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
"blurBaselineConfig": { # Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383 # Config for XRAI with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
"maxBlurSigma": 3.14, # The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
},
"smoothGradConfig": { # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf # Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
"featureNoiseSigma": { # Noise sigma by features. Noise sigma represents the standard deviation of the gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients. # This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
"noiseSigma": [ # Noise sigma per feature. No noise is added to features that are not set.
{ # Noise sigma for a single feature.
"name": "A String", # The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
"sigma": 3.14, # This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
},
],
},
"noiseSigma": 3.14, # This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about [normalization](https://developers.google.com/machine-learning/data-prep/transform/normalization). For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
"noisySampleCount": 42, # The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
},
"stepCount": 42, # Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is met within the desired error range. Valid range of its value is [1, 100], inclusively.
},
},
},
"fasterDeploymentConfig": { # Configuration for faster model deployment. # Configuration for faster model deployment.
"fastTryoutEnabled": True or False, # If true, enable fast tryout feature for this deployed model.
},
"gdcConnectedModel": "A String", # GDC pretrained / Gemini model name. The model name is a plain model name, e.g. gemini-1.5-flash-002.
"id": "A String", # Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID. This value should be 1-10 characters, and valid characters are `/[0-9]/`.
"model": "A String", # The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain version id or version alias to specify the version. Example: `projects/{project}/locations/{location}/models/{model}@2` or `projects/{project}/locations/{location}/models/{model}@golden` if no version is specified, the default version will be deployed.
"modelVersionId": "A String", # Output only. The version ID of the model that is deployed.
"privateEndpoints": { # PrivateEndpoints proto is used to provide paths for users to send requests privately. To send request via private service access, use predict_http_uri, explain_http_uri or health_http_uri. To send request via private service connect, use service_attachment. # Output only. Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
"explainHttpUri": "A String", # Output only. Http(s) path to send explain requests.
"healthHttpUri": "A String", # Output only. Http(s) path to send health check requests.
"predictHttpUri": "A String", # Output only. Http(s) path to send prediction requests.
"serviceAttachment": "A String", # Output only. The name of the service attachment resource. Populated if private service connect is enabled.
},
"serviceAccount": "A String", # The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the `iam.serviceAccounts.actAs` permission on this service account.
"sharedResources": "A String", # The resource name of the shared DeploymentResourcePool to deploy on. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}`
"speculativeDecodingSpec": { # Configuration for Speculative Decoding. # Optional. Spec for configuring speculative decoding.
"draftModelSpeculation": { # Draft model speculation works by using the smaller model to generate candidate tokens for speculative decoding. # draft model speculation.
"draftModel": "A String", # Required. The resource name of the draft model.
},
"ngramSpeculation": { # N-Gram speculation works by trying to find matching tokens in the previous prompt sequence and use those as speculation for generating new tokens. # N-Gram speculation.
"ngramSize": 42, # The number of last N input tokens used as ngram to search/match against the previous prompt sequence. This is equal to the N in N-Gram. The default value is 3 if not specified.
},
"speculativeTokenCount": 42, # The number of speculative tokens to generate at each step.
},
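# A hedged sketch of a speculative decoding config using N-Gram speculation with
# the documented default ngramSize of 3 and an assumed speculativeTokenCount of 5;
# draftModelSpeculation and ngramSpeculation appear to be alternative strategies,
# so only one is set here:
#
#   "speculativeDecodingSpec": {
#     "ngramSpeculation": {"ngramSize": 3},
#     "speculativeTokenCount": 5,
#   },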
"status": { # Runtime status of the deployed model. # Output only. Runtime status of the deployed model.
"availableReplicaCount": 42, # Output only. The number of available replicas of the deployed model.
"lastUpdateTime": "A String", # Output only. The time at which the status was last updated.
"message": "A String", # Output only. The latest deployed model's status message (if any).
},
"systemLabels": { # System labels to apply to Model Garden deployments. System labels are managed by Google for internal use only.
"a_key": "A String",
},
},
],
"description": "A String", # The description of the Endpoint.
"displayName": "A String", # Required. The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
"enablePrivateServiceConnect": True or False, # Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
"encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
"kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
},
"etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
"gdcConfig": { # Google Distributed Cloud (GDC) config. # Configures the Google Distributed Cloud (GDC) environment for online prediction. Only set this field when the Endpoint is to be deployed in a GDC environment.
"zone": "A String", # GDC zone. A cluster will be designated for the Vertex AI workload in this zone.
},
"genAiAdvancedFeaturesConfig": { # Configuration for GenAiAdvancedFeatures. # Optional. Configuration for GenAiAdvancedFeatures. If the endpoint is serving GenAI models, advanced features like native RAG integration can be configured. Currently, only Model Garden models are supported.
"ragConfig": { # Configuration for Retrieval Augmented Generation feature. # Configuration for Retrieval Augmented Generation feature.
"enableRag": True or False, # If true, enable Retrieval Augmented Generation in ChatCompletion request. Once enabled, the endpoint will be identified as GenAI endpoint and Arthedain router will be used.
},
},
"labels": { # The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
"a_key": "A String",
},
"modelDeploymentMonitoringJob": "A String", # Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: `projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}`
"name": "A String", # Output only. The resource name of the Endpoint.
"network": "A String", # Optional. The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks) to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`. Where `{project}` is a project number, as in `12345`, and `{network}` is network name.
"predictRequestResponseLoggingConfig": { # Configuration for logging request-response to a BigQuery table. # Configures the request-response logging for online prediction.
"bigqueryDestination": { # The BigQuery location for the output content. # BigQuery table for logging. If only given a project, a new dataset will be created with name `logging__` where will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with name `request_response_logging`
"outputUri": "A String", # Required. BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table is created. When the full table reference is specified, the Dataset must exist and table must not exist. Accepted forms: * BigQuery path. For example: `bq://projectId` or `bq://projectId.bqDatasetId` or `bq://projectId.bqDatasetId.bqTableId`.
},
"enabled": True or False, # If logging is enabled or not.
"samplingRate": 3.14, # Percentage of requests to be logged, expressed as a fraction in range(0,1].
},
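# A minimal sketch of request-response logging that samples 10% of requests into
# an assumed BigQuery project (only the `bq://projectId` form is shown here; the
# dataset and table are then created as described above):
#
#   "predictRequestResponseLoggingConfig": {
#     "enabled": True,
#     "samplingRate": 0.1,
#     "bigqueryDestination": {"outputUri": "bq://my-project"},
#   },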
"privateServiceConnectConfig": { # Represents configuration for private service connect. # Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
"enablePrivateServiceConnect": True or False, # Required. If true, expose the IndexEndpoint via private service connect.
"projectAllowlist": [ # A list of Projects from which the forwarding rule will target the service attachment.
"A String",
],
"pscAutomationConfigs": [ # Optional. List of projects and networks where the PSC endpoints will be created. This field is used by Online Inference(Prediction) only.
{ # PSC config that is used to automatically create PSC endpoints in the user projects.
"errorMessage": "A String", # Output only. Error message if the PSC service automation failed.
"forwardingRule": "A String", # Output only. Forwarding rule created by the PSC service automation.
"ipAddress": "A String", # Output only. IP address rule created by the PSC service automation.
"network": "A String", # Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/get): `projects/{project}/global/networks/{network}`.
"projectId": "A String", # Required. Project id used to create forwarding rule.
"state": "A String", # Output only. The state of the PSC service automation.
},
],
"serviceAttachment": "A String", # Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.
},
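# A hedged sketch of a private service connect config that exposes the endpoint
# to one assumed consumer project:
#
#   "privateServiceConnectConfig": {
#     "enablePrivateServiceConnect": True,
#     "projectAllowlist": ["my-consumer-project"],
#   },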
"satisfiesPzi": True or False, # Output only. Reserved for future use.
"satisfiesPzs": True or False, # Output only. Reserved for future use.
"trafficSplit": { # A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or map must be empty if the Endpoint is to not accept any traffic at a moment.
"a_key": 42,
},
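# Because the values must sum to 100, a 90/10 canary split between two deployed
# models (hypothetical ten-digit IDs, matching the documented ID format) could
# look like:
#
#   "trafficSplit": {"1234567890": 90, "2345678901": 10},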
"updateTime": "A String", # Output only. Timestamp when this Endpoint was last updated.
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
"done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
"error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
"metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
"response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
}
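Putting the pieces together, a call to this method follows the standard
google-api-python-client pattern: build the service, send a minimal Endpoint body,
and poll the returned long-running operation. A hedged sketch, assuming this page
documents the projects.locations.endpoints.create method (the request body is an
Endpoint and the return value is an Operation); the API version, project, region,
and display name below are placeholders:

    import time
    from googleapiclient.discovery import build

    # Build the Vertex AI client (uses Application Default Credentials);
    # the version string is assumed -- match the version this page documents.
    service = build("aiplatform", "v1beta1")

    parent = "projects/my-project/locations/us-central1"  # hypothetical

    # displayName is the only field marked Required in the body schema above.
    body = {"displayName": "my-endpoint"}

    # The call returns a long-running Operation, not the Endpoint itself.
    op = (
        service.projects()
        .locations()
        .endpoints()
        .create(parent=parent, body=body)
        .execute()
    )

    # Poll the Operation until done; on success, `response` holds the Endpoint.
    while not op.get("done", False):
        time.sleep(10)
        op = (
            service.projects()
            .locations()
            .operations()
            .get(name=op["name"])
            .execute()
        )

    if "error" in op:
        raise RuntimeError(op["error"].get("message", "endpoint creation failed"))
    endpoint = op.get("response", {})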