/*
* SPDX-FileCopyrightText: Copyright (c) 2013-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
//
// This file provides the interface that RM exposes to UVM.
//
#ifndef _NV_UVM_INTERFACE_H_
#define _NV_UVM_INTERFACE_H_
// Forward references, to break circular header file dependencies:
struct UvmOpsUvmEvents;
#if defined(NVIDIA_UVM_ENABLED)
// We are in the UVM build system, for a Linux target.
#include "uvm_linux.h"
#else
// We are in the RM build system, for a Linux target:
#include "nv-linux.h"
#endif // NVIDIA_UVM_ENABLED
#include "nvgputypes.h"
#include "nvstatus.h"
#include "nv_uvm_types.h"
// Define the type here as it's Linux specific, used only by the Linux specific
// nvUvmInterfaceRegisterGpu() API.
typedef struct
{
struct pci_dev *pci_dev;
// DMA addressable range of the device, mirrors fields in nv_state_t.
NvU64 dma_addressable_start;
NvU64 dma_addressable_limit;
} UvmGpuPlatformInfo;
/*******************************************************************************
nvUvmInterfaceRegisterGpu
Registers the GPU with the provided UUID for use. A GPU must be registered
before its UUID can be used with any other API. This call is ref-counted so
every nvUvmInterfaceRegisterGpu must be paired with a corresponding
nvUvmInterfaceUnregisterGpu.
You don't need to call nvUvmInterfaceSessionCreate before calling this.
Error codes:
NV_ERR_GPU_UUID_NOT_FOUND
NV_ERR_NO_MEMORY
NV_ERR_GENERIC
*/
NV_STATUS nvUvmInterfaceRegisterGpu(const NvProcessorUuid *gpuUuid, UvmGpuPlatformInfo *gpuInfo);
/*******************************************************************************
nvUvmInterfaceUnregisterGpu
Unregisters the GPU with the provided UUID. This drops the ref count from
nvUvmInterfaceRegisterGpu. Once the reference count goes to 0 the device may
no longer be accessible until the next nvUvmInterfaceRegisterGpu call. No
automatic resource freeing is performed, so only make the last unregister
call after destroying all your allocations associated with that UUID (such
as those from nvUvmInterfaceAddressSpaceCreate).
If the UUID is not found, no operation is performed.
*/
void nvUvmInterfaceUnregisterGpu(const NvProcessorUuid *gpuUuid);
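/*
    Example (illustrative sketch, not part of the RM interface): typical pairing
    of nvUvmInterfaceRegisterGpu and nvUvmInterfaceUnregisterGpu. The helper name
    and error handling are assumptions for illustration only.

    static NV_STATUS exampleRegisterAndUnregisterGpu(const NvProcessorUuid *gpuUuid)
    {
        UvmGpuPlatformInfo platformInfo;
        NV_STATUS status;

        // Take a reference on the GPU; this must precede any other use of its UUID.
        status = nvUvmInterfaceRegisterGpu(gpuUuid, &platformInfo);
        if (status != NV_OK)
            return status;

        // ... use the GPU via other nvUvmInterface* APIs ...

        // Drop the reference taken above once all allocations for this UUID are gone.
        nvUvmInterfaceUnregisterGpu(gpuUuid);
        return NV_OK;
    }
*/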
/*******************************************************************************
nvUvmInterfaceSessionCreate
Creates a session object. All allocations are tied to the session.
The platformInfo parameter is filled by the callee with miscellaneous system
information. Refer to the UvmPlatformInfo struct for details.
Error codes:
NV_ERR_GENERIC
NV_ERR_NO_MEMORY
*/
NV_STATUS nvUvmInterfaceSessionCreate(uvmGpuSessionHandle *session,
UvmPlatformInfo *platformInfo);
/*******************************************************************************
nvUvmInterfaceSessionDestroy
Destroys a session object. All allocations tied to the session will
be destroyed.
Error codes:
NV_ERR_GENERIC
NV_ERR_NO_MEMORY
*/
NV_STATUS nvUvmInterfaceSessionDestroy(uvmGpuSessionHandle session);
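/*
    Example (illustrative sketch, not part of the RM interface): session
    lifecycle. The helper name is an assumption for illustration.

    static NV_STATUS exampleSessionLifecycle(void)
    {
        uvmGpuSessionHandle session;
        UvmPlatformInfo platformInfo;
        NV_STATUS status;

        // Create the session; platformInfo is filled in by the callee.
        status = nvUvmInterfaceSessionCreate(&session, &platformInfo);
        if (status != NV_OK)
            return status;

        // ... create devices, VA spaces, and other allocations under it ...

        // Destroying the session tears down the allocations tied to it.
        return nvUvmInterfaceSessionDestroy(session);
    }
*/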
/*******************************************************************************
nvUvmInterfaceDeviceCreate
Creates a device object under the given session for the GPU with the given
UUID. Also creates a partition object for the device iff bCreateSmcPartition
is true and pGpuInfo->smcEnabled is true. pGpuInfo->smcUserClientInfo will
be used to determine the SMC partition in this case. A device handle is
returned in the device output parameter.
Error codes:
NV_ERR_GENERIC
NV_ERR_NO_MEMORY
NV_ERR_INVALID_ARGUMENT
NV_ERR_INSUFFICIENT_RESOURCES
NV_ERR_OBJECT_NOT_FOUND
*/
NV_STATUS nvUvmInterfaceDeviceCreate(uvmGpuSessionHandle session,
const UvmGpuInfo *pGpuInfo,
const NvProcessorUuid *gpuUuid,
uvmGpuDeviceHandle *device,
NvBool bCreateSmcPartition);
/*******************************************************************************
nvUvmInterfaceDeviceDestroy
Destroys the device object for the given handle. The handle must have been
obtained in a prior call to nvUvmInterfaceDeviceCreate.
*/
void nvUvmInterfaceDeviceDestroy(uvmGpuDeviceHandle device);
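/*
    Example (illustrative sketch, not part of the RM interface): creating and
    destroying a device under a session. The helper name and the decision not to
    target an SMC partition (bCreateSmcPartition == NV_FALSE) are assumptions for
    illustration.

    static NV_STATUS exampleDeviceLifecycle(uvmGpuSessionHandle session,
                                            const NvProcessorUuid *gpuUuid,
                                            const UvmGpuClientInfo *pGpuClientInfo)
    {
        UvmGpuInfo gpuInfo;
        uvmGpuDeviceHandle device;
        NV_STATUS status;

        // Query GPU info first (nvUvmInterfaceGetGpuInfo is declared later in
        // this header); it also carries the SMC details when partitions are used.
        status = nvUvmInterfaceGetGpuInfo(gpuUuid, pGpuClientInfo, &gpuInfo);
        if (status != NV_OK)
            return status;

        // No SMC partition object is requested in this sketch.
        status = nvUvmInterfaceDeviceCreate(session, &gpuInfo, gpuUuid, &device, NV_FALSE);
        if (status != NV_OK)
            return status;

        // ... use the device ...

        nvUvmInterfaceDeviceDestroy(device);
        return NV_OK;
    }
*/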
/*******************************************************************************
nvUvmInterfaceAddressSpaceCreate
This function creates an address space.
This virtual address space is created on the GPU specified
by device.
Error codes:
NV_ERR_GENERIC
NV_ERR_NO_MEMORY
*/
NV_STATUS nvUvmInterfaceAddressSpaceCreate(uvmGpuDeviceHandle device,
unsigned long long vaBase,
unsigned long long vaSize,
uvmGpuAddressSpaceHandle *vaSpace,
UvmGpuAddressSpaceInfo *vaSpaceInfo);
/*******************************************************************************
nvUvmInterfaceDupAddressSpace
This function will dup the given vaspace from the user's client to the
kernel client that was created as an ops session.
By duping the vaspace it is guaranteed that RM will refcount the vaspace object.
Error codes:
NV_ERR_GENERIC
*/
NV_STATUS nvUvmInterfaceDupAddressSpace(uvmGpuDeviceHandle device,
NvHandle hUserClient,
NvHandle hUserVASpace,
uvmGpuAddressSpaceHandle *vaSpace,
UvmGpuAddressSpaceInfo *vaSpaceInfo);
/*******************************************************************************
nvUvmInterfaceAddressSpaceDestroy
Destroys an address space that was previously created via
nvUvmInterfaceAddressSpaceCreate.
*/
void nvUvmInterfaceAddressSpaceDestroy(uvmGpuAddressSpaceHandle vaSpace);
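/*
    Example (illustrative sketch, not part of the RM interface): creating and
    destroying a GPU VA space. The base/size values and the helper name are
    arbitrary assumptions for illustration.

    static NV_STATUS exampleVaSpaceLifecycle(uvmGpuDeviceHandle device)
    {
        uvmGpuAddressSpaceHandle vaSpace;
        UvmGpuAddressSpaceInfo vaSpaceInfo;
        NV_STATUS status;

        // Hypothetical VA range: 128MB base, 512GB size.
        status = nvUvmInterfaceAddressSpaceCreate(device,
                                                  128ULL * 1024 * 1024,
                                                  512ULL * 1024 * 1024 * 1024,
                                                  &vaSpace,
                                                  &vaSpaceInfo);
        if (status != NV_OK)
            return status;

        // ... allocate memory and channels under the VA space ...

        nvUvmInterfaceAddressSpaceDestroy(vaSpace);
        return NV_OK;
    }
*/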
/*******************************************************************************
nvUvmInterfaceMemoryAllocFB
This function will allocate video memory and provide a mapped GPU
virtual address to this allocation. It also returns the GPU physical
offset if contiguous allocations are requested.
This function will allocate a minimum page size if the length provided is 0
and will return a unique GPU virtual address.
The default page size will be the small page size (as returned by query
caps). The physical alignment will also be enforced to the small page
size (64K/128K).
Arguments:
vaSpace[IN] - Pointer to vaSpace object
length [IN] - Length of the allocation
gpuPointer[OUT] - GPU VA mapping
allocInfo[IN/OUT] - Pointer to allocation info structure which
contains below given fields
allocInfo Members:
gpuPhysOffset[OUT] - Physical offset of allocation returned only
if contiguous allocation is requested.
pageSize[IN] - Override the default page size (see above).
alignment[IN] - gpuPointer GPU VA alignment. 0 means 4KB
alignment.
bContiguousPhysAlloc[IN] - Flag to request contiguous allocation. Default
will follow the vidHeapControl default policy.
bMemGrowsDown[IN]
bPersistentVidmem[IN] - Allocate persistent vidmem.
hPhysHandle[IN/OUT] - The handle will be used in allocation if provided.
If not provided, the allocator will return the
handle it eventually used.
Error codes:
NV_ERR_INVALID_ARGUMENT
NV_ERR_NO_MEMORY - Not enough physical memory to service
allocation request with provided constraints
NV_ERR_INSUFFICIENT_RESOURCES - Not enough available resources to satisfy allocation request
NV_ERR_INVALID_OWNER - Target memory not accessible by specified owner
NV_ERR_NOT_SUPPORTED - Operation not supported on broken FB
*/
NV_STATUS nvUvmInterfaceMemoryAllocFB(uvmGpuAddressSpaceHandle vaSpace,
NvLength length,
UvmGpuPointer * gpuPointer,
UvmGpuAllocInfo * allocInfo);
/*******************************************************************************
nvUvmInterfaceMemoryAllocSys
This function will allocate system memory and provide a mapped GPU
virtual address to this allocation.
This function will allocate a minimum page size if the length provided is 0
and will return a unique GPU virtual address.
The default page size will be the small page size (as returned by query caps)
Arguments:
vaSpace[IN] - Pointer to vaSpace object
length [IN] - Length of the allocation
gpuPointer[OUT] - GPU VA mapping
allocInfo[IN/OUT] - Pointer to allocation info structure which
contains below given fields
allocInfo Members:
gpuPhysOffset[OUT] - Physical offset of allocation returned only
if contiguous allocation is requested.
pageSize[IN] - Override the default page size (see above).
alignment[IN] - gpuPointer GPU VA alignment. 0 means 4KB
alignment.
bContiguousPhysAlloc[IN] - Flag to request contiguous allocation. Default
will follow the vidHeapControl default policy.
bMemGrowsDown[IN]
bPersistentVidmem[IN] - Allocate persistent vidmem.
hPhysHandle[IN/OUT] - The handle will be used in allocation if provided.
If not provided, the allocator will return the
handle it eventually used.
Error codes:
NV_ERR_INVALID_ARGUMENT
NV_ERR_NO_MEMORY - Not enough physical memory to service
allocation request with provided constraints
NV_ERR_INSUFFICIENT_RESOURCES - Not enough available resources to satisfy allocation request
NV_ERR_INVALID_OWNER - Target memory not accessible by specified owner
NV_ERR_NOT_SUPPORTED - Operation not supported
*/
NV_STATUS nvUvmInterfaceMemoryAllocSys(uvmGpuAddressSpaceHandle vaSpace,
NvLength length,
UvmGpuPointer * gpuPointer,
UvmGpuAllocInfo * allocInfo);
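/*
    Example (illustrative sketch, not part of the RM interface): allocating
    contiguous vidmem and freeing it again. Zero-initializing UvmGpuAllocInfo
    is assumed to select the documented defaults; only fields documented above
    are set explicitly, and nvUvmInterfaceMemoryFree is declared further below.

    static NV_STATUS exampleAllocAndFreeVidmem(uvmGpuAddressSpaceHandle vaSpace)
    {
        UvmGpuAllocInfo allocInfo;
        UvmGpuPointer gpuPointer;
        NV_STATUS status;

        memset(&allocInfo, 0, sizeof(allocInfo));
        allocInfo.bContiguousPhysAlloc = NV_TRUE; // request a contiguous allocation

        // 2MB allocation; gpuPhysOffset is returned because the allocation
        // is contiguous.
        status = nvUvmInterfaceMemoryAllocFB(vaSpace, 2 * 1024 * 1024,
                                             &gpuPointer, &allocInfo);
        if (status != NV_OK)
            return status;

        // ... use gpuPointer and allocInfo.gpuPhysOffset ...

        nvUvmInterfaceMemoryFree(vaSpace, gpuPointer);
        return NV_OK;
    }
*/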
/*******************************************************************************
nvUvmInterfaceGetP2PCaps
Obtain the P2P capabilities between two devices.
Arguments:
device1[IN] - Device handle of the first GPU (required)
device2[IN] - Device handle of the second GPU (required)
p2pCapsParams [OUT] - P2P capabilities between the two GPUs
Error codes:
NV_ERR_INVALID_ARGUMENT
NV_ERR_GENERIC:
Unexpected error. We try hard to avoid returning this error
code, because it is not very informative.
*/
NV_STATUS nvUvmInterfaceGetP2PCaps(uvmGpuDeviceHandle device1,
uvmGpuDeviceHandle device2,
UvmGpuP2PCapsParams * p2pCapsParams);
/*******************************************************************************
nvUvmInterfaceGetPmaObject
This function will return a pointer to the PMA object for the given GPU. This
PMA object handle is required for page allocation.
Arguments:
device [IN] - Device handle allocated in
nvUvmInterfaceDeviceCreate
pPma [OUT] - Pointer to PMA object
pPmaPubStats [OUT] - Pointer to UvmPmaStatistics object
Error codes:
NV_ERR_NOT_SUPPORTED - Operation not supported on broken FB
NV_ERR_GENERIC:
Unexpected error. We try hard to avoid returning this error
code, because it is not very informative.
*/
NV_STATUS nvUvmInterfaceGetPmaObject(uvmGpuDeviceHandle device,
void **pPma,
const UvmPmaStatistics **pPmaPubStats);
// Mirrors pmaEvictPagesCb_t, see its documentation in pma.h.
typedef NV_STATUS (*uvmPmaEvictPagesCallback)(void *callbackData,
NvU64 pageSize,
NvU64 *pPages,
NvU32 count,
NvU64 physBegin,
NvU64 physEnd,
UVM_PMA_GPU_MEMORY_TYPE mem_type);
// Mirrors pmaEvictRangeCb_t, see its documentation in pma.h.
typedef NV_STATUS (*uvmPmaEvictRangeCallback)(void *callbackData,
NvU64 physBegin,
NvU64 physEnd,
UVM_PMA_GPU_MEMORY_TYPE mem_type);
/*******************************************************************************
nvUvmInterfacePmaRegisterEvictionCallbacks
Simple wrapper for pmaRegisterEvictionCb(), see its documentation in pma.h.
*/
NV_STATUS nvUvmInterfacePmaRegisterEvictionCallbacks(void *pPma,
uvmPmaEvictPagesCallback evictPages,
uvmPmaEvictRangeCallback evictRange,
void *callbackData);
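/*
    Example (illustrative sketch, not part of the RM interface): registering PMA
    eviction callbacks. The callback names and bodies are placeholders; real
    implementations evict the requested pages/range as described in pma.h.

    static NV_STATUS exampleEvictPages(void *callbackData,
                                       NvU64 pageSize,
                                       NvU64 *pPages,
                                       NvU32 count,
                                       NvU64 physBegin,
                                       NvU64 physEnd,
                                       UVM_PMA_GPU_MEMORY_TYPE mem_type)
    {
        // ... evict 'count' pages of size 'pageSize' within [physBegin, physEnd] ...
        return NV_OK;
    }

    static NV_STATUS exampleEvictRange(void *callbackData,
                                       NvU64 physBegin,
                                       NvU64 physEnd,
                                       UVM_PMA_GPU_MEMORY_TYPE mem_type)
    {
        // ... evict everything in [physBegin, physEnd] ...
        return NV_OK;
    }

    static NV_STATUS exampleRegisterEvictionCallbacks(void *pPma, void *myData)
    {
        return nvUvmInterfacePmaRegisterEvictionCallbacks(pPma,
                                                          exampleEvictPages,
                                                          exampleEvictRange,
                                                          myData);
    }
*/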
/******************************************************************************
nvUvmInterfacePmaUnregisterEvictionCallbacks
Simple wrapper for pmaUnregisterEvictionCb(), see its documentation in pma.h.
*/
void nvUvmInterfacePmaUnregisterEvictionCallbacks(void *pPma);
/*******************************************************************************
nvUvmInterfacePmaAllocPages
@brief Synchronous API for allocating pages from the PMA.
PMA will decide which pma regions to allocate from based on the provided
flags. PMA will also initiate UVM evictions to make room for this
allocation unless prohibited by PMA_FLAGS_DONT_EVICT. UVM callers must pass
this flag to avoid deadlock. Only UVM may allocate unpinned memory from
this API.
For broadcast methods, PMA will guarantee the same physical frames are
allocated on multiple GPUs, specified by the PMA objects passed in.
If the allocation is contiguous, only one page in pPages will be filled.
Also, the contiguous flag must be passed later to nvUvmInterfacePmaFreePages.
Arguments:
pPma[IN] - Pointer to PMA object
pageCount [IN] - Number of pages required to be allocated.
pageSize [IN] - 64KB, 128KB or 2MB. No other values are permissible.
pPmaAllocOptions[IN] - Pointer to PMA allocation info structure.
pPages[OUT] - Array of pointers, containing the PA base
address of each page.
Error codes:
NV_ERR_NO_MEMORY:
Internal memory allocation failed.
NV_ERR_GENERIC:
Unexpected error. We try hard to avoid returning this error
code, because it is not very informative.
*/
NV_STATUS nvUvmInterfacePmaAllocPages(void *pPma,
NvLength pageCount,
NvU64 pageSize,
UvmPmaAllocationOptions *pPmaAllocOptions,
NvU64 *pPages);
/*******************************************************************************
nvUvmInterfacePmaPinPages
This function will pin the physical memory allocated using PMA. The pages
passed as input must be unpinned; otherwise this function will return an error
and roll back any changes if any page was not previously marked "unpinned".
Arguments:
pPma[IN] - Pointer to PMA object.
pPages[IN] - Array of pointers, containing the PA base
address of each page to be pinned.
pageCount [IN] - Number of pages required to be pinned.
pageSize [IN] - Page size of each page to be pinned.
flags [IN] - UVM_PMA_CALLED_FROM_PMA_EVICTION if called from
PMA eviction, 0 otherwise.
Error codes:
NV_ERR_INVALID_ARGUMENT - Invalid input arguments.
NV_ERR_GENERIC - Unexpected error. We try hard to avoid
returning this error code as is not very
informative.
NV_ERR_NOT_SUPPORTED - Operation not supported on broken FB
*/
NV_STATUS nvUvmInterfacePmaPinPages(void *pPma,
NvU64 *pPages,
NvLength pageCount,
NvU64 pageSize,
NvU32 flags);
/*******************************************************************************
nvUvmInterfacePmaUnpinPages
This function will unpin the physical memory allocated using PMA. The pages
passed as input must already be pinned; otherwise this function will return an
error and roll back any changes if any page was not previously marked "pinned".
Behaviour is undefined if any blacklisted pages are unpinned.
Arguments:
pPma[IN] - Pointer to PMA object.
pPages[IN] - Array of pointers, containing the PA base
address of each page to be unpinned.
pageCount [IN] - Number of pages required to be unpinned.
pageSize [IN] - Page size of each page to be unpinned.
Error codes:
NV_ERR_INVALID_ARGUMENT - Invalid input arguments.
NV_ERR_GENERIC - Unexpected error. We try hard to avoid
returning this error code as is not very
informative.
NV_ERR_NOT_SUPPORTED - Operation not supported on broken FB
*/
NV_STATUS nvUvmInterfacePmaUnpinPages(void *pPma,
NvU64 *pPages,
NvLength pageCount,
NvU64 pageSize);
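/*
    Example (illustrative sketch, not part of the RM interface): pinning and then
    unpinning pages previously allocated from PMA. The 64KB page size and the
    assumption that the call is not made from PMA eviction (flags == 0) are
    illustrative.

    static NV_STATUS examplePinThenUnpin(void *pPma, NvU64 *pPages, NvLength pageCount)
    {
        const NvU64 pageSize = 64 * 1024; // hypothetical 64KB pages
        NV_STATUS status;

        status = nvUvmInterfacePmaPinPages(pPma, pPages, pageCount, pageSize, 0);
        if (status != NV_OK)
            return status;

        // ... pages are pinned here ...

        return nvUvmInterfacePmaUnpinPages(pPma, pPages, pageCount, pageSize);
    }
*/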
/*******************************************************************************
nvUvmInterfaceMemoryFree
Free up a GPU allocation
*/
void nvUvmInterfaceMemoryFree(uvmGpuAddressSpaceHandle vaSpace,
UvmGpuPointer gpuPointer);
/*******************************************************************************
nvUvmInterfacePmaFreePages
This function will free physical memory allocated using PMA. It marks a list
of pages as free. This operation is also used by RM to mark pages as "scrubbed"
for the initial ECC sweep. This function does not fail.
When the allocation was contiguous, an appropriate flag needs to be passed.
Arguments:
pPma[IN] - Pointer to PMA object
pPages[IN] - Array of pointers, containing the PA base
address of each page.
pageCount [IN] - Number of pages to be freed.
pageSize [IN] - Page size of each page
flags [IN] - Flags with information about allocation type
with the same meaning as flags in options for
nvUvmInterfacePmaAllocPages. When called from PMA
eviction, UVM_PMA_CALLED_FROM_PMA_EVICTION needs
to be added to flags.
Error codes:
NV_ERR_INVALID_ARGUMENT
NV_ERR_NO_MEMORY - Not enough physical memory to service
allocation request with provided constraints
NV_ERR_INSUFFICIENT_RESOURCES - Not enough available resources to satisfy allocation request
NV_ERR_INVALID_OWNER - Target memory not accessible by specified owner
NV_ERR_NOT_SUPPORTED - Operation not supported on broken FB
*/
void nvUvmInterfacePmaFreePages(void *pPma,
NvU64 *pPages,
NvLength pageCount,
NvU64 pageSize,
NvU32 flags);
/*******************************************************************************
nvUvmInterfaceMemoryCpuMap
This function creates a CPU mapping to the provided GPU address.
If the address is not the same as what is returned by the Alloc
function, then the function will map it from the address provided.
This offset will be relative to the GPU offset obtained from the
memory alloc functions.
Error codes:
NV_ERR_GENERIC
NV_ERR_NO_MEMORY
*/
NV_STATUS nvUvmInterfaceMemoryCpuMap(uvmGpuAddressSpaceHandle vaSpace,
UvmGpuPointer gpuPointer,
NvLength length, void **cpuPtr,
NvU64 pageSize);
/*******************************************************************************
nvUvmInterfaceMemoryCpuUnMap
Unmaps the cpuPtr provided from the process virtual address space.
*/
void nvUvmInterfaceMemoryCpuUnMap(uvmGpuAddressSpaceHandle vaSpace,
void *cpuPtr);
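/*
    Example (illustrative sketch, not part of the RM interface): creating and
    tearing down a CPU mapping of a GPU allocation. The pageSize value shown (0)
    and the helper name are assumptions for illustration; adjust as required.

    static NV_STATUS exampleCpuMapUnmap(uvmGpuAddressSpaceHandle vaSpace,
                                        UvmGpuPointer gpuPointer,
                                        NvLength length)
    {
        void *cpuPtr;
        NV_STATUS status;

        status = nvUvmInterfaceMemoryCpuMap(vaSpace, gpuPointer, length, &cpuPtr, 0);
        if (status != NV_OK)
            return status;

        // ... access the allocation through cpuPtr ...

        nvUvmInterfaceMemoryCpuUnMap(vaSpace, cpuPtr);
        return NV_OK;
    }
*/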
/*******************************************************************************
nvUvmInterfaceTsgAllocate
This function allocates a Time-Slice Group (TSG).
allocParams must contain an engineIndex as TSGs need to be bound to an
engine type at allocation time. The possible values are [0,
UVM_COPY_ENGINE_COUNT_MAX) for CE engine type. Notably only the copy engines
that have UvmGpuCopyEngineCaps::supported set to true can be allocated.
Note that TSG is not supported on all GPU architectures for all engine
types, e.g., pre-Volta GPUs only support TSG for the GR/Compute engine type.
On devices that do not support HW TSGs on the requested engine, this API is
still required, i.e., a TSG handle is required in
nvUvmInterfaceChannelAllocate(), due to information stored in it necessary
for channel allocation. However, when HW TSGs aren't supported, a TSG handle
is essentially a "fake" TSG with no HW scheduling impact.
tsg is filled with the address of the corresponding TSG handle.
Arguments:
vaSpace[IN] - VA space linked to a client and a device under which
the TSG is allocated.
allocParams[IN] - structure with allocation settings.
tsg[OUT] - pointer to the new TSG handle.
Error codes:
NV_ERR_GENERIC
NV_ERR_INVALID_ARGUMENT
NV_ERR_NO_MEMORY
NV_ERR_NOT_SUPPORTED
*/
NV_STATUS nvUvmInterfaceTsgAllocate(uvmGpuAddressSpaceHandle vaSpace,
const UvmGpuTsgAllocParams *allocParams,
uvmGpuTsgHandle *tsg);
/*******************************************************************************
nvUvmInterfaceTsgDestroy
This function destroys a given TSG.
Arguments:
tsg[IN] - Tsg handle
*/
void nvUvmInterfaceTsgDestroy(uvmGpuTsgHandle tsg);
/*******************************************************************************
nvUvmInterfaceChannelAllocate
This function will allocate a channel bound to a copy engine(CE) or a SEC2
engine.
allocParams contains information relative to GPFIFO and GPPut.
channel is filled with the address of the corresponding channel handle.
channelInfo is filled out with channel get/put. The errorNotifier is filled
out when the channel hits an RC error. On Volta+ devices, it also computes
the work submission token and the work submission offset to be used in the
Host channel submission doorbell.
Arguments:
tsg[IN] - Time-Slice Group that the channel will be a member of.
allocParams[IN] - structure with allocation settings
channel[OUT] - pointer to the new channel handle
channelInfo[OUT] - structure filled with channel information
Error codes:
NV_ERR_GENERIC
NV_ERR_INVALID_ARGUMENT
NV_ERR_NO_MEMORY
NV_ERR_NOT_SUPPORTED
*/
NV_STATUS nvUvmInterfaceChannelAllocate(const uvmGpuTsgHandle tsg,
const UvmGpuChannelAllocParams *allocParams,
uvmGpuChannelHandle *channel,
UvmGpuChannelInfo *channelInfo);
/*******************************************************************************
nvUvmInterfaceChannelDestroy
This function destroys a given channel.
Arguments:
channel[IN] - channel handle
*/
void nvUvmInterfaceChannelDestroy(uvmGpuChannelHandle channel);
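/*
    Example (illustrative sketch, not part of the RM interface): allocating a TSG
    bound to copy engine index 0 and a channel within it. Only engineIndex
    (described above) is set explicitly; any further fields required by
    UvmGpuTsgAllocParams or UvmGpuChannelAllocParams (see nv_uvm_types.h) are
    left zero-initialized here as an assumption.

    static NV_STATUS exampleTsgAndChannel(uvmGpuAddressSpaceHandle vaSpace)
    {
        UvmGpuTsgAllocParams tsgParams;
        UvmGpuChannelAllocParams channelParams;
        uvmGpuTsgHandle tsg;
        uvmGpuChannelHandle channel;
        UvmGpuChannelInfo channelInfo;
        NV_STATUS status;

        memset(&tsgParams, 0, sizeof(tsgParams));
        memset(&channelParams, 0, sizeof(channelParams));
        tsgParams.engineIndex = 0; // first copy engine; it must be supported

        status = nvUvmInterfaceTsgAllocate(vaSpace, &tsgParams, &tsg);
        if (status != NV_OK)
            return status;

        status = nvUvmInterfaceChannelAllocate(tsg, &channelParams, &channel, &channelInfo);
        if (status != NV_OK)
        {
            nvUvmInterfaceTsgDestroy(tsg);
            return status;
        }

        // ... push work, using channelInfo for GPFIFO get/put ...

        nvUvmInterfaceChannelDestroy(channel);
        nvUvmInterfaceTsgDestroy(tsg);
        return NV_OK;
    }
*/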
/*******************************************************************************
nvUvmInterfaceQueryCaps
Return capabilities for the provided GPU.
If the GPU does not exist, an error will be returned.
If the client is only interested in the capabilities of the Copy Engines of
the given GPU, use nvUvmInterfaceQueryCopyEnginesCaps instead.
Error codes:
NV_ERR_GENERIC
NV_ERR_NO_MEMORY
*/
NV_STATUS nvUvmInterfaceQueryCaps(uvmGpuDeviceHandle device,
UvmGpuCaps *caps);
/*******************************************************************************
nvUvmInterfaceQueryCopyEnginesCaps
Return the capabilities of all the Copy Engines for the provided GPU.
If the GPU does not exist, an error will be returned.
Error codes:
NV_ERR_GENERIC
NV_ERR_NO_MEMORY
*/
NV_STATUS nvUvmInterfaceQueryCopyEnginesCaps(uvmGpuDeviceHandle device,
UvmGpuCopyEnginesCaps *caps);
/*******************************************************************************
nvUvmInterfaceGetGpuInfo
Return various GPU info; refer to the UvmGpuInfo struct for details.
If no GPU matching the UUID is found, an error will be returned.
On Ampere+ GPUs, pGpuClientInfo contains SMC information provided by the
client regarding the partition targeted in this operation.
Error codes:
NV_ERR_GENERIC
NV_ERR_INSUFFICIENT_RESOURCES
*/
NV_STATUS nvUvmInterfaceGetGpuInfo(const NvProcessorUuid *gpuUuid,
const UvmGpuClientInfo *pGpuClientInfo,
UvmGpuInfo *pGpuInfo);
/*******************************************************************************
nvUvmInterfaceServiceDeviceInterruptsRM
Tells RM to service all pending interrupts. This is helpful in ECC error
conditions when the ECC error interrupt is set and the error can be
determined only after the ECC notifier has been set or reset.
Error codes:
NV_ERR_GENERIC
UVM_INVALID_ARGUMENTS
*/
NV_STATUS nvUvmInterfaceServiceDeviceInterruptsRM(uvmGpuDeviceHandle device);
/*******************************************************************************
nvUvmInterfaceSetPageDirectory
Sets pageDirectory in the provided location. Also moves the existing PDE to
the provided pageDirectory.
RM will propagate the update to all channels using the provided VA space.
All channels must be idle when this call is made.
Arguments:
vaSpace[IN] - VASpace Object
physAddress[IN] - Physical address of new page directory
numEntries[IN] - Number of entries including previous PDE which will be copied
bVidMemAperture[IN] - If set pageDirectory will reside in VidMem aperture else sysmem
pasid[IN] - PASID (Process Address Space IDentifier) of the process
corresponding to the VA space. Ignored unless the VA space
object has ATS enabled.
Error codes:
NV_ERR_GENERIC
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceSetPageDirectory(uvmGpuAddressSpaceHandle vaSpace,
NvU64 physAddress, unsigned numEntries,
NvBool bVidMemAperture, NvU32 pasid);
/*******************************************************************************
nvUvmInterfaceUnsetPageDirectory
Unsets/Restores pageDirectory to RM's defined location.
Arguments:
vaSpace[IN] - VASpace Object
Error codes:
NV_ERR_GENERIC
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceUnsetPageDirectory(uvmGpuAddressSpaceHandle vaSpace);
/*******************************************************************************
nvUvmInterfaceDupAllocation
Duplicate the given allocation in a different VA space.
The physical handle backing the source allocation is duplicated in
the GPU device associated with the destination VA space, and a new mapping
is created in that VA space.
The input allocation can be located in sysmem (i.e. allocated using
nvUvmInterfaceMemoryAllocSys) or vidmem (i.e. allocated using
nvUvmInterfaceMemoryAllocFB). If located in vidmem, duplication across
GPUs is not supported.
For duplication of physical memory use nvUvmInterfaceDupMemory.
Arguments:
srcVaSpace[IN] - Source VA space.
srcAddress[IN] - GPU VA in the source VA space. The provided address
should match one previously returned by
nvUvmInterfaceMemoryAllocFB or
nvUvmInterfaceMemoryAllocSys.
dstVaSpace[IN] - Destination VA space where the new mapping will be
created.
dstVaAlignment[IN] - Alignment of the GPU VA in the destination VA
space. 0 means 4KB alignment.
dstAddress[OUT] - Pointer to the GPU VA in the destination VA space.
Error codes:
NV_ERR_INVALID_ARGUMENT - If any of the inputs is invalid, or the source
and destination VA spaces are identical.
NV_ERR_OBJECT_NOT_FOUND - If the input allocation is not found under
the provided VA space.
NV_ERR_NO_MEMORY - If there is no memory to back the duplicate,
or the associated metadata.
NV_ERR_NOT_SUPPORTED - If trying to duplicate vidmem across GPUs.
*/
NV_STATUS nvUvmInterfaceDupAllocation(uvmGpuAddressSpaceHandle srcVaSpace,
NvU64 srcAddress,
uvmGpuAddressSpaceHandle dstVaSpace,
NvU64 dstVaAlignment,
NvU64 *dstAddress);
/*******************************************************************************
nvUvmInterfaceDupMemory
Duplicates a physical memory allocation. If requested, provides information
about the allocation.
Arguments:
device[IN] - Device linked to a client under which
the phys memory needs to be duped.
hClient[IN] - Client owning the memory.
hPhysMemory[IN] - Phys memory which is to be duped.
hDupedHandle[OUT] - Handle of the duped memory object.
pGpuMemoryInfo[OUT] - see nv_uvm_types.h for more information.
This parameter can be NULL. (optional)
Error codes:
NV_ERR_INVALID_ARGUMENT - If the parameter/s is invalid.
NV_ERR_NOT_SUPPORTED - If the allocation is not a physical allocation.
NV_ERR_OBJECT_NOT_FOUND - If the allocation is not found under the provided client.
*/
NV_STATUS nvUvmInterfaceDupMemory(uvmGpuDeviceHandle device,
NvHandle hClient,
NvHandle hPhysMemory,
NvHandle *hDupMemory,
UvmGpuMemoryInfo *pGpuMemoryInfo);
/*******************************************************************************
nvUvmInterfaceFreeDupedHandle
Free the allocation represented by the physical handle used to create the
duped allocation.
Arguments:
device[IN] - Device handle used to dup the memory.
hPhysHandle[IN] - Handle representing the phys allocation.
Error codes:
NV_ERROR
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceFreeDupedHandle(uvmGpuDeviceHandle device,
NvHandle hPhysHandle);
/*******************************************************************************
nvUvmInterfaceGetFbInfo
Gets FB information from RM.
Arguments:
device[IN] - GPU device handle
fbInfo [OUT] - Pointer to FbInfo structure which contains
reservedHeapSize & heapSize
Error codes:
NV_ERROR
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceGetFbInfo(uvmGpuDeviceHandle device,
UvmGpuFbInfo * fbInfo);
/*******************************************************************************
nvUvmInterfaceGetEccInfo
Gets ECC information from RM.
Arguments:
device[IN] - GPU device handle
eccInfo [OUT] - Pointer to EccInfo structure
Error codes:
NV_ERROR
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceGetEccInfo(uvmGpuDeviceHandle device,
UvmGpuEccInfo * eccInfo);
/*******************************************************************************
nvUvmInterfaceOwnPageFaultIntr
This function transfers ownership of the replayable page fault interrupt,
between RM and UVM, for a particular GPU.
bOwnInterrupts == NV_TRUE: UVM is taking ownership from the RM. This causes
the following: RM will not service, enable or disable this interrupt and it
is up to the UVM driver to handle this interrupt. In this case, replayable
page fault interrupts are disabled by this function, before it returns.
bOwnInterrupts == NV_FALSE: UVM is returning ownership to the RM: in this
case, replayable page fault interrupts MUST BE DISABLED BEFORE CALLING this
function.
The cases above both result in transferring ownership of a GPU that has its
replayable page fault interrupts disabled. Doing otherwise would make it
very difficult to control which driver handles any interrupts that build up
during the hand-off.
The calling pattern should look like this:
UVM setting up a new GPU for operation:
UVM GPU LOCK
nvUvmInterfaceOwnPageFaultIntr(..., NV_TRUE)
UVM GPU UNLOCK
Enable replayable page faults for that GPU
UVM tearing down a GPU:
Disable replayable page faults for that GPU
UVM GPU LOCK
nvUvmInterfaceOwnPageFaultIntr(..., NV_FALSE)
UVM GPU UNLOCK
Arguments:
gpuUuid[IN] - UUID of the GPU to operate on
bOwnInterrupts - Set to NV_TRUE for UVM to take ownership of the
replayable page fault interrupts. Set to NV_FALSE
to return ownership of the page fault interrupts
to RM.
Error codes:
NV_ERR_GENERIC
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceOwnPageFaultIntr(uvmGpuDeviceHandle device, NvBool bOwnInterrupts);
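/*
    Example (illustrative sketch, not part of the RM interface): taking ownership
    of the replayable page fault interrupt, following the calling pattern above.
    The lock comments and the fault-enable step stand in for UVM-internal
    operations that are not declared in this header.

    static NV_STATUS exampleTakePageFaultOwnership(uvmGpuDeviceHandle device)
    {
        NV_STATUS status;

        // Acquire the UVM GPU lock here (placeholder for a UVM-internal lock).
        status = nvUvmInterfaceOwnPageFaultIntr(device, NV_TRUE);
        // Release the UVM GPU lock here (placeholder).
        if (status != NV_OK)
            return status;

        // Now enable replayable page faults for this GPU (UVM-internal step).
        return NV_OK;
    }
*/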
/*******************************************************************************
nvUvmInterfaceInitFaultInfo
This function obtains fault buffer address, size and a few register mappings
for replayable faults, and creates a shadow buffer to store non-replayable
faults if the GPU supports it.
Arguments:
device[IN] - Device handle associated with the gpu
pFaultInfo[OUT] - information provided by RM for fault handling
Error codes:
NV_ERR_GENERIC
NV_ERR_NO_MEMORY
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceInitFaultInfo(uvmGpuDeviceHandle device,
UvmGpuFaultInfo *pFaultInfo);
/*******************************************************************************
nvUvmInterfaceDestroyFaultInfo
This function destroys and unmaps the fault buffer and clears faultInfo
for replayable faults, and frees the shadow buffer for non-replayable faults.
Arguments:
device[IN] - Device handle associated with the gpu
pFaultInfo[OUT] - information provided by RM for fault handling
Error codes:
NV_ERR_GENERIC
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceDestroyFaultInfo(uvmGpuDeviceHandle device,
UvmGpuFaultInfo *pFaultInfo);
/*******************************************************************************
nvUvmInterfaceHasPendingNonReplayableFaults
This function tells whether there are pending non-replayable faults in the
client shadow fault buffer ready to be consumed.
NOTES:
- This function uses a pre-allocated stack per GPU (stored in the
UvmGpuFaultInfo object) for calls related to non-replayable faults from the
top half.
- Concurrent calls to this function using the same pFaultInfo are not
thread-safe due to pre-allocated stack. Therefore, locking is the caller's
responsibility.
- This function DOES NOT acquire the RM API or GPU locks. That is because
it is called during fault servicing, which could produce deadlocks.
Arguments:
pFaultInfo[IN] - information provided by RM for fault handling.
Contains a pointer to the shadow fault buffer
hasPendingFaults[OUT] - return value that tells if there are
non-replayable faults ready to be consumed by
the client
Error codes:
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceHasPendingNonReplayableFaults(UvmGpuFaultInfo *pFaultInfo,
NvBool *hasPendingFaults);
/*******************************************************************************
nvUvmInterfaceGetNonReplayableFaults
This function consumes all the non-replayable fault packets in the client
shadow fault buffer and copies them to the given buffer. It also returns the
number of faults that have been copied.
NOTES:
- This function uses a pre-allocated stack per GPU (stored in the
UvmGpuFaultInfo object) for calls from the bottom half that handles
non-replayable faults.
- See nvUvmInterfaceHasPendingNonReplayableFaults for the implications of
using a shared stack.
- This function DOES NOT acquire the RM API or GPU locks. That is because
it is called during fault servicing, which could produce deadlocks.
Arguments:
pFaultInfo[IN] - information provided by RM for fault handling.
Contains a pointer to the shadow fault buffer
pFaultBuffer[OUT] - buffer provided by the client where fault buffers
are copied when they are popped out of the shadow
fault buffer (which is a circular queue).
numFaults[OUT] - return value that tells the number of faults copied
to the client's buffer
Error codes:
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceGetNonReplayableFaults(UvmGpuFaultInfo *pFaultInfo,
void *pFaultBuffer,
NvU32 *numFaults);
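/*
    Example (illustrative sketch, not part of the RM interface): draining the
    client shadow buffer of non-replayable faults. The helper name and the
    caller-provided pFaultBuffer are assumptions; the caller remains responsible
    for the locking described in the notes above.

    static NV_STATUS exampleDrainNonReplayableFaults(UvmGpuFaultInfo *pFaultInfo,
                                                     void *pFaultBuffer)
    {
        NvBool hasPendingFaults;
        NvU32 numFaults;
        NV_STATUS status;

        status = nvUvmInterfaceHasPendingNonReplayableFaults(pFaultInfo,
                                                             &hasPendingFaults);
        if (status != NV_OK)
            return status;

        while (hasPendingFaults)
        {
            // Copy all pending packets into the client-provided buffer.
            status = nvUvmInterfaceGetNonReplayableFaults(pFaultInfo,
                                                          pFaultBuffer,
                                                          &numFaults);
            if (status != NV_OK)
                return status;

            // ... service the numFaults packets in pFaultBuffer ...

            status = nvUvmInterfaceHasPendingNonReplayableFaults(pFaultInfo,
                                                                 &hasPendingFaults);
            if (status != NV_OK)
                return status;
        }
        return NV_OK;
    }
*/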
/*******************************************************************************
nvUvmInterfaceFlushReplayableFaultBuffer
This function sends an RPC to GSP in order to flush the HW replayable fault buffer.
NOTES:
- This function DOES NOT acquire the RM API or GPU locks. That is because
it is called during fault servicing, which could produce deadlocks.
Arguments:
device[IN] - Device handle associated with the gpu
Error codes:
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceFlushReplayableFaultBuffer(uvmGpuDeviceHandle device);
/*******************************************************************************
nvUvmInterfaceInitAccessCntrInfo
This function obtains access counter buffer address, size and a few register mappings
Arguments:
device[IN] - Device handle associated with the gpu
pAccessCntrInfo[OUT] - Information provided by RM for access counter handling
accessCntrIndex[IN] - Access counter index
Error codes:
NV_ERR_GENERIC
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceInitAccessCntrInfo(uvmGpuDeviceHandle device,
UvmGpuAccessCntrInfo *pAccessCntrInfo,
NvU32 accessCntrIndex);
/*******************************************************************************
nvUvmInterfaceDestroyAccessCntrInfo
This function destroys and unmaps the access counter buffer and clears accessCntrInfo
Arguments:
device[IN] - Device handle associated with the gpu
pAccessCntrInfo[IN] - Information provided by RM for access counter handling
Error codes:
NV_ERR_GENERIC
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceDestroyAccessCntrInfo(uvmGpuDeviceHandle device,
UvmGpuAccessCntrInfo *pAccessCntrInfo);
/*******************************************************************************
nvUvmInterfaceEnableAccessCntr
This function enables access counters using the given configuration.
UVM also takes ownership from RM.
This causes the following: RM will not service, enable or disable this
interrupt and it is up to the UVM driver to handle this interrupt. In
this case, access counter notification interrupts are enabled by this
function before it returns.
Arguments:
device[IN] - Device handle associated with the gpu
pAccessCntrInfo[IN] - Pointer to structure filled out by nvUvmInterfaceInitAccessCntrInfo
pAccessCntrConfig[IN] - Configuration for access counters
Error codes:
NV_ERR_GENERIC
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceEnableAccessCntr(uvmGpuDeviceHandle device,
UvmGpuAccessCntrInfo *pAccessCntrInfo,
UvmGpuAccessCntrConfig *pAccessCntrConfig);
/*******************************************************************************
nvUvmInterfaceDisableAccessCntr
This function disables access counters.
UVM also returns ownership to RM: RM can service, enable or
disable this interrupt. In this case, access counter notification interrupts
are disabled by this function before it returns.
Arguments:
device[IN] - Device handle associated with the gpu
pAccessCntrInfo[IN] - Pointer to structure filled out by nvUvmInterfaceInitAccessCntrInfo
Error codes:
NV_ERR_GENERIC
NV_ERR_INVALID_ARGUMENT
*/
NV_STATUS nvUvmInterfaceDisableAccessCntr(uvmGpuDeviceHandle device,
UvmGpuAccessCntrInfo *pAccessCntrInfo);
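/*
    Example (illustrative sketch, not part of the RM interface): access counter
    setup and teardown ordering. The configuration struct is taken as an
    already-filled parameter (its fields are defined in nv_uvm_types.h), and the
    access counter index 0 is an assumption for illustration.

    static NV_STATUS exampleAccessCounters(uvmGpuDeviceHandle device,
                                           UvmGpuAccessCntrInfo *pInfo,
                                           UvmGpuAccessCntrConfig *pConfig)
    {
        NV_STATUS status;

        status = nvUvmInterfaceInitAccessCntrInfo(device, pInfo, 0);
        if (status != NV_OK)
            return status;

        status = nvUvmInterfaceEnableAccessCntr(device, pInfo, pConfig);
        if (status != NV_OK)
        {
            nvUvmInterfaceDestroyAccessCntrInfo(device, pInfo);
            return status;
        }

        // ... consume access counter notifications ...

        nvUvmInterfaceDisableAccessCntr(device, pInfo);
        return nvUvmInterfaceDestroyAccessCntrInfo(device, pInfo);
    }
*/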
//
// Called by the UVM driver to register operations with RM. Only one set of
// callbacks can be registered by any driver at a time. If another set of
// callbacks was already registered, NV_ERR_IN_USE is returned.
//
NV_STATUS nvUvmInterfaceRegisterUvmCallbacks(struct UvmOpsUvmEvents *importedUvmOps);
//
// Counterpart to nvUvmInterfaceRegisterUvmCallbacks. This must only be called
// if nvUvmInterfaceRegisterUvmCallbacks returned NV_OK.
//
// Upon return, the caller is guaranteed that any outstanding callbacks are done
// and no new ones will be invoked.
//
void nvUvmInterfaceDeRegisterUvmOps(void);
/*******************************************************************************
nvUvmInterfaceP2pObjectCreate
This API creates an NV50_P2P object for the GPUs with the given device
handles, and returns the handle to the object.
Arguments:
device1[IN] - first GPU device handle
device2[IN] - second GPU device handle
hP2pObject[OUT] - handle to the created P2P object.
Error codes:
NV_ERR_INVALID_ARGUMENT
NV_ERR_OBJECT_NOT_FOUND : If the device objects associated with the handles aren't found.
*/
NV_STATUS nvUvmInterfaceP2pObjectCreate(uvmGpuDeviceHandle device1,
uvmGpuDeviceHandle device2,
NvHandle *hP2pObject);
/*******************************************************************************
nvUvmInterfaceP2pObjectDestroy
This API destroys the NV50_P2P object associated with the passed handle.
Arguments:
session[IN] - Session handle.
hP2pObject[IN] - handle to a P2P object.
Error codes: NONE
*/
void nvUvmInterfaceP2pObjectDestroy(uvmGpuSessionHandle session,
NvHandle hP2pObject);
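/*
    Example (illustrative sketch, not part of the RM interface): querying P2P
    capabilities and creating/destroying a P2P object for a pair of GPUs. The
    helper name is an assumption for illustration.

    static NV_STATUS exampleP2p(uvmGpuSessionHandle session,
                                uvmGpuDeviceHandle device1,
                                uvmGpuDeviceHandle device2)
    {
        UvmGpuP2PCapsParams p2pCaps;
        NvHandle hP2pObject;
        NV_STATUS status;

        status = nvUvmInterfaceGetP2PCaps(device1, device2, &p2pCaps);
        if (status != NV_OK)
            return status;

        // Create the NV50_P2P object backing peer access between the GPUs.
        status = nvUvmInterfaceP2pObjectCreate(device1, device2, &hP2pObject);
        if (status != NV_OK)
            return status;

        // ... create peer mappings ...

        nvUvmInterfaceP2pObjectDestroy(session, hP2pObject);
        return NV_OK;
    }
*/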
/*******************************************************************************
nvUvmInterfaceGetExternalAllocPtes
The interface builds the RM PTEs using the provided input parameters.
Arguments:
vaSpace[IN] - vaSpace handle.
hMemory[IN] - Memory handle.
offset [IN] - Offset from the beginning of the allocation
where PTE mappings should begin.
Should be aligned with mappingPagesize
in gpuExternalMappingInfo associated
with the allocation.
size [IN] - Length of the allocation for which PTEs
should be built.
Should be aligned with mappingPagesize
in gpuExternalMappingInfo associated
with the allocation.
size = 0 will be interpreted as the total size
of the allocation.
gpuExternalMappingInfo[IN/OUT] - See nv_uvm_types.h for more information.
Error codes:
NV_ERR_INVALID_ARGUMENT - Invalid parameter/s is passed.
NV_ERR_INVALID_OBJECT_HANDLE - Invalid memory handle is passed.
NV_ERR_NOT_SUPPORTED - Functionality is not supported (see comments in nv_gpu_ops.c)
NV_ERR_INVALID_BASE - offset is beyond the allocation size
NV_ERR_INVALID_LIMIT - (offset + size) is beyond the allocation size.
NV_ERR_BUFFER_TOO_SMALL - gpuExternalMappingInfo.pteBufferSize is insufficient to
store single PTE.
NV_ERR_NOT_READY - Returned when querying the PTEs requires a deferred setup
which has not yet completed. It is expected that the caller
will reattempt the call until a different code is returned.
*/
NV_STATUS nvUvmInterfaceGetExternalAllocPtes(uvmGpuAddressSpaceHandle vaSpace,
NvHandle hMemory,
NvU64 offset,
NvU64 size,
UvmGpuExternalMappingInfo *gpuExternalMappingInfo);
/*******************************************************************************
nvUvmInterfaceRetainChannel
Validates and returns information about the user's channel and its resources
(local CTX buffers + global CTX buffers). The state is refcounted and must be
released by calling nvUvmInterfaceReleaseChannel.
Arguments:
vaSpace[IN] - vaSpace handle.
hClient[IN] - Client handle
hChannel[IN] - Channel handle
retainedChannel[OUT] - Opaque pointer to use to refer to this
channel in other nvUvmInterface APIs.
channelInstanceInfo[OUT] - Channel instance information to be filled out.
See nv_uvm_types.h for details.
Error codes:
NV_ERR_INVALID_ARGUMENT : If the parameter/s are invalid.
NV_ERR_OBJECT_NOT_FOUND : If the object associated with the handle isn't found.
NV_ERR_INVALID_CHANNEL : If the channel verification fails.
NV_ERR_INSUFFICIENT_RESOURCES : If no memory available to store the resource information.
*/
NV_STATUS nvUvmInterfaceRetainChannel(uvmGpuAddressSpaceHandle vaSpace,
NvHandle hClient,
NvHandle hChannel,
void **retainedChannel,
UvmGpuChannelInstanceInfo *channelInstanceInfo);
/*******************************************************************************
nvUvmInterfaceBindChannelResources
Associates the mapping address of the channel resources (VAs) provided by the
caller with the channel.
Arguments:
retainedChannel[IN] - Channel pointer returned by nvUvmInterfaceRetainChannel
channelResourceBindParams[IN] - Buffer of initialized UvmGpuChannelInstanceInfo::resourceCount
entries. See nv_uvm_types.h for details.
Error codes:
NV_ERR_INVALID_ARGUMENT : If the parameter/s are invalid.
NV_ERR_OBJECT_NOT_FOUND : If the object associated with the handle isn't found.
NV_ERR_INSUFFICIENT_RESOURCES : If no memory available to store the resource information.
*/
NV_STATUS nvUvmInterfaceBindChannelResources(void *retainedChannel,
UvmGpuChannelResourceBindParams *channelResourceBindParams);
/*******************************************************************************
nvUvmInterfaceReleaseChannel
Releases state retained by nvUvmInterfaceRetainChannel.
*/
void nvUvmInterfaceReleaseChannel(void *retainedChannel);
/*******************************************************************************
nvUvmInterfaceStopChannel
Idles the channel and takes it off the runlist.
Arguments:
retainedChannel[IN] - Channel pointer returned by nvUvmInterfaceRetainChannel
bImmediate[IN] - If true, kill the channel without attempting to wait for it to go idle.
*/
void nvUvmInterfaceStopChannel(void *retainedChannel, NvBool bImmediate);
/*******************************************************************************
nvUvmInterfaceGetChannelResourcePtes
The interface builds the RM PTEs using the provided input parameters.
Arguments:
vaSpace[IN] - vaSpace handle.
resourceDescriptor[IN] - The channel resource descriptor returned by
nvUvmInterfaceRetainChannelResources.
offset[IN] - Offset from the beginning of the allocation
where PTE mappings should begin.
Should be aligned with pagesize associated
with the allocation.
size[IN] - Length of the allocation for which PTEs
should be built.
Should be aligned with pagesize associated
with the allocation.
size = 0 will be interpreted as the total size
of the allocation.
gpuExternalMappingInfo[IN/OUT] - See nv_uvm_types.h for more information.
Error codes:
NV_ERR_INVALID_ARGUMENT - Invalid parameter/s is passed.
NV_ERR_INVALID_OBJECT_HANDLE - Invalid memory handle is passed.
NV_ERR_NOT_SUPPORTED - Functionality is not supported.
NV_ERR_INVALID_BASE - offset is beyond the allocation size
NV_ERR_INVALID_LIMIT - (offset + size) is beyond the allocation size.
NV_ERR_BUFFER_TOO_SMALL - gpuExternalMappingInfo.pteBufferSize is insufficient to
store single PTE.
*/
NV_STATUS nvUvmInterfaceGetChannelResourcePtes(uvmGpuAddressSpaceHandle vaSpace,
NvP64 resourceDescriptor,
NvU64 offset,
NvU64 size,
UvmGpuExternalMappingInfo *externalMappingInfo);
/*******************************************************************************
nvUvmInterfaceReportNonReplayableFault
The interface communicates a nonreplayable fault packet from UVM to RM, which
will log the fault, notify the clients and then trigger RC on the channel.
Arguments:
device[IN] - The device where the fault happened.
pFaultPacket[IN] - The opaque pointer from UVM that will be later
converted to a MMU_FAULT_PACKET type.
Error codes:
NV_ERR_INVALID_ARGUMENT - Invalid parameter/s is passed.
NV_ERR_NOT_SUPPORTED - Functionality is not supported.
*/
NV_STATUS nvUvmInterfaceReportNonReplayableFault(uvmGpuDeviceHandle device,
const void *pFaultPacket);
/*******************************************************************************
nvUvmInterfacePagingChannelAllocate
In SR-IOV heavy, this function requests the allocation of a paging channel
(i.e. a privileged CE channel) bound to a specified copy engine. Unlike
channels allocated via nvUvmInterfaceChannelAllocate, the caller cannot push
methods to a paging channel directly, but instead relies on the
nvUvmInterfacePagingChannelPushStream API to do so.
SR-IOV heavy only. The implementation of this interface can acquire
RM or GPU locks.
Arguments:
device[IN] - device under which the paging channel will be allocated
allocParams[IN] - structure with allocation settings
channel[OUT] - pointer to the allocated paging channel handle
channelInfo[OUT] - structure filled with channel information
Error codes:
NV_ERR_INVALID_ARGUMENT - Invalid parameter/s is passed.
NV_ERR_NO_MEMORY - Not enough memory to allocate
paging channel/shadow notifier.
NV_ERR_NOT_SUPPORTED - SR-IOV heavy mode is disabled.
*/
NV_STATUS nvUvmInterfacePagingChannelAllocate(uvmGpuDeviceHandle device,
const UvmGpuPagingChannelAllocParams *allocParams,
UvmGpuPagingChannelHandle *channel,
UvmGpuPagingChannelInfo *channelInfo);
/*******************************************************************************
nvUvmInterfacePagingChannelDestroy
This function destroys a given paging channel.
SR-IOV heavy only. The implementation of this interface can acquire
RM or GPU locks.
Arguments:
channel[IN] - paging channel handle. If the passed handle is
the NULL pointer, the function returns immediately.
*/
void nvUvmInterfacePagingChannelDestroy(UvmGpuPagingChannelHandle channel);
/*******************************************************************************
nvUvmInterfacePagingChannelsMap
Map a guest allocation in the address space associated with all the paging
channels allocated under the given device.
SR-IOV heavy only. The implementation of this interface can acquire
RM or GPU locks.
Arguments:
srcVaSpace[IN] - VA space handle used to allocate the input pointer
srcAddress.
srcAddress[IN] - virtual address returned by nvUvmInterfaceMemoryAllocFB
or nvUvmInterfaceMemoryAllocSys. The entire allocation
backing this guest VA is mapped.
device[IN] - device under which paging channels were allocated
dstAddress[OUT] - a virtual address that is valid (i.e. is mapped) in
all the paging channels allocated under the given device.
Error codes:
NV_ERR_INVALID_ARGUMENT - Invalid parameter/s is passed.
NV_ERR_NOT_SUPPORTED - SR-IOV heavy mode is disabled.
*/
NV_STATUS nvUvmInterfacePagingChannelsMap(uvmGpuAddressSpaceHandle srcVaSpace,
UvmGpuPointer srcAddress,
uvmGpuDeviceHandle device,
NvU64 *dstAddress);
/*******************************************************************************
nvUvmInterfacePagingChannelsUnmap
Unmap a VA returned by nvUvmInterfacePagingChannelsMap.
SR-IOV heavy only. The implementation of this interface can acquire
RM or GPU locks.
Arguments:
srcVaSpace[IN] - VA space handle that was passed to the previous mapping.
srcAddress[IN] - virtual address that was passed to the previous mapping.
device[IN] - device under which paging channels were allocated.
*/
void nvUvmInterfacePagingChannelsUnmap(uvmGpuAddressSpaceHandle srcVaSpace,
UvmGpuPointer srcAddress,
uvmGpuDeviceHandle device);
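/*
    Example (illustrative sketch, not part of the RM interface, SR-IOV heavy
    only): mapping a previously allocated guest buffer into all paging channels
    of a device, then unmapping it. The helper name is an assumption.

    static NV_STATUS examplePagingChannelsMapping(uvmGpuAddressSpaceHandle srcVaSpace,
                                                  UvmGpuPointer srcAddress,
                                                  uvmGpuDeviceHandle device)
    {
        NvU64 dstAddress;
        NV_STATUS status;

        status = nvUvmInterfacePagingChannelsMap(srcVaSpace, srcAddress,
                                                 device, &dstAddress);
        if (status != NV_OK)
            return status;

        // ... reference dstAddress in semaphore methods pushed via
        //     nvUvmInterfacePagingChannelPushStream ...

        nvUvmInterfacePagingChannelsUnmap(srcVaSpace, srcAddress, device);
        return NV_OK;
    }
*/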
/*******************************************************************************
nvUvmInterfacePagingChannelPushStream
Used for remote execution of the passed methods; the UVM driver uses this
interface to ask the vGPU plugin to execute certain HW methods on its
behalf. The callee should push the methods in the specified order, i.e., it
is not allowed to do any reordering.
The API is asynchronous. The UVM driver can wait on the remote execution by
inserting a semaphore release method at the end of the method stream, and
then loop until the semaphore value reaches the completion value indicated
in the release method.
The valid HW methods that can be passed by the UVM driver follow; the source
functions listed contain the exact formatting (encoding) of the HW method
used by the UVM driver for Ampere.
- TLB invalidation targeting a VA range. See
uvm_hal_volta_host_tlb_invalidate_va.
- TLB invalidation targeting certain levels in the page tree (including
the possibility of invalidating everything).
See uvm_hal_pascal_host_tlb_invalidate_all.
- Replayable fault replay. See uvm_hal_volta_replay_faults.
- Replayable fault cancellation targeting a guest virtual address. See
uvm_hal_volta_cancel_faults_va
- Membar, scoped to device or to the entire system. See
uvm_hal_pascal_host_membar_gpu and uvm_hal_pascal_host_membar_sys
- Host semaphore acquire, see uvm_hal_turing_host_semaphore_acquire. The
virtual address specified in the semaphore operation must lie within a
buffer previously mapped by nvUvmInterfacePagingChannelsMap.
- CE semaphore release, see uvm_hal_pascal_ce_semaphore_release. The
virtual address specified in the semaphore operation must lie within a
buffer previously mapped by nvUvmInterfacePagingChannelsMap.
- 64 bits-wide memset, see uvm_hal_kepler_ce_memset_8. The destination
address is a physical address in vidmem.
- No-op, see uvm_hal_kepler_host_noop. Used to store the source buffer
of a memcopy method within the input stream itself.
- Memcopy, see uvm_hal_kepler_ce_memcopy. The destination address is a
physical address in vidmem. The source address is an offset within
methodStream, in bytes, indicating the location of the (inlined) source
buffer. The copy size does not exceed 4KB.
- CE semaphore release with timestamp, see
uvm_hal_kepler_ce_semaphore_timestamp. The virtual address specified in
the semaphore operation must lie within a buffer previously mapped by
nvUvmInterfacePagingChannelsMap.
- CE semaphore reduction, see uvm_hal_kepler_ce_semaphore_reduction_inc.
The virtual address specified in the semaphore operation must lie within
a buffer previously mapped by nvUvmInterfacePagingChannelsMap.
Only invoked in SR-IOV heavy mode.
NOTES:
- This function uses a pre-allocated stack per paging channel
(stored in the UvmGpuPagingChannel object)
- This function DOES NOT acquire the RM API or GPU locks. That is because
it is called during fault servicing, which could produce deadlocks.
- Concurrent calls to this function using channels under the same device are
not allowed due to:
a. the pre-allocated stack
b. the fact that the internal RPC infrastructure doesn't acquire the GPU lock.
Therefore, locking is the caller's responsibility.
Arguments:
channel[IN] - paging channel handle obtained via
nvUvmInterfacePagingChannelAllocate
methodStream[IN] - HW methods to be pushed to the paging channel.
methodStreamSize[IN] - Size of methodStream, in bytes. The maximum push
size is 128KB.
Error codes:
NV_ERR_INVALID_ARGUMENT - An invalid parameter was passed.
NV_ERR_NOT_SUPPORTED - SR-IOV heavy mode is disabled.
*/
NV_STATUS nvUvmInterfacePagingChannelPushStream(UvmGpuPagingChannelHandle channel,
char *methodStream,
NvU32 methodStreamSize);
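/*
    Example (illustrative sketch, not part of this interface): pushing a
    method stream and waiting for its completion, as described above. The
    method-encoding helper exampleEncodeCeSemaphoreRelease() and the
    CPU-visible semaphore pointer are hypothetical; real method encodings are
    HW-specific (see the HAL functions referenced above).

    static NV_STATUS examplePushAndWait(UvmGpuPagingChannelHandle channel,
                                        char *methodStream,
                                        NvU32 methodStreamSize,
                                        NvU64 semaphorePagingChannelVa,
                                        volatile NvU32 *semaphoreCpuPtr,
                                        NvU32 completionValue)
    {
        NV_STATUS status;

        // Hypothetical helper: appends a CE semaphore release of
        // completionValue targeting semaphorePagingChannelVa and returns the
        // new stream size. The VA must lie within a buffer previously mapped
        // by nvUvmInterfacePagingChannelsMap(), and the total size must not
        // exceed the 128KB push limit.
        methodStreamSize = exampleEncodeCeSemaphoreRelease(methodStream,
                                                           methodStreamSize,
                                                           semaphorePagingChannelVa,
                                                           completionValue);

        // The caller is responsible for serializing pushes to channels under
        // the same device (see NOTES above).
        status = nvUvmInterfacePagingChannelPushStream(channel,
                                                       methodStream,
                                                       methodStreamSize);
        if (status != NV_OK)
            return status;

        // The push is asynchronous: poll the CPU view of the semaphore until
        // the release method has executed.
        while (*semaphoreCpuPtr != completionValue)
            ;   // a real caller would bound this wait and yield the CPU

        return NV_OK;
    }
*/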
/*******************************************************************************
CSL Interface and Locking
The following functions do not acquire the RM API or GPU locks and must not be called
concurrently with the same UvmCslContext parameter in different threads. The caller must
guarantee this exclusion.
* nvUvmInterfaceCslRotateIv
* nvUvmInterfaceCslEncrypt
* nvUvmInterfaceCslDecrypt
* nvUvmInterfaceCslSign
* nvUvmInterfaceCslQueryMessagePool
* nvUvmInterfaceCslIncrementIv
*/
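/*
    Example (illustrative sketch, not part of this interface): the exclusion
    requirement above can be met with a per-context lock owned by the caller.
    The ExampleMutex type and its lock/unlock helpers are hypothetical
    stand-ins for whatever locking primitive the caller already uses.

    typedef struct
    {
        UvmCslContext cslContext;
        ExampleMutex lock;      // serializes the CSL calls listed above
    } ExampleCslContextWrapper;

    static NV_STATUS exampleRotateIvLocked(ExampleCslContextWrapper *wrapper,
                                           UvmCslOperation operation)
    {
        NV_STATUS status;

        exampleMutexLock(&wrapper->lock);
        status = nvUvmInterfaceCslRotateIv(&wrapper->cslContext, operation);
        exampleMutexUnlock(&wrapper->lock);

        return status;
    }
*/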
/*******************************************************************************
nvUvmInterfaceCslInitContext
Allocates and initializes a CSL context for a given secure channel.
The lifetime of the context is the same as the lifetime of the secure channel
it is paired with.
Arguments:
uvmCslContext[IN/OUT] - The CSL context.
channel[IN] - Handle to a secure channel.
Error codes:
NV_ERR_INVALID_STATE - The system is not operating in Confidential Compute mode.
NV_ERR_INVALID_CHANNEL - The associated channel is not a secure channel.
NV_ERR_IN_USE - The context has already been initialized.
*/
NV_STATUS nvUvmInterfaceCslInitContext(UvmCslContext *uvmCslContext,
uvmGpuChannelHandle channel);
/*******************************************************************************
nvUvmInterfaceDeinitCslContext
Securely deinitializes and clears the contents of a context.
If the context is already deinitialized, the function returns immediately.
Arguments:
uvmCslContext[IN] - The CSL context.
*/
void nvUvmInterfaceDeinitCslContext(UvmCslContext *uvmCslContext);
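/*
    Example (illustrative sketch, not part of this interface): the CSL context
    is tied to the lifetime of one secure channel, so a caller typically
    initializes it right after allocating the channel and deinitializes it
    right before tearing the channel down. The surrounding channel management
    is omitted and assumed for illustration.

    static NV_STATUS exampleCslContextLifecycle(uvmGpuChannelHandle secureChannel)
    {
        UvmCslContext cslContext;
        NV_STATUS status;

        status = nvUvmInterfaceCslInitContext(&cslContext, secureChannel);
        if (status != NV_OK)
            return status;

        // ... encrypt, decrypt and sign traffic for the secure channel ...

        // Securely clears the context; returns immediately if it is already
        // deinitialized.
        nvUvmInterfaceDeinitCslContext(&cslContext);
        return NV_OK;
    }
*/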
/*******************************************************************************
nvUvmInterfaceCslRotateIv
Rotates the IV for a given channel and operation.
This function will rotate the IV on both the CPU and the GPU.
Outstanding messages that have been encrypted by the GPU should first be
decrypted before calling this function with operation equal to
UVM_CSL_OPERATION_DECRYPT. Similarly, outstanding messages that have been
encrypted by the CPU should first be decrypted before calling this function
with operation equal to UVM_CSL_OPERATION_ENCRYPT. For a given operation
the channel must be idle before calling this function. This function can be
called regardless of the value of the IV's message counter.
See "CSL Interface and Locking" for locking requirements.
This function does not perform dynamic memory allocation.
Arguments:
uvmCslContext[IN/OUT] - The CSL context.
operation[IN] - Either
- UVM_CSL_OPERATION_ENCRYPT
- UVM_CSL_OPERATION_DECRYPT
Error codes:
NV_ERR_INSUFFICIENT_RESOURCES - The rotate operation would cause a counter
to overflow.
NV_ERR_INVALID_ARGUMENT - Invalid value for operation.
*/
NV_STATUS nvUvmInterfaceCslRotateIv(UvmCslContext *uvmCslContext,
UvmCslOperation operation);
/*******************************************************************************
nvUvmInterfaceCslEncrypt
Encrypts data and produces an authentication tag.
Auth, input, and output buffers must not overlap. If they do then calling
this function produces undefined behavior. Performance is typically
maximized when the input and output buffers are 16-byte aligned. This is
the natural alignment for an AES block.
The encryptIv parameter can be obtained from nvUvmInterfaceCslIncrementIv,
but it is optional: if it is NULL, the next IV in line will be used.
See "CSL Interface and Locking" for locking requirements.
This function does not perform dynamic memory allocation.
Arguments:
uvmCslContext[IN/OUT] - The CSL context.
bufferSize[IN] - Size of the input and output buffers in
units of bytes. Value can range from 1 byte
to (2^32) - 1 bytes.
inputBuffer[IN] - Address of plaintext input buffer.
encryptIv[IN/OUT] - IV to use for encryption. Can be NULL.
outputBuffer[OUT] - Address of ciphertext output buffer.
authTagBuffer[OUT] - Address of authentication tag buffer.
Its size is UVM_CSL_CRYPT_AUTH_TAG_SIZE_BYTES.
Error codes:
NV_ERR_INVALID_ARGUMENT - The size of the data is 0 bytes, or the encryptIv
has already been used.
*/
NV_STATUS nvUvmInterfaceCslEncrypt(UvmCslContext *uvmCslContext,
NvU32 bufferSize,
NvU8 const *inputBuffer,
UvmCslIv *encryptIv,
NvU8 *outputBuffer,
NvU8 *authTagBuffer);
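/*
    Example (illustrative sketch, not part of this interface): CPU-side
    encryption of a message for the GPU. Buffer management is assumed for
    illustration; passing NULL for encryptIv uses the next IV in line.

    static NV_STATUS exampleCslEncrypt(UvmCslContext *cslContext,
                                       NvU8 const *plaintext,
                                       NvU8 *ciphertext,
                                       NvU8 *authTag,  // UVM_CSL_CRYPT_AUTH_TAG_SIZE_BYTES bytes
                                       NvU32 size)
    {
        // The authentication tag travels with the ciphertext so the receiver
        // can verify integrity before using the data.
        return nvUvmInterfaceCslEncrypt(cslContext,
                                        size,
                                        plaintext,
                                        NULL,       // encryptIv: use next IV in line
                                        ciphertext,
                                        authTag);
    }
*/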
/*******************************************************************************
nvUvmInterfaceCslDecrypt
Verifies the authentication tag and decrypts data.
Auth, input, and output buffers must not overlap. If they do then calling
this function produces undefined behavior. Performance is typically
maximized when the input and output buffers are 16-byte aligned. This is
the natural alignment for an AES block.
See "CSL Interface and Locking" for locking requirements.
This function does not perform dynamic memory allocation.
Arguments:
uvmCslContext[IN/OUT] - The CSL context.
bufferSize[IN] - Size of the input and output buffers in units of bytes.
Value can range from 1 byte to (2^32) - 1 bytes.
decryptIv[IN] - IV used to decrypt the ciphertext. Its value can either be given by
nvUvmInterfaceCslIncrementIv, or, if NULL, the CSL context's
internal counter is used.
inputBuffer[IN] - Address of ciphertext input buffer.
outputBuffer[OUT] - Address of plaintext output buffer.
addAuthData[IN] - Address of the plaintext additional authenticated data used to
calculate the authentication tag. Can be NULL.
addAuthDataSize[IN] - Size of the additional authenticated data in units of bytes.
Value can range from 1 byte to (2^32) - 1 bytes.
This parameter is ignored if addAuthData is NULL.
authTagBuffer[IN] - Address of authentication tag buffer.
Its size is UVM_CSL_CRYPT_AUTH_TAG_SIZE_BYTES.
Error codes:
NV_ERR_INSUFFICIENT_RESOURCES - The decryption operation would cause a
counter overflow to occur.
NV_ERR_INVALID_ARGUMENT - The size of the data is 0 bytes.
NV_ERR_INVALID_DATA - Verification of the authentication tag fails.
*/
NV_STATUS nvUvmInterfaceCslDecrypt(UvmCslContext *uvmCslContext,
NvU32 bufferSize,
NvU8 const *inputBuffer,
UvmCslIv const *decryptIv,
NvU8 *outputBuffer,
NvU8 const *addAuthData,
NvU32 addAuthDataSize,
NvU8 const *authTagBuffer);
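/*
    Example (illustrative sketch, not part of this interface): CPU-side
    authentication and decryption of a GPU-produced message. The ciphertext
    and tag are assumed to have been read from a buffer shared with the GPU;
    no additional authenticated data is used in this sketch.

    static NV_STATUS exampleCslDecrypt(UvmCslContext *cslContext,
                                       NvU8 const *ciphertext,
                                       NvU8 const *authTag,  // UVM_CSL_CRYPT_AUTH_TAG_SIZE_BYTES bytes
                                       NvU8 *plaintext,
                                       NvU32 size)
    {
        // decryptIv is NULL, so the CSL context's internal counter is used.
        // NV_ERR_INVALID_DATA indicates the tag did not verify; the plaintext
        // must not be used in that case.
        return nvUvmInterfaceCslDecrypt(cslContext,
                                        size,
                                        ciphertext,
                                        NULL,       // decryptIv
                                        plaintext,
                                        NULL,       // addAuthData
                                        0,          // addAuthDataSize (ignored)
                                        authTag);
    }
*/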
/*******************************************************************************
nvUvmInterfaceCslSign
Generates an authentication tag for secure work launch.
Auth and input buffers must not overlap. If they do then calling this function produces
undefined behavior.
See "CSL Interface and Locking" for locking requirements.
This function does not perform dynamic memory allocation.
Arguments:
uvmCslContext[IN/OUT] - The CSL context.
bufferSize[IN] - Size of the input buffer in units of bytes.
Value can range from 1 byte to (2^32) - 1 bytes.
inputBuffer[IN] - Address of plaintext input buffer.
authTagBuffer[OUT] - Address of authentication tag buffer.
Its size is UVM_CSL_SIGN_AUTH_TAG_SIZE_BYTES.
Error codes:
NV_ERR_INSUFFICIENT_RESOURCES - The signing operation would cause a counter overflow to occur.
NV_ERR_INVALID_ARGUMENT - The size of the data is 0 bytes.
*/
NV_STATUS nvUvmInterfaceCslSign(UvmCslContext *uvmCslContext,
NvU32 bufferSize,
NvU8 const *inputBuffer,
NvU8 *authTagBuffer);
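/*
    Example (illustrative sketch, not part of this interface): signing a work
    launch buffer. The buffer layout is assumed for illustration; the
    resulting tag accompanies the plaintext so the receiver can verify it
    before executing the launch.

    static NV_STATUS exampleCslSign(UvmCslContext *cslContext,
                                    NvU8 const *launchBuffer,
                                    NvU32 launchBufferSize,
                                    NvU8 *authTag)  // UVM_CSL_SIGN_AUTH_TAG_SIZE_BYTES bytes
    {
        return nvUvmInterfaceCslSign(cslContext,
                                     launchBufferSize,
                                     launchBuffer,
                                     authTag);
    }
*/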
/*******************************************************************************
nvUvmInterfaceCslQueryMessagePool
Returns the number of messages that can be encrypted before the message counter will overflow.
See "CSL Interface and Locking" for locking requirements.
This function does not perform dynamic memory allocation.
Arguments:
uvmCslContext[IN/OUT] - The CSL context.
operation[IN] - Either UVM_CSL_OPERATION_ENCRYPT or UVM_CSL_OPERATION_DECRYPT.
messageNum[OUT] - Number of messages left before overflow.
Error codes:
NV_ERR_INVALID_ARGUMENT - The value of the operation parameter is illegal.
*/
NV_STATUS nvUvmInterfaceCslQueryMessagePool(UvmCslContext *uvmCslContext,
UvmCslOperation operation,
NvU64 *messageNum);
/*******************************************************************************
nvUvmInterfaceCslIncrementIv
Increments the message counter by the specified amount.
If iv is non-NULL then the incremented value is returned.
If operation is UVM_CSL_OPERATION_ENCRYPT then the returned IV's "freshness" bit is set and
can be used in nvUvmInterfaceCslEncrypt. If operation is UVM_CSL_OPERATION_DECRYPT then
the returned IV can be used in nvUvmInterfaceCslDecrypt.
See "CSL Interface and Locking" for locking requirements.
This function does not perform dynamic memory allocation.
Arguments:
uvmCslContext[IN/OUT] - The CSL context.
operation[IN] - Either
- UVM_CSL_OPERATION_ENCRYPT
- UVM_CSL_OPERATION_DECRYPT
increment[IN] - The amount by which the IV is incremented. Can be 0.
iv[OUT] - If non-NULL, a buffer to store the incremented IV.
Error codes:
NV_ERR_INVALID_ARGUMENT - The value of the operation parameter is illegal.
NV_ERR_INSUFFICIENT_RESOURCES - Incrementing the message counter would result
in an overflow.
*/
NV_STATUS nvUvmInterfaceCslIncrementIv(UvmCslContext *uvmCslContext,
UvmCslOperation operation,
NvU64 increment,
UvmCslIv *iv);
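/*
    Example (illustrative sketch, not part of this interface): reserving an IV
    for a later decryption while keeping an eye on counter exhaustion. The
    low-water threshold and the rotation policy are assumptions made for
    illustration only.

    static NV_STATUS exampleReserveDecryptIv(UvmCslContext *cslContext,
                                             UvmCslIv *reservedIv)
    {
        NvU64 messagesLeft;
        NV_STATUS status;

        status = nvUvmInterfaceCslQueryMessagePool(cslContext,
                                                   UVM_CSL_OPERATION_DECRYPT,
                                                   &messagesLeft);
        if (status != NV_OK)
            return status;

        // Hypothetical policy: rotate before the message counter gets close
        // to overflowing (the channel must be idle for this operation).
        if (messagesLeft < 1024)
        {
            status = nvUvmInterfaceCslRotateIv(cslContext,
                                               UVM_CSL_OPERATION_DECRYPT);
            if (status != NV_OK)
                return status;
        }

        // Advance the message counter by one and capture the resulting IV,
        // which can later be passed to nvUvmInterfaceCslDecrypt().
        return nvUvmInterfaceCslIncrementIv(cslContext,
                                            UVM_CSL_OPERATION_DECRYPT,
                                            1,
                                            reservedIv);
    }
*/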
#endif // _NV_UVM_INTERFACE_H_