// SPDX-License-Identifier: GPL-2.0-only OR MIT
/*
* Copyright © 2024 Intel Corporation
*
* Authors:
* Matthew Brost <matthew.brost@intel.com>
*/
#include <linux/dma-mapping.h>
#include <linux/export.h>
#include <linux/hmm.h>
#include <linux/hugetlb_inline.h>
#include <linux/memremap.h>
#include <linux/mm_types.h>
#include <linux/slab.h>
#include <drm/drm_device.h>
#include <drm/drm_gpusvm.h>
#include <drm/drm_pagemap.h>
#include <drm/drm_print.h>
/**
* DOC: Overview
*
* GPU Shared Virtual Memory (GPU SVM) layer for the Direct Rendering Manager (DRM)
* is a component of the DRM framework designed to manage shared virtual memory
* between the CPU and GPU. It enables efficient data exchange and processing
* for GPU-accelerated applications by allowing memory sharing and
* synchronization between the CPU's and GPU's virtual address spaces.
*
* Key GPU SVM Components:
*
* - Notifiers:
* Used for tracking memory intervals and notifying the GPU of changes,
* notifiers are sized based on a GPU SVM initialization parameter, with a
* recommendation of 512M or larger. They maintain a Red-Black tree and a
* list of ranges that fall within the notifier interval. Notifiers are
* tracked within a GPU SVM Red-Black tree and list and are dynamically
* inserted or removed as ranges within the interval are created or
* destroyed.
* - Ranges:
* Represent memory ranges mapped in a DRM device and managed by GPU SVM.
* They are sized based on an array of chunk sizes, which is a GPU SVM
* initialization parameter, and the CPU address space. Upon GPU fault,
* the largest aligned chunk that fits within the faulting CPU address
* space is chosen for the range size. Ranges are expected to be
* dynamically allocated on GPU fault and removed on an MMU notifier UNMAP
* event. As mentioned above, ranges are tracked in a notifier's Red-Black
* tree.
*
* - Operations:
* Define the interface for driver-specific GPU SVM operations such as
* range allocation, notifier allocation, and invalidations.
*
* - Device Memory Allocations:
* Embedded structure containing enough information for GPU SVM to migrate
* to / from device memory.
*
* - Device Memory Operations:
* Define the interface for driver-specific device memory operations, such
* as releasing memory, populating pfns, and copying to / from device
* memory.
*
* This layer provides interfaces for allocating, mapping, migrating, and
* releasing memory ranges between the CPU and GPU. It handles all core memory
* management interactions (DMA mapping, HMM, and migration) and provides
* driver-specific virtual functions (vfuncs). This infrastructure is sufficient
* to build the expected driver components for an SVM implementation as detailed
* below.
*
* Expected Driver Components:
*
* - GPU page fault handler:
* Used to create ranges and notifiers based on the fault address,
* optionally migrate the range to device memory, and create GPU bindings.
*
* - Garbage collector:
* Used to unmap and destroy GPU bindings for ranges. Ranges are expected
* to be added to the garbage collector upon a MMU_NOTIFY_UNMAP event in the
* notifier callback.
*
* - Notifier callback:
* Used to invalidate and DMA unmap GPU bindings for ranges.
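*
* The expected driver components above are wired into GPU SVM through the
* operations structure passed to drm_gpusvm_init(). A minimal sketch is shown
* below; the driver_* helpers are hypothetical and correspond to the examples
* in the Examples section.
*
* .. code-block:: c
*
* static const struct drm_gpusvm_ops driver_gpusvm_ops = {
* .notifier_alloc = driver_notifier_alloc, // optional, kzalloc() used if NULL
* .notifier_free = driver_notifier_free, // optional, kfree() used if NULL
* .range_alloc = driver_range_alloc, // optional, kzalloc() used if NULL
* .range_free = driver_range_free, // optional, kfree() used if NULL
* .invalidate = driver_invalidation, // required, see the notifier callback example
* };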
*/
/**
* DOC: Locking
*
* GPU SVM handles locking for core MM interactions, i.e., it locks/unlocks the
* mmap lock as needed.
*
* GPU SVM introduces a global notifier lock, which safeguards the notifier's
* range RB tree and list, as well as the range's DMA mappings and sequence
* number. GPU SVM manages all necessary locking and unlocking operations,
* except for the recheck of the range's pages being valid
* (drm_gpusvm_range_pages_valid) when the driver is committing GPU bindings.
* This lock corresponds to the ``driver->update`` lock mentioned in
* Documentation/mm/hmm.rst. Future revisions may transition from a GPU SVM
* global lock to a per-notifier lock if finer-grained locking is deemed
* necessary.
*
* In addition to the locking mentioned above, the driver should implement a
* lock to safeguard core GPU SVM function calls that modify state, such as
* drm_gpusvm_range_find_or_insert and drm_gpusvm_range_remove. This lock is
* denoted as 'driver_svm_lock' in code examples. Finer-grained driver-side
* locking should also be possible for concurrent GPU fault processing within a
* single GPU SVM. The 'driver_svm_lock' can be registered via
* drm_gpusvm_driver_set_lock() to add lockdep annotations to GPU SVM.
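*
* A minimal sketch of the driver-side lock, assuming a single mutex is
* sufficient for the driver (the driver_* names are hypothetical):
*
* .. code-block:: c
*
* static struct mutex driver_svm_mutex;
*
* void driver_svm_lock_init(struct drm_gpusvm *gpusvm)
* {
* mutex_init(&driver_svm_mutex);
* // Annotate GPU SVM so lockdep can verify the lock is held around
* // state-modifying calls such as drm_gpusvm_range_find_or_insert()
* drm_gpusvm_driver_set_lock(gpusvm, &driver_svm_mutex);
* }
*
* void driver_svm_lock(void)
* {
* mutex_lock(&driver_svm_mutex);
* }
*
* void driver_svm_unlock(void)
* {
* mutex_unlock(&driver_svm_mutex);
* }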
*/
/**
* DOC: Partial Unmapping of Ranges
*
* Partial unmapping of ranges (e.g., 1M out of 2M is unmapped by CPU resulting
* in MMU_NOTIFY_UNMAP event) presents several challenges, with the main one
* being that a subset of the range still has CPU and GPU mappings. If the
* backing store for the range is in device memory, a subset of the backing
* store has references. One option would be to split the range and device
* memory backing store, but the implementation for this would be quite
* complicated. Given that partial unmappings are rare and driver-defined range
* sizes are relatively small, GPU SVM does not support splitting of ranges.
*
* With no support for range splitting, upon partial unmapping of a range, the
* driver is expected to invalidate and destroy the entire range. If the range
* has device memory as its backing, the driver is also expected to migrate any
* remaining pages back to RAM.
*/
/**
* DOC: Examples
*
* This section provides three examples of how to build the expected driver
* components: the GPU page fault handler, the garbage collector, and the
* notifier callback.
*
* The generic code provided does not include logic for complex migration
* policies, optimized invalidations, fine-grained driver locking, or other
* potentially required driver locking (e.g., DMA-resv locks).
*
* 1) GPU page fault handler
*
* .. code-block:: c
*
* int driver_bind_range(struct drm_gpusvm *gpusvm, struct drm_gpusvm_range *range)
* {
* int err = 0;
*
* driver_alloc_and_setup_memory_for_bind(gpusvm, range);
*
* drm_gpusvm_notifier_lock(gpusvm);
* if (drm_gpusvm_range_pages_valid(range))
* driver_commit_bind(gpusvm, range);
* else
* err = -EAGAIN;
* drm_gpusvm_notifier_unlock(gpusvm);
*
* return err;
* }
*
* int driver_gpu_fault(struct drm_gpusvm *gpusvm, unsigned long fault_addr,
* unsigned long gpuva_start, unsigned long gpuva_end)
* {
* struct drm_gpusvm_ctx ctx = {};
* struct drm_gpusvm_range *range;
* int err;
*
* driver_svm_lock();
* retry:
* // Always process UNMAPs first so view of GPU SVM ranges is current
* driver_garbage_collector(gpusvm);
*
* range = drm_gpusvm_range_find_or_insert(gpusvm, fault_addr,
* gpuva_start, gpuva_end,
* &ctx);
* if (IS_ERR(range)) {
* err = PTR_ERR(range);
* goto unlock;
* }
*
* if (driver_migration_policy(range)) {
* err = drm_pagemap_populate_mm(driver_choose_drm_pagemap(),
* gpuva_start, gpuva_end, gpusvm->mm,
* ctx.timeslice_ms);
* if (err) // CPU mappings may have changed
* goto retry;
* }
*
* err = drm_gpusvm_range_get_pages(gpusvm, range, &ctx);
* if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM) { // CPU mappings changed
* if (err == -EOPNOTSUPP)
* drm_gpusvm_range_evict(gpusvm, range);
* goto retry;
* } else if (err) {
* goto unlock;
* }
*
* err = driver_bind_range(gpusvm, range);
* if (err == -EAGAIN) // CPU mappings changed
* goto retry;
*
* unlock:
* driver_svm_unlock();
* return err;
* }
*
* 2) Garbage Collector
*
* .. code-block:: c
*
* void __driver_garbage_collector(struct drm_gpusvm *gpusvm,
* struct drm_gpusvm_range *range)
* {
* assert_driver_svm_locked(gpusvm);
*
* // Partial unmap, migrate any remaining device memory pages back to RAM
* if (range->flags.partial_unmap)
* drm_gpusvm_range_evict(gpusvm, range);
*
* driver_unbind_range(range);
* drm_gpusvm_range_remove(gpusvm, range);
* }
*
* void driver_garbage_collector(struct drm_gpusvm *gpusvm)
* {
* assert_driver_svm_locked(gpusvm);
*
* for_each_range_in_garbage_collector(gpusvm, range)
* __driver_garbage_collector(gpusvm, range);
* }
*
* 3) Notifier callback
*
* .. code-block:: c
*
* void driver_invalidation(struct drm_gpusvm *gpusvm,
* struct drm_gpusvm_notifier *notifier,
* const struct mmu_notifier_range *mmu_range)
* {
* struct drm_gpusvm_ctx ctx = { .in_notifier = true, };
* struct drm_gpusvm_range *range = NULL;
*
* driver_invalidate_device_pages(gpusvm, mmu_range->start, mmu_range->end);
*
* drm_gpusvm_for_each_range(range, notifier, mmu_range->start,
* mmu_range->end) {
* drm_gpusvm_range_unmap_pages(gpusvm, range, &ctx);
*
* if (mmu_range->event != MMU_NOTIFY_UNMAP)
* continue;
*
* drm_gpusvm_range_set_unmapped(range, mmu_range);
* driver_garbage_collector_add(gpusvm, range);
* }
* }
*/
/**
* npages_in_range() - Calculate the number of pages in a given range
* @start: The start address of the range
* @end: The end address of the range
*
* This function calculates the number of pages in a given memory range,
* specified by the start and end addresses. It divides the difference
* between the end and start addresses by the page size (PAGE_SIZE) to
* determine the number of pages in the range.
*
* Return: The number of pages in the specified range.
*/
static unsigned long
npages_in_range(unsigned long start, unsigned long end)
{
return (end - start) >> PAGE_SHIFT;
}
/**
* drm_gpusvm_range_find() - Find GPU SVM range from GPU SVM notifier
* @notifier: Pointer to the GPU SVM notifier structure.
* @start: Start address of the range
* @end: End address of the range
*
* Return: A pointer to the drm_gpusvm_range if found or NULL
*/
struct drm_gpusvm_range *
drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier, unsigned long start,
unsigned long end)
{
struct interval_tree_node *itree;
itree = interval_tree_iter_first(&notifier->root, start, end - 1);
if (itree)
return container_of(itree, struct drm_gpusvm_range, itree);
else
return NULL;
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_find);
/**
* drm_gpusvm_for_each_range_safe() - Safely iterate over GPU SVM ranges in a notifier
* @range__: Iterator variable for the ranges
* @next__: Iterator variable for the ranges temporary storage
* @notifier__: Pointer to the GPU SVM notifier
* @start__: Start address of the range
* @end__: End address of the range
*
* This macro is used to iterate over GPU SVM ranges in a notifier while
* removing ranges from it.
*/
#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
(next__) = __drm_gpusvm_range_next(range__); \
(range__) && (drm_gpusvm_range_start(range__) < (end__)); \
(range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
/**
* __drm_gpusvm_notifier_next() - get the next drm_gpusvm_notifier in the list
* @notifier: a pointer to the current drm_gpusvm_notifier
*
* Return: A pointer to the next drm_gpusvm_notifier if available, or NULL if
* the current notifier is the last one or if the input notifier is
* NULL.
*/
static struct drm_gpusvm_notifier *
__drm_gpusvm_notifier_next(struct drm_gpusvm_notifier *notifier)
{
if (notifier && !list_is_last(&notifier->entry,
&notifier->gpusvm->notifier_list))
return list_next_entry(notifier, entry);
return NULL;
}
static struct drm_gpusvm_notifier *
notifier_iter_first(struct rb_root_cached *root, unsigned long start,
unsigned long last)
{
struct interval_tree_node *itree;
itree = interval_tree_iter_first(root, start, last);
if (itree)
return container_of(itree, struct drm_gpusvm_notifier, itree);
else
return NULL;
}
/**
* drm_gpusvm_for_each_notifier() - Iterate over GPU SVM notifiers in a gpusvm
* @notifier__: Iterator variable for the notifiers
* @gpusvm__: Pointer to the GPU SVM structure
* @start__: Start address of the notifier
* @end__: End address of the notifier
*
* This macro is used to iterate over GPU SVM notifiers in a gpusvm.
*/
#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1); \
(notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
(notifier__) = __drm_gpusvm_notifier_next(notifier__))
/**
* drm_gpusvm_for_each_notifier_safe() - Safely iterate over GPU SVM notifiers in a gpusvm
* @notifier__: Iterator variable for the notifiers
* @next__: Iterator variable for the notifiers temporary storage
* @gpusvm__: Pointer to the GPU SVM structure
* @start__: Start address of the notifier
* @end__: End address of the notifier
*
* This macro is used to iterate over GPU SVM notifiers in a gpusvm while
* removing notifiers from it.
*/
#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
(next__) = __drm_gpusvm_notifier_next(notifier__); \
(notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
(notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
/**
* drm_gpusvm_notifier_invalidate() - Invalidate a GPU SVM notifier.
* @mni: Pointer to the mmu_interval_notifier structure.
* @mmu_range: Pointer to the mmu_notifier_range structure.
* @cur_seq: Current sequence number.
*
* This function serves as a generic MMU notifier for GPU SVM. It sets the MMU
* notifier sequence number and calls the driver invalidate vfunc under
* gpusvm->notifier_lock.
*
* Return: true if the operation succeeds, false otherwise.
*/
static bool
drm_gpusvm_notifier_invalidate(struct mmu_interval_notifier *mni,
const struct mmu_notifier_range *mmu_range,
unsigned long cur_seq)
{
struct drm_gpusvm_notifier *notifier =
container_of(mni, typeof(*notifier), notifier);
struct drm_gpusvm *gpusvm = notifier->gpusvm;
if (!mmu_notifier_range_blockable(mmu_range))
return false;
down_write(&gpusvm->notifier_lock);
mmu_interval_set_seq(mni, cur_seq);
gpusvm->ops->invalidate(gpusvm, notifier, mmu_range);
up_write(&gpusvm->notifier_lock);
return true;
}
/*
* drm_gpusvm_notifier_ops - MMU interval notifier operations for GPU SVM
*/
static const struct mmu_interval_notifier_ops drm_gpusvm_notifier_ops = {
.invalidate = drm_gpusvm_notifier_invalidate,
};
/**
* drm_gpusvm_init() - Initialize the GPU SVM.
* @gpusvm: Pointer to the GPU SVM structure.
* @name: Name of the GPU SVM.
* @drm: Pointer to the DRM device structure.
* @mm: Pointer to the mm_struct for the address space.
* @device_private_page_owner: Device private pages owner.
* @mm_start: Start address of GPU SVM.
* @mm_range: Range of the GPU SVM.
* @notifier_size: Size of individual notifiers.
* @ops: Pointer to the operations structure for GPU SVM.
* @chunk_sizes: Pointer to the array of chunk sizes used in range allocation.
* Entries should be powers of 2 in descending order with last
* entry being SZ_4K.
* @num_chunks: Number of chunks.
*
* This function initializes the GPU SVM.
*
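* A minimal usage sketch is shown below; the driver structure and names are
* hypothetical. NULL is passed for @device_private_page_owner, which is only
* needed by drivers that use device-private pages. The chunk sizes are powers
* of 2 in descending order ending with SZ_4K, as required.
*
* .. code-block:: c
*
* static const unsigned long driver_chunk_sizes[] = { SZ_2M, SZ_64K, SZ_4K };
*
* err = drm_gpusvm_init(&driver->gpusvm, "driver-svm", &driver->drm,
* current->mm, NULL, 0, TASK_SIZE, SZ_512M,
* &driver_gpusvm_ops, driver_chunk_sizes,
* ARRAY_SIZE(driver_chunk_sizes));
*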
* Return: 0 on success, a negative error code on failure.
*/
int drm_gpusvm_init(struct drm_gpusvm *gpusvm,
const char *name, struct drm_device *drm,
struct mm_struct *mm, void *device_private_page_owner,
unsigned long mm_start, unsigned long mm_range,
unsigned long notifier_size,
const struct drm_gpusvm_ops *ops,
const unsigned long *chunk_sizes, int num_chunks)
{
if (!ops->invalidate || !num_chunks)
return -EINVAL;
gpusvm->name = name;
gpusvm->drm = drm;
gpusvm->mm = mm;
gpusvm->device_private_page_owner = device_private_page_owner;
gpusvm->mm_start = mm_start;
gpusvm->mm_range = mm_range;
gpusvm->notifier_size = notifier_size;
gpusvm->ops = ops;
gpusvm->chunk_sizes = chunk_sizes;
gpusvm->num_chunks = num_chunks;
mmgrab(mm);
gpusvm->root = RB_ROOT_CACHED;
INIT_LIST_HEAD(&gpusvm->notifier_list);
init_rwsem(&gpusvm->notifier_lock);
fs_reclaim_acquire(GFP_KERNEL);
might_lock(&gpusvm->notifier_lock);
fs_reclaim_release(GFP_KERNEL);
#ifdef CONFIG_LOCKDEP
gpusvm->lock_dep_map = NULL;
#endif
return 0;
}
EXPORT_SYMBOL_GPL(drm_gpusvm_init);
/**
* drm_gpusvm_notifier_find() - Find GPU SVM notifier
* @gpusvm: Pointer to the GPU SVM structure
* @fault_addr: Fault address
*
* This function finds the GPU SVM notifier associated with the fault address.
*
* Return: Pointer to the GPU SVM notifier on success, NULL otherwise.
*/
static struct drm_gpusvm_notifier *
drm_gpusvm_notifier_find(struct drm_gpusvm *gpusvm,
unsigned long fault_addr)
{
return notifier_iter_first(&gpusvm->root, fault_addr, fault_addr + 1);
}
/**
* to_drm_gpusvm_notifier() - retrieve the container struct for a given rbtree node
* @node: a pointer to the rbtree node embedded within a drm_gpusvm_notifier struct
*
* Return: A pointer to the containing drm_gpusvm_notifier structure.
*/
static struct drm_gpusvm_notifier *to_drm_gpusvm_notifier(struct rb_node *node)
{
return container_of(node, struct drm_gpusvm_notifier, itree.rb);
}
/**
* drm_gpusvm_notifier_insert() - Insert GPU SVM notifier
* @gpusvm: Pointer to the GPU SVM structure
* @notifier: Pointer to the GPU SVM notifier structure
*
* This function inserts the GPU SVM notifier into the GPU SVM RB tree and list.
*/
static void drm_gpusvm_notifier_insert(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_notifier *notifier)
{
struct rb_node *node;
struct list_head *head;
interval_tree_insert(&notifier->itree, &gpusvm->root);
node = rb_prev(&notifier->itree.rb);
if (node)
head = &(to_drm_gpusvm_notifier(node))->entry;
else
head = &gpusvm->notifier_list;
list_add(&notifier->entry, head);
}
/**
* drm_gpusvm_notifier_remove() - Remove GPU SVM notifier
* @gpusvm: Pointer to the GPU SVM structure
* @notifier: Pointer to the GPU SVM notifier structure
*
* This function removes the GPU SVM notifier from the GPU SVM RB tree and list.
*/
static void drm_gpusvm_notifier_remove(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_notifier *notifier)
{
interval_tree_remove(&notifier->itree, &gpusvm->root);
list_del(&notifier->entry);
}
/**
* drm_gpusvm_fini() - Finalize the GPU SVM.
* @gpusvm: Pointer to the GPU SVM structure.
*
* This function finalizes the GPU SVM by cleaning up any remaining ranges and
* notifiers, and dropping a reference to struct MM.
*/
void drm_gpusvm_fini(struct drm_gpusvm *gpusvm)
{
struct drm_gpusvm_notifier *notifier, *next;
drm_gpusvm_for_each_notifier_safe(notifier, next, gpusvm, 0, LONG_MAX) {
struct drm_gpusvm_range *range, *__next;
/*
* Remove notifier first to avoid racing with any invalidation
*/
mmu_interval_notifier_remove(&notifier->notifier);
notifier->flags.removed = true;
drm_gpusvm_for_each_range_safe(range, __next, notifier, 0,
LONG_MAX)
drm_gpusvm_range_remove(gpusvm, range);
}
mmdrop(gpusvm->mm);
WARN_ON(!RB_EMPTY_ROOT(&gpusvm->root.rb_root));
}
EXPORT_SYMBOL_GPL(drm_gpusvm_fini);
/**
* drm_gpusvm_notifier_alloc() - Allocate GPU SVM notifier
* @gpusvm: Pointer to the GPU SVM structure
* @fault_addr: Fault address
*
* This function allocates and initializes the GPU SVM notifier structure.
*
* Return: Pointer to the allocated GPU SVM notifier on success, ERR_PTR() on failure.
*/
static struct drm_gpusvm_notifier *
drm_gpusvm_notifier_alloc(struct drm_gpusvm *gpusvm, unsigned long fault_addr)
{
struct drm_gpusvm_notifier *notifier;
if (gpusvm->ops->notifier_alloc)
notifier = gpusvm->ops->notifier_alloc();
else
notifier = kzalloc(sizeof(*notifier), GFP_KERNEL);
if (!notifier)
return ERR_PTR(-ENOMEM);
notifier->gpusvm = gpusvm;
notifier->itree.start = ALIGN_DOWN(fault_addr, gpusvm->notifier_size);
notifier->itree.last = ALIGN(fault_addr + 1, gpusvm->notifier_size) - 1;
INIT_LIST_HEAD(&notifier->entry);
notifier->root = RB_ROOT_CACHED;
INIT_LIST_HEAD(&notifier->range_list);
return notifier;
}
/**
* drm_gpusvm_notifier_free() - Free GPU SVM notifier
* @gpusvm: Pointer to the GPU SVM structure
* @notifier: Pointer to the GPU SVM notifier structure
*
* This function frees the GPU SVM notifier structure.
*/
static void drm_gpusvm_notifier_free(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_notifier *notifier)
{
WARN_ON(!RB_EMPTY_ROOT(&notifier->root.rb_root));
if (gpusvm->ops->notifier_free)
gpusvm->ops->notifier_free(notifier);
else
kfree(notifier);
}
/**
* to_drm_gpusvm_range() - retrieve the container struct for a given rbtree node
* @node: a pointer to the rbtree node embedded within a drm_gpusvm_range struct
*
* Return: A pointer to the containing drm_gpusvm_range structure.
*/
static struct drm_gpusvm_range *to_drm_gpusvm_range(struct rb_node *node)
{
return container_of(node, struct drm_gpusvm_range, itree.rb);
}
/**
* drm_gpusvm_range_insert() - Insert GPU SVM range
* @notifier: Pointer to the GPU SVM notifier structure
* @range: Pointer to the GPU SVM range structure
*
* This function inserts the GPU SVM range into the notifier RB tree and list.
*/
static void drm_gpusvm_range_insert(struct drm_gpusvm_notifier *notifier,
struct drm_gpusvm_range *range)
{
struct rb_node *node;
struct list_head *head;
drm_gpusvm_notifier_lock(notifier->gpusvm);
interval_tree_insert(&range->itree, &notifier->root);
node = rb_prev(&range->itree.rb);
if (node)
head = &(to_drm_gpusvm_range(node))->entry;
else
head = &notifier->range_list;
list_add(&range->entry, head);
drm_gpusvm_notifier_unlock(notifier->gpusvm);
}
/**
* __drm_gpusvm_range_remove() - Remove GPU SVM range
* @notifier: Pointer to the GPU SVM notifier structure
* @range: Pointer to the GPU SVM range structure
*
* This function removes the GPU SVM range from the notifier RB tree and list.
*/
static void __drm_gpusvm_range_remove(struct drm_gpusvm_notifier *notifier,
struct drm_gpusvm_range *range)
{
interval_tree_remove(&range->itree, &notifier->root);
list_del(&range->entry);
}
/**
* drm_gpusvm_range_alloc() - Allocate GPU SVM range
* @gpusvm: Pointer to the GPU SVM structure
* @notifier: Pointer to the GPU SVM notifier structure
* @fault_addr: Fault address
* @chunk_size: Chunk size
* @migrate_devmem: Flag indicating whether to migrate device memory
*
* This function allocates and initializes the GPU SVM range structure.
*
* Return: Pointer to the allocated GPU SVM range on success, ERR_PTR() on failure.
*/
static struct drm_gpusvm_range *
drm_gpusvm_range_alloc(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_notifier *notifier,
unsigned long fault_addr, unsigned long chunk_size,
bool migrate_devmem)
{
struct drm_gpusvm_range *range;
if (gpusvm->ops->range_alloc)
range = gpusvm->ops->range_alloc(gpusvm);
else
range = kzalloc(sizeof(*range), GFP_KERNEL);
if (!range)
return ERR_PTR(-ENOMEM);
kref_init(&range->refcount);
range->gpusvm = gpusvm;
range->notifier = notifier;
range->itree.start = ALIGN_DOWN(fault_addr, chunk_size);
range->itree.last = ALIGN(fault_addr + 1, chunk_size) - 1;
INIT_LIST_HEAD(&range->entry);
range->notifier_seq = LONG_MAX;
range->flags.migrate_devmem = migrate_devmem ? 1 : 0;
return range;
}
/**
* drm_gpusvm_check_pages() - Check pages
* @gpusvm: Pointer to the GPU SVM structure
* @notifier: Pointer to the GPU SVM notifier structure
* @start: Start address
* @end: End address
*
* Check if pages between start and end have been faulted in on the CPU. Used
* to prevent migration of pages without a CPU backing store.
*
* Return: True if pages have been faulted into CPU, False otherwise
*/
static bool drm_gpusvm_check_pages(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_notifier *notifier,
unsigned long start, unsigned long end)
{
struct hmm_range hmm_range = {
.default_flags = 0,
.notifier = &notifier->notifier,
.start = start,
.end = end,
.dev_private_owner = gpusvm->device_private_page_owner,
};
unsigned long timeout =
jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
unsigned long *pfns;
unsigned long npages = npages_in_range(start, end);
int err, i;
mmap_assert_locked(gpusvm->mm);
pfns = kvmalloc_array(npages, sizeof(*pfns), GFP_KERNEL);
if (!pfns)
return false;
hmm_range.notifier_seq = mmu_interval_read_begin(&notifier->notifier);
hmm_range.hmm_pfns = pfns;
while (true) {
err = hmm_range_fault(&hmm_range);
if (err == -EBUSY) {
if (time_after(jiffies, timeout))
break;
hmm_range.notifier_seq =
mmu_interval_read_begin(&notifier->notifier);
continue;
}
break;
}
if (err)
goto err_free;
for (i = 0; i < npages;) {
if (!(pfns[i] & HMM_PFN_VALID)) {
err = -EFAULT;
goto err_free;
}
i += 0x1 << hmm_pfn_to_map_order(pfns[i]);
}
err_free:
kvfree(pfns);
return err ? false : true;
}
/**
* drm_gpusvm_range_chunk_size() - Determine chunk size for GPU SVM range
* @gpusvm: Pointer to the GPU SVM structure
* @notifier: Pointer to the GPU SVM notifier structure
* @vas: Pointer to the virtual memory area structure
* @fault_addr: Fault address
* @gpuva_start: Start address of GPUVA which mirrors CPU
* @gpuva_end: End address of GPUVA which mirrors CPU
* @check_pages_threshold: Check CPU pages for present threshold
*
* This function determines the chunk size for the GPU SVM range based on the
* fault address, GPU SVM chunk sizes, existing GPU SVM ranges, and the virtual
* memory area boundaries.
*
* Return: Chunk size on success, LONG_MAX on failure.
*/
static unsigned long
drm_gpusvm_range_chunk_size(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_notifier *notifier,
struct vm_area_struct *vas,
unsigned long fault_addr,
unsigned long gpuva_start,
unsigned long gpuva_end,
unsigned long check_pages_threshold)
{
unsigned long start, end;
int i = 0;
retry:
for (; i < gpusvm->num_chunks; ++i) {
start = ALIGN_DOWN(fault_addr, gpusvm->chunk_sizes[i]);
end = ALIGN(fault_addr + 1, gpusvm->chunk_sizes[i]);
if (start >= vas->vm_start && end <= vas->vm_end &&
start >= drm_gpusvm_notifier_start(notifier) &&
end <= drm_gpusvm_notifier_end(notifier) &&
start >= gpuva_start && end <= gpuva_end)
break;
}
if (i == gpusvm->num_chunks)
return LONG_MAX;
/*
* If the allocation is larger than a page, ensure it does not overlap with
* existing ranges.
*/
if (end - start != SZ_4K) {
struct drm_gpusvm_range *range;
range = drm_gpusvm_range_find(notifier, start, end);
if (range) {
++i;
goto retry;
}
/*
* XXX: Only create range on pages CPU has faulted in. Without
* this check, or prefault, on BMG 'xe_exec_system_allocator --r
* process-many-malloc' fails. In the failure case, each process
* mallocs 16k but the CPU VMA is ~128k which results in 64k SVM
* ranges. When migrating the SVM ranges, some processes fail in
* drm_pagemap_migrate_to_devmem with 'migrate.cpages != npages'
* and then upon drm_gpusvm_range_get_pages device pages from
* other processes are collected + faulted in which creates all
* sorts of problems. Unsure exactly how this is happening; the
* problem also goes away if 'xe_exec_system_allocator --r
* process-many-malloc' mallocs at least 64k at a time.
*/
if (end - start <= check_pages_threshold &&
!drm_gpusvm_check_pages(gpusvm, notifier, start, end)) {
++i;
goto retry;
}
}
return end - start;
}
#ifdef CONFIG_LOCKDEP
/**
* drm_gpusvm_driver_lock_held() - Assert GPU SVM driver lock is held
* @gpusvm: Pointer to the GPU SVM structure.
*
* Ensure driver lock is held.
*/
static void drm_gpusvm_driver_lock_held(struct drm_gpusvm *gpusvm)
{
if ((gpusvm)->lock_dep_map)
lockdep_assert(lock_is_held_type((gpusvm)->lock_dep_map, 0));
}
#else
static void drm_gpusvm_driver_lock_held(struct drm_gpusvm *gpusvm)
{
}
#endif
/**
* drm_gpusvm_find_vma_start() - Find start address for first VMA in range
* @gpusvm: Pointer to the GPU SVM structure
* @start: The inclusive start user address.
* @end: The exclusive end user address.
*
* Return: The start address of the first VMA within the provided range,
* ULONG_MAX otherwise. Assumes start < end.
*/
unsigned long
drm_gpusvm_find_vma_start(struct drm_gpusvm *gpusvm,
unsigned long start,
unsigned long end)
{
struct mm_struct *mm = gpusvm->mm;
struct vm_area_struct *vma;
unsigned long addr = ULONG_MAX;
if (!mmget_not_zero(mm))
return addr;
mmap_read_lock(mm);
vma = find_vma_intersection(mm, start, end);
if (vma)
addr = vma->vm_start;
mmap_read_unlock(mm);
mmput(mm);
return addr;
}
EXPORT_SYMBOL_GPL(drm_gpusvm_find_vma_start);
/**
* drm_gpusvm_range_find_or_insert() - Find or insert GPU SVM range
* @gpusvm: Pointer to the GPU SVM structure
* @fault_addr: Fault address
* @gpuva_start: Start address of GPUVA which mirrors CPU
* @gpuva_end: End address of GPUVA which mirrors CPU
* @ctx: GPU SVM context
*
* This function finds or inserts a newly allocated GPU SVM range based on the
* fault address. Caller must hold a lock to protect range lookup and insertion.
*
* Return: Pointer to the GPU SVM range on success, ERR_PTR() on failure.
*/
struct drm_gpusvm_range *
drm_gpusvm_range_find_or_insert(struct drm_gpusvm *gpusvm,
unsigned long fault_addr,
unsigned long gpuva_start,
unsigned long gpuva_end,
const struct drm_gpusvm_ctx *ctx)
{
struct drm_gpusvm_notifier *notifier;
struct drm_gpusvm_range *range;
struct mm_struct *mm = gpusvm->mm;
struct vm_area_struct *vas;
bool notifier_alloc = false;
unsigned long chunk_size;
int err;
bool migrate_devmem;
drm_gpusvm_driver_lock_held(gpusvm);
if (fault_addr < gpusvm->mm_start ||
fault_addr > gpusvm->mm_start + gpusvm->mm_range)
return ERR_PTR(-EINVAL);
if (!mmget_not_zero(mm))
return ERR_PTR(-EFAULT);
notifier = drm_gpusvm_notifier_find(gpusvm, fault_addr);
if (!notifier) {
notifier = drm_gpusvm_notifier_alloc(gpusvm, fault_addr);
if (IS_ERR(notifier)) {
err = PTR_ERR(notifier);
goto err_mmunlock;
}
notifier_alloc = true;
err = mmu_interval_notifier_insert(&notifier->notifier,
mm,
drm_gpusvm_notifier_start(notifier),
drm_gpusvm_notifier_size(notifier),
&drm_gpusvm_notifier_ops);
if (err)
goto err_notifier;
}
mmap_read_lock(mm);
vas = vma_lookup(mm, fault_addr);
if (!vas) {
err = -ENOENT;
goto err_notifier_remove;
}
if (!ctx->read_only && !(vas->vm_flags & VM_WRITE)) {
err = -EPERM;
goto err_notifier_remove;
}
range = drm_gpusvm_range_find(notifier, fault_addr, fault_addr + 1);
if (range)
goto out_mmunlock;
/*
* XXX: Short-circuiting migration based on migrate_vma_* current
* limitations. If/when migrate_vma_* add more support, this logic will
* have to change.
*/
migrate_devmem = ctx->devmem_possible &&
vma_is_anonymous(vas) && !is_vm_hugetlb_page(vas);
chunk_size = drm_gpusvm_range_chunk_size(gpusvm, notifier, vas,
fault_addr, gpuva_start,
gpuva_end,
ctx->check_pages_threshold);
if (chunk_size == LONG_MAX) {
err = -EINVAL;
goto err_notifier_remove;
}
range = drm_gpusvm_range_alloc(gpusvm, notifier, fault_addr, chunk_size,
migrate_devmem);
if (IS_ERR(range)) {
err = PTR_ERR(range);
goto err_notifier_remove;
}
drm_gpusvm_range_insert(notifier, range);
if (notifier_alloc)
drm_gpusvm_notifier_insert(gpusvm, notifier);
out_mmunlock:
mmap_read_unlock(mm);
mmput(mm);
return range;
err_notifier_remove:
mmap_read_unlock(mm);
if (notifier_alloc)
mmu_interval_notifier_remove(&notifier->notifier);
err_notifier:
if (notifier_alloc)
drm_gpusvm_notifier_free(gpusvm, notifier);
err_mmunlock:
mmput(mm);
return ERR_PTR(err);
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_find_or_insert);
/**
* __drm_gpusvm_range_unmap_pages() - Unmap pages associated with a GPU SVM range (internal)
* @gpusvm: Pointer to the GPU SVM structure
* @range: Pointer to the GPU SVM range structure
* @npages: Number of pages to unmap
*
* This function unmaps pages associated with a GPU SVM range. Assumes and
* asserts correct locking is in place when called.
*/
static void __drm_gpusvm_range_unmap_pages(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range,
unsigned long npages)
{
unsigned long i, j;
struct drm_pagemap *dpagemap = range->dpagemap;
struct device *dev = gpusvm->drm->dev;
lockdep_assert_held(&gpusvm->notifier_lock);
if (range->flags.has_dma_mapping) {
struct drm_gpusvm_range_flags flags = {
.__flags = range->flags.__flags,
};
for (i = 0, j = 0; i < npages; j++) {
struct drm_pagemap_device_addr *addr = &range->dma_addr[j];
if (addr->proto == DRM_INTERCONNECT_SYSTEM)
dma_unmap_page(dev,
addr->addr,
PAGE_SIZE << addr->order,
addr->dir);
else if (dpagemap && dpagemap->ops->device_unmap)
dpagemap->ops->device_unmap(dpagemap,
dev, *addr);
i += 1 << addr->order;
}
/* WRITE_ONCE pairs with READ_ONCE for opportunistic checks */
flags.has_devmem_pages = false;
flags.has_dma_mapping = false;
WRITE_ONCE(range->flags.__flags, flags.__flags);
range->dpagemap = NULL;
}
}
/**
* drm_gpusvm_range_free_pages() - Free pages associated with a GPU SVM range
* @gpusvm: Pointer to the GPU SVM structure
* @range: Pointer to the GPU SVM range structure
*
* This function frees the dma address array associated with a GPU SVM range.
*/
static void drm_gpusvm_range_free_pages(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range)
{
lockdep_assert_held(&gpusvm->notifier_lock);
if (range->dma_addr) {
kvfree(range->dma_addr);
range->dma_addr = NULL;
}
}
/**
* drm_gpusvm_range_remove() - Remove GPU SVM range
* @gpusvm: Pointer to the GPU SVM structure
* @range: Pointer to the GPU SVM range to be removed
*
* This function removes the specified GPU SVM range and also removes the parent
* GPU SVM notifier if no more ranges remain in the notifier. The caller must
* hold a lock to protect range and notifier removal.
*/
void drm_gpusvm_range_remove(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range)
{
unsigned long npages = npages_in_range(drm_gpusvm_range_start(range),
drm_gpusvm_range_end(range));
struct drm_gpusvm_notifier *notifier;
drm_gpusvm_driver_lock_held(gpusvm);
notifier = drm_gpusvm_notifier_find(gpusvm,
drm_gpusvm_range_start(range));
if (WARN_ON_ONCE(!notifier))
return;
drm_gpusvm_notifier_lock(gpusvm);
__drm_gpusvm_range_unmap_pages(gpusvm, range, npages);
drm_gpusvm_range_free_pages(gpusvm, range);
__drm_gpusvm_range_remove(notifier, range);
drm_gpusvm_notifier_unlock(gpusvm);
drm_gpusvm_range_put(range);
if (RB_EMPTY_ROOT(&notifier->root.rb_root)) {
if (!notifier->flags.removed)
mmu_interval_notifier_remove(&notifier->notifier);
drm_gpusvm_notifier_remove(gpusvm, notifier);
drm_gpusvm_notifier_free(gpusvm, notifier);
}
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_remove);
/**
* drm_gpusvm_range_get() - Get a reference to GPU SVM range
* @range: Pointer to the GPU SVM range
*
* This function increments the reference count of the specified GPU SVM range.
*
* Return: Pointer to the GPU SVM range.
*/
struct drm_gpusvm_range *
drm_gpusvm_range_get(struct drm_gpusvm_range *range)
{
kref_get(&range->refcount);
return range;
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_get);
/**
* drm_gpusvm_range_destroy() - Destroy GPU SVM range
* @refcount: Pointer to the reference counter embedded in the GPU SVM range
*
* This function destroys the specified GPU SVM range when its reference count
* reaches zero. If a custom range-free function is provided, it is invoked to
* free the range; otherwise, the range is deallocated using kfree().
*/
static void drm_gpusvm_range_destroy(struct kref *refcount)
{
struct drm_gpusvm_range *range =
container_of(refcount, struct drm_gpusvm_range, refcount);
struct drm_gpusvm *gpusvm = range->gpusvm;
if (gpusvm->ops->range_free)
gpusvm->ops->range_free(range);
else
kfree(range);
}
/**
* drm_gpusvm_range_put() - Put a reference to GPU SVM range
* @range: Pointer to the GPU SVM range
*
* This function decrements the reference count of the specified GPU SVM range
* and frees it when the count reaches zero.
*/
void drm_gpusvm_range_put(struct drm_gpusvm_range *range)
{
kref_put(&range->refcount, drm_gpusvm_range_destroy);
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_put);
/**
* drm_gpusvm_range_pages_valid() - GPU SVM range pages valid
* @gpusvm: Pointer to the GPU SVM structure
* @range: Pointer to the GPU SVM range structure
*
* This function determines if a GPU SVM range's pages are valid. It is
* expected to be called holding gpusvm->notifier_lock and as the last step
* before committing a GPU binding. This is akin to a notifier seqno check in
* the HMM documentation but, due to wider notifiers (i.e., notifiers which
* span multiple ranges), this function is required for finer-grained checking
* (i.e., per range) of whether pages are valid.
*
* Return: True if GPU SVM range has valid pages, False otherwise
*/
bool drm_gpusvm_range_pages_valid(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range)
{
lockdep_assert_held(&gpusvm->notifier_lock);
return range->flags.has_devmem_pages || range->flags.has_dma_mapping;
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_pages_valid);
/**
* drm_gpusvm_range_pages_valid_unlocked() - GPU SVM range pages valid unlocked
* @gpusvm: Pointer to the GPU SVM structure
* @range: Pointer to the GPU SVM range structure
*
* This function determines if a GPU SVM range's pages are valid. It is
* expected to be called without holding gpusvm->notifier_lock.
*
* Return: True if GPU SVM range has valid pages, False otherwise
*/
static bool
drm_gpusvm_range_pages_valid_unlocked(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range)
{
bool pages_valid;
if (!range->dma_addr)
return false;
drm_gpusvm_notifier_lock(gpusvm);
pages_valid = drm_gpusvm_range_pages_valid(gpusvm, range);
if (!pages_valid)
drm_gpusvm_range_free_pages(gpusvm, range);
drm_gpusvm_notifier_unlock(gpusvm);
return pages_valid;
}
/**
* drm_gpusvm_range_get_pages() - Get pages for a GPU SVM range
* @gpusvm: Pointer to the GPU SVM structure
* @range: Pointer to the GPU SVM range structure
* @ctx: GPU SVM context
*
* This function gets pages for a GPU SVM range and ensures they are mapped for
* DMA access.
*
* Return: 0 on success, negative error code on failure.
*/
int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range,
const struct drm_gpusvm_ctx *ctx)
{
struct mmu_interval_notifier *notifier = &range->notifier->notifier;
struct hmm_range hmm_range = {
.default_flags = HMM_PFN_REQ_FAULT | (ctx->read_only ? 0 :
HMM_PFN_REQ_WRITE),
.notifier = notifier,
.start = drm_gpusvm_range_start(range),
.end = drm_gpusvm_range_end(range),
.dev_private_owner = gpusvm->device_private_page_owner,
};
struct mm_struct *mm = gpusvm->mm;
void *zdd;
unsigned long timeout =
jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
unsigned long i, j;
unsigned long npages = npages_in_range(drm_gpusvm_range_start(range),
drm_gpusvm_range_end(range));
unsigned long num_dma_mapped;
unsigned int order = 0;
unsigned long *pfns;
int err = 0;
struct dev_pagemap *pagemap;
struct drm_pagemap *dpagemap;
struct drm_gpusvm_range_flags flags;
retry:
hmm_range.notifier_seq = mmu_interval_read_begin(notifier);
if (drm_gpusvm_range_pages_valid_unlocked(gpusvm, range))
goto set_seqno;
pfns = kvmalloc_array(npages, sizeof(*pfns), GFP_KERNEL);
if (!pfns)
return -ENOMEM;
if (!mmget_not_zero(mm)) {
err = -EFAULT;
goto err_free;
}
hmm_range.hmm_pfns = pfns;
while (true) {
mmap_read_lock(mm);
err = hmm_range_fault(&hmm_range);
mmap_read_unlock(mm);
if (err == -EBUSY) {
if (time_after(jiffies, timeout))
break;
hmm_range.notifier_seq =
mmu_interval_read_begin(notifier);
continue;
}
break;
}
mmput(mm);
if (err)
goto err_free;
map_pages:
/*
* Perform all dma mappings under the notifier lock to not
* access freed pages. A notifier will either block on
* the notifier lock or unmap dma.
*/
drm_gpusvm_notifier_lock(gpusvm);
flags.__flags = range->flags.__flags;
if (flags.unmapped) {
drm_gpusvm_notifier_unlock(gpusvm);
err = -EFAULT;
goto err_free;
}
if (mmu_interval_read_retry(notifier, hmm_range.notifier_seq)) {
drm_gpusvm_notifier_unlock(gpusvm);
kvfree(pfns);
goto retry;
}
if (!range->dma_addr) {
/* Unlock and restart mapping to allocate memory. */
drm_gpusvm_notifier_unlock(gpusvm);
range->dma_addr = kvmalloc_array(npages,
sizeof(*range->dma_addr),
GFP_KERNEL);
if (!range->dma_addr) {
err = -ENOMEM;
goto err_free;
}
goto map_pages;
}
zdd = NULL;
pagemap = NULL;
num_dma_mapped = 0;
for (i = 0, j = 0; i < npages; ++j) {
struct page *page = hmm_pfn_to_page(pfns[i]);
order = hmm_pfn_to_map_order(pfns[i]);
if (is_device_private_page(page) ||
is_device_coherent_page(page)) {
if (zdd != page->zone_device_data && i > 0) {
err = -EOPNOTSUPP;
goto err_unmap;
}
zdd = page->zone_device_data;
if (pagemap != page_pgmap(page)) {
if (i > 0) {
err = -EOPNOTSUPP;
goto err_unmap;
}
pagemap = page_pgmap(page);
dpagemap = drm_pagemap_page_to_dpagemap(page);
if (drm_WARN_ON(gpusvm->drm, !dpagemap)) {
/*
* Raced. This is not supposed to happen
* since hmm_range_fault() should've migrated
* this page to system.
*/
err = -EAGAIN;
goto err_unmap;
}
}
range->dma_addr[j] =
dpagemap->ops->device_map(dpagemap,
gpusvm->drm->dev,
page, order,
DMA_BIDIRECTIONAL);
if (dma_mapping_error(gpusvm->drm->dev,
range->dma_addr[j].addr)) {
err = -EFAULT;
goto err_unmap;
}
} else {
dma_addr_t addr;
if (is_zone_device_page(page) || pagemap) {
err = -EOPNOTSUPP;
goto err_unmap;
}
if (ctx->devmem_only) {
err = -EFAULT;
goto err_unmap;
}
addr = dma_map_page(gpusvm->drm->dev,
page, 0,
PAGE_SIZE << order,
DMA_BIDIRECTIONAL);
if (dma_mapping_error(gpusvm->drm->dev, addr)) {
err = -EFAULT;
goto err_unmap;
}
range->dma_addr[j] = drm_pagemap_device_addr_encode
(addr, DRM_INTERCONNECT_SYSTEM, order,
DMA_BIDIRECTIONAL);
}
i += 1 << order;
num_dma_mapped = i;
flags.has_dma_mapping = true;
}
if (pagemap) {
flags.has_devmem_pages = true;
range->dpagemap = dpagemap;
}
/* WRITE_ONCE pairs with READ_ONCE for opportunistic checks */
WRITE_ONCE(range->flags.__flags, flags.__flags);
drm_gpusvm_notifier_unlock(gpusvm);
kvfree(pfns);
set_seqno:
range->notifier_seq = hmm_range.notifier_seq;
return 0;
err_unmap:
__drm_gpusvm_range_unmap_pages(gpusvm, range, num_dma_mapped);
drm_gpusvm_notifier_unlock(gpusvm);
err_free:
kvfree(pfns);
if (err == -EAGAIN)
goto retry;
return err;
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_get_pages);
/**
* drm_gpusvm_range_unmap_pages() - Unmap pages associated with a GPU SVM range
* @gpusvm: Pointer to the GPU SVM structure
* @range: Pointer to the GPU SVM range structure
* @ctx: GPU SVM context
*
* This function unmaps pages associated with a GPU SVM range. If @in_notifier
* is set, it is assumed that gpusvm->notifier_lock is held in write mode; if it
* is clear, it acquires gpusvm->notifier_lock in read mode. Must be called on
* each GPU SVM range attached to notifier in gpusvm->ops->invalidate for IOMMU
* security model.
*/
void drm_gpusvm_range_unmap_pages(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range,
const struct drm_gpusvm_ctx *ctx)
{
unsigned long npages = npages_in_range(drm_gpusvm_range_start(range),
drm_gpusvm_range_end(range));
if (ctx->in_notifier)
lockdep_assert_held_write(&gpusvm->notifier_lock);
else
drm_gpusvm_notifier_lock(gpusvm);
__drm_gpusvm_range_unmap_pages(gpusvm, range, npages);
if (!ctx->in_notifier)
drm_gpusvm_notifier_unlock(gpusvm);
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_unmap_pages);
/**
* drm_gpusvm_range_evict() - Evict GPU SVM range
* @gpusvm: Pointer to the GPU SVM structure
* @range: Pointer to the GPU SVM range to be removed
*
* This function evicts the specified GPU SVM range.
*
* Return: 0 on success, a negative error code on failure.
*/
int drm_gpusvm_range_evict(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range)
{
struct mmu_interval_notifier *notifier = &range->notifier->notifier;
struct hmm_range hmm_range = {
.default_flags = HMM_PFN_REQ_FAULT,
.notifier = notifier,
.start = drm_gpusvm_range_start(range),
.end = drm_gpusvm_range_end(range),
.dev_private_owner = NULL,
};
unsigned long timeout =
jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
unsigned long *pfns;
unsigned long npages = npages_in_range(drm_gpusvm_range_start(range),
drm_gpusvm_range_end(range));
int err = 0;
struct mm_struct *mm = gpusvm->mm;
if (!mmget_not_zero(mm))
return -EFAULT;
pfns = kvmalloc_array(npages, sizeof(*pfns), GFP_KERNEL);
if (!pfns)
return -ENOMEM;
hmm_range.hmm_pfns = pfns;
while (!time_after(jiffies, timeout)) {
hmm_range.notifier_seq = mmu_interval_read_begin(notifier);
if (time_after(jiffies, timeout)) {
err = -ETIME;
break;
}
mmap_read_lock(mm);
err = hmm_range_fault(&hmm_range);
mmap_read_unlock(mm);
if (err != -EBUSY)
break;
}
kvfree(pfns);
mmput(mm);
return err;
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_evict);
/**
* drm_gpusvm_has_mapping() - Check if GPU SVM has mapping for the given address range
* @gpusvm: Pointer to the GPU SVM structure.
* @start: Start address
* @end: End address
*
* Return: True if GPU SVM has mapping, False otherwise
*/
bool drm_gpusvm_has_mapping(struct drm_gpusvm *gpusvm, unsigned long start,
unsigned long end)
{
struct drm_gpusvm_notifier *notifier;
drm_gpusvm_for_each_notifier(notifier, gpusvm, start, end) {
struct drm_gpusvm_range *range = NULL;
drm_gpusvm_for_each_range(range, notifier, start, end)
return true;
}
return false;
}
EXPORT_SYMBOL_GPL(drm_gpusvm_has_mapping);
/**
* drm_gpusvm_range_set_unmapped() - Mark a GPU SVM range as unmapped
* @range: Pointer to the GPU SVM range structure.
* @mmu_range: Pointer to the MMU notifier range structure.
*
* This function marks a GPU SVM range as unmapped and sets the partial_unmap flag
* if the range partially falls within the provided MMU notifier range.
*/
void drm_gpusvm_range_set_unmapped(struct drm_gpusvm_range *range,
const struct mmu_notifier_range *mmu_range)
{
lockdep_assert_held_write(&range->gpusvm->notifier_lock);
range->flags.unmapped = true;
if (drm_gpusvm_range_start(range) < mmu_range->start ||
drm_gpusvm_range_end(range) > mmu_range->end)
range->flags.partial_unmap = true;
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_set_unmapped);
MODULE_DESCRIPTION("DRM GPUSVM");
MODULE_LICENSE("GPL");