13105 13106 13107 13108 13109 13110 13111 13112 13113 13114 13115 13116 13117 13118 13119 13120 13121 13122 13123 13124 13125 13126 13127 13128 13129 13130 13131 13132 13133 13134 13135 13136 13137 13138 13139 13140 13141 13142 13143 13144 13145 13146 13147 13148 13149 13150 13151 13152 13153 13154 13155 13156 13157 13158 13159 13160 13161 13162 13163 13164 13165 13166 13167 13168 13169 13170 13171 13172 13173 13174 13175 13176 13177 13178 13179 13180 13181 13182 13183 13184 13185 13186 13187 13188 13189 13190 13191 13192 13193 13194 13195 13196 13197 13198 13199 13200 13201 13202 13203 13204 13205 13206 13207 13208 13209 13210 13211 13212 13213 13214 13215 13216 13217 13218 13219 13220 13221 13222 13223 13224 13225 13226 13227 13228 13229 13230 13231 13232 13233 13234 13235 13236 13237 13238 13239 13240 13241 13242 13243 13244 13245 13246 13247 13248 13249 13250 13251 13252 13253 13254 13255 13256 13257 13258 13259 13260 13261 13262 13263 13264 13265 13266 13267 13268 13269 13270 13271 13272 13273 13274 13275 13276 13277 13278 13279 13280 13281 13282 13283 13284 13285 13286 13287 13288 13289 13290 13291 13292 13293 13294 13295 13296 13297 13298 13299 13300 13301 13302 13303 13304 13305 13306 13307 13308 13309 13310 13311 13312 13313 13314 13315 13316 13317 13318 13319 13320 13321 13322 13323 13324 13325 13326 13327 13328 13329 13330 13331 13332 13333 13334 13335 13336 13337 13338 13339 13340 13341 13342 13343 13344 13345 13346 13347 13348 13349 13350 13351 13352 13353 13354 13355 13356 13357 13358 13359 13360 13361 13362 13363 13364 13365 13366 13367 13368 13369 13370 13371 13372 13373 13374 13375 13376 13377 13378 13379 13380 13381 13382 13383 13384 13385 13386 13387 13388 13389 13390 13391 13392 13393 13394 13395 13396 13397 13398 13399 13400 13401 13402 13403 13404 13405 13406 13407 13408 13409 13410 13411 13412 13413 13414 13415 13416 13417 13418 13419 13420 13421 13422 13423 13424 13425 13426 13427 13428 13429 13430 13431 13432 13433 13434 13435 13436 13437 13438 13439 13440 13441 13442 13443 13444 13445 13446 13447 13448 13449 13450 13451 13452 13453 13454 13455 13456 13457 13458 13459 13460 13461 13462 13463 13464 13465 13466 13467 13468 13469 13470 13471 13472 13473 13474 13475 13476 13477 13478 13479 13480 13481 13482 13483 13484 13485 13486 13487 13488 13489 13490 13491 13492 13493 13494 13495 13496 13497 13498 13499 13500 13501 13502 13503 13504 13505 13506 13507 13508 13509 13510 13511 13512 13513 13514 13515 13516 13517 13518 13519 13520 13521 13522 13523 13524 13525 13526 13527 13528 13529 13530 13531 13532 13533 13534 13535 13536 13537 13538 13539 13540 13541 13542 13543 13544 13545 13546 13547 13548 13549 13550 13551 13552 13553 13554 13555 13556 13557 13558 13559 13560 13561 13562 13563 13564 13565 13566 13567 13568 13569 13570 13571 13572 13573 13574 13575 13576 13577 13578 13579 13580 13581 13582 13583 13584 13585 13586 13587 13588 13589 13590 13591 13592 13593 13594 13595 13596 13597 13598 13599 13600 13601 13602 13603 13604 13605 13606 13607 13608 13609 13610 13611 13612 13613 13614 13615 13616 13617 13618 13619 13620 13621 13622 13623 13624 13625 13626 13627 13628 13629 13630 13631 13632 13633 13634 13635 13636 13637 13638 13639 13640 13641 13642 13643 13644 13645 13646 13647 13648 13649 13650 13651 13652 13653 13654 13655 13656 13657 13658 13659 13660 13661 13662 13663 13664 13665 13666 13667 13668 13669 13670 13671 13672 13673 13674 13675 13676 13677 13678 13679 13680 13681 13682 13683 13684 13685 13686 13687 13688 13689 13690 13691 13692 13693 13694 13695 13696 
13697 13698 13699 13700 13701 13702 13703 13704 13705 13706 13707 13708 13709 13710 13711 13712 13713 13714 13715 13716 13717 13718 13719 13720 13721 13722 13723 13724 13725 13726 13727 13728 13729 13730 13731 13732 13733 13734 13735 13736 13737 13738 13739 13740 13741 13742 13743 13744 13745 13746 13747 13748 13749 13750 13751 13752 13753 13754 13755 13756 13757 13758 13759 13760 13761 13762 13763 13764 13765 13766 13767 13768 13769 13770 13771 13772 13773 13774 13775 13776 13777 13778 13779 13780 13781 13782 13783 13784 13785 13786 13787 13788 13789 13790 13791 13792 13793 13794 13795 13796 13797 13798 13799 13800 13801 13802 13803 13804 13805 13806 13807 13808 13809 13810 13811 13812 13813 13814 13815 13816 13817 13818 13819 13820 13821 13822 13823 13824 13825 13826 13827 13828 13829 13830 13831 13832 13833 13834 13835 13836 13837 13838 13839 13840 13841 13842 13843 13844 13845 13846 13847 13848 13849 13850 13851 13852 13853 13854 13855 13856 13857 13858 13859 13860 13861 13862 13863 13864 13865 13866 13867 13868 13869 13870 13871 13872 13873 13874 13875 13876 13877 13878 13879 13880 13881 13882 13883 13884 13885 13886 13887 13888 13889 13890 13891 13892 13893 13894 13895 13896 13897 13898 13899 13900 13901 13902 13903 13904 13905 13906 13907 13908 13909 13910 13911 13912 13913 13914 13915 13916 13917 13918 13919 13920 13921 13922 13923 13924 13925 13926 13927 13928 13929 13930 13931 13932 13933 13934 13935 13936 13937 13938 13939 13940 13941 13942 13943 13944 13945 13946 13947 13948 13949 13950 13951 13952 13953 13954 13955 13956 13957 13958 13959 13960 13961 13962 13963 13964 13965 13966 13967 13968 13969 13970 13971 13972 13973 13974 13975 13976 13977 13978 13979 13980 13981 13982 13983 13984 13985 13986 13987 13988 13989 13990 13991 13992 13993 13994 13995 13996 13997 13998 13999 14000 14001 14002 14003 14004 14005 14006 14007 14008 14009 14010 14011 14012 14013 14014 14015 14016 14017 14018 14019 14020 14021 14022 14023 14024 14025 14026 14027 14028 14029 14030 14031 14032 14033 14034 14035 14036 14037 14038 14039 14040 14041 14042 14043 14044 14045 14046 14047 14048 14049 14050 14051 14052 14053 14054 14055 14056 14057 14058 14059 14060 14061 14062 14063 14064 14065 14066 14067 14068 14069 14070 14071 14072 14073 14074 14075 14076 14077 14078 14079 14080 14081 14082 14083 14084 14085 14086 14087 14088 14089 14090 14091 14092 14093 14094 14095 14096 14097 14098 14099 14100 14101 14102 14103 14104 14105 14106 14107 14108 14109 14110 14111 14112 14113 14114 14115 14116 14117 14118 14119 14120 14121 14122 14123 14124 14125 14126 14127 14128 14129 14130 14131 14132 14133 14134 14135 14136 14137 14138 14139 14140 14141 14142 14143 14144 14145 14146 14147 14148 14149 14150 14151 14152 14153 14154 14155 14156 14157 14158 14159 14160 14161 14162 14163 14164 14165 14166 14167 14168 14169 14170 14171 14172 14173 14174 14175 14176 14177 14178 14179 14180 14181 14182 14183 14184 14185 14186 14187 14188 14189 14190 14191 14192 14193 14194 14195 14196 14197 14198 14199 14200 14201 14202 14203 14204 14205 14206 14207 14208 14209 14210 14211 14212 14213 14214 14215 14216 14217 14218 14219 14220 14221 14222 14223 14224 14225 14226 14227 14228 14229 14230 14231 14232 14233 14234 14235 14236 14237 14238 14239 14240 14241 14242 14243 14244 14245 14246 14247 14248 14249 14250 14251 14252 14253 14254 14255 14256 14257 14258 14259 14260 14261 14262 14263 14264 14265 14266 14267 14268 14269 14270 14271 14272 14273 14274 14275 14276 14277 14278 14279 14280 14281 14282 14283 14284 14285 14286 14287 14288 
14289 14290 14291 14292 14293 14294 14295 14296 14297 14298 14299 14300 14301 14302 14303 14304 14305 14306 14307 14308 14309 14310 14311 14312 14313 14314 14315 14316 14317 14318 14319 14320 14321 14322 14323 14324 14325 14326 14327 14328 14329 14330 14331 14332 14333 14334 14335 14336 14337 14338 14339 14340 14341 14342 14343 14344 14345 14346 14347 14348 14349 14350 14351 14352 14353 14354 14355 14356 14357 14358 14359 14360 14361 14362 14363 14364 14365 14366 14367 14368 14369 14370 14371 14372 14373 14374 14375 14376 14377 14378 14379 14380 14381 14382 14383 14384 14385 14386 14387 14388 14389 14390 14391 14392 14393 14394 14395 14396 14397 14398 14399 14400 14401 14402 14403 14404 14405 14406 14407 14408 14409 14410 14411 14412 14413 14414 14415 14416 14417 14418 14419 14420 14421 14422 14423 14424 14425 14426 14427 14428 14429 14430 14431 14432 14433 14434 14435 14436 14437 14438 14439 14440 14441 14442 14443 14444 14445 14446 14447 14448 14449 14450 14451 14452 14453 14454 14455 14456 14457 14458 14459 14460 14461 14462 14463 14464 14465 14466 14467 14468 14469 14470 14471 14472 14473 14474 14475 14476 14477 14478 14479 14480 14481 14482 14483 14484 14485 14486 14487 14488 14489 14490 14491 14492 14493 14494 14495 14496 14497 14498 14499 14500 14501 14502 14503 14504 14505 14506 14507 14508 14509 14510 14511 14512 14513 14514 14515 14516 14517 14518 14519 14520 14521 14522 14523 14524 14525 14526 14527 14528 14529 14530 14531 14532 14533 14534 14535 14536 14537 14538 14539 14540 14541 14542 14543 14544 14545 14546 14547 14548 14549 14550 14551 14552 14553 14554 14555 14556 14557 14558 14559 14560 14561 14562 14563 14564 14565 14566 14567 14568 14569 14570 14571 14572 14573 14574 14575 14576 14577 14578 14579 14580 14581 14582 14583 14584 14585 14586 14587 14588 14589 14590 14591 14592 14593 14594 14595 14596 14597 14598 14599 14600 14601 14602 14603 14604 14605 14606 14607 14608 14609 14610 14611 14612 14613 14614 14615 14616 14617 14618 14619 14620 14621 14622 14623 14624 14625 14626 14627 14628 14629 14630 14631 14632 14633 14634 14635 14636 14637 14638 14639 14640 14641 14642 14643 14644 14645 14646 14647 14648 14649 14650 14651 14652 14653 14654 14655 14656 14657 14658 14659 14660 14661 14662 14663 14664 14665 14666 14667 14668 14669 14670 14671 14672 14673 14674 14675 14676 14677 14678 14679 14680 14681 14682 14683 14684 14685 14686 14687 14688 14689 14690 14691 14692 14693 14694 14695 14696 14697 14698 14699 14700 14701 14702 14703 14704 14705 14706 14707 14708 14709 14710 14711 14712 14713 14714 14715 14716 14717 14718 14719 14720 14721 14722 14723 14724 14725 14726 14727 14728 14729 14730 14731 14732 14733 14734 14735 14736 14737 14738 14739 14740 14741 14742 14743 14744 14745 14746 14747 14748 14749 14750 14751 14752 14753 14754 14755 14756 14757 14758 14759 14760 14761 14762 14763 14764 14765 14766 14767 14768 14769 14770 14771 14772 14773 14774 14775 14776 14777 14778 14779 14780 14781 14782 14783 14784 14785 14786 14787 14788 14789 14790 14791 14792 14793 14794 14795 14796 14797 14798 14799 14800 14801 14802 14803 14804 14805 14806 14807 14808 14809 14810 14811 14812 14813 14814 14815 14816 14817 14818 14819 14820 14821 14822 14823 14824 14825 14826 14827 14828 14829 14830 14831 14832 14833 14834 14835 14836 14837 14838 14839 14840 14841 14842 14843 14844 14845 14846 14847 14848 14849 14850 14851 14852 14853 14854 14855 14856 14857 14858 14859 14860 14861 14862 14863 14864 14865 14866 14867 14868 14869 14870 14871 14872 14873 14874 14875 14876 14877 14878 14879 14880 
14881 14882 14883 14884 14885 14886 14887 14888 14889 14890 14891 14892 14893 14894 14895 14896 14897 14898 14899 14900 14901 14902 14903 14904 14905 14906 14907 14908 14909 14910 14911 14912 14913 14914 14915 14916 14917 14918 14919 14920 14921 14922 14923 14924 14925 14926 14927 14928 14929 14930 14931 14932 14933 14934 14935 14936 14937 14938 14939 14940 14941 14942 14943 14944 14945 14946 14947 14948 14949 14950 14951 14952 14953 14954 14955 14956 14957 14958 14959 14960 14961 14962 14963 14964 14965 14966 14967 14968 14969 14970 14971 14972 14973 14974 14975 14976 14977 14978 14979 14980 14981 14982 14983 14984 14985 14986 14987 14988 14989 14990 14991 14992 14993 14994 14995 14996 14997 14998 14999 15000 15001 15002 15003 15004 15005 15006 15007 15008 15009 15010 15011 15012 15013 15014 15015 15016 15017 15018 15019 15020 15021 15022 15023 15024 15025 15026 15027 15028 15029 15030 15031 15032 15033 15034 15035 15036 15037 15038 15039 15040 15041 15042 15043 15044 15045 15046 15047 15048 15049 15050 15051 15052 15053 15054 15055 15056 15057 15058 15059 15060 15061 15062 15063 15064 15065 15066 15067 15068 15069 15070 15071 15072 15073 15074 15075 15076 15077 15078 15079 15080 15081 15082 15083 15084 15085 15086 15087 15088 15089 15090 15091 15092 15093 15094 15095 15096 15097 15098 15099 15100 15101 15102 15103 15104 15105 15106 15107 15108 15109 15110 15111 15112 15113 15114 15115 15116 15117 15118 15119 15120 15121 15122 15123 15124 15125 15126 15127 15128 15129 15130 15131 15132 15133 15134 15135 15136 15137 15138 15139 15140 15141 15142 15143 15144 15145 15146 15147 15148 15149 15150 15151 15152 15153 15154 15155 15156 15157 15158 15159 15160 15161 15162 15163 15164 15165 15166 15167 15168 15169 15170 15171 15172 15173 15174 15175 15176 15177 15178 15179 15180 15181 15182 15183 15184 15185 15186 15187 15188 15189 15190 15191 15192 15193 15194 15195 15196 15197 15198 15199 15200 15201 15202 15203 15204 15205 15206 15207 15208 15209 15210 15211 15212 15213 15214 15215 15216 15217 15218 15219 15220 15221 15222 15223 15224 15225 15226 15227 15228 15229 15230 15231 15232 15233 15234 15235 15236 15237 15238 15239 15240 15241 15242 15243 15244 15245 15246 15247 15248 15249 15250 15251 15252 15253 15254 15255 15256 15257 15258 15259 15260 15261 15262 15263 15264 15265 15266 15267 15268 15269 15270 15271 15272 15273 15274 15275 15276 15277 15278 15279 15280 15281 15282 15283 15284 15285 15286 15287 15288 15289 15290 15291 15292 15293 15294 15295 15296 15297 15298 15299 15300 15301 15302 15303 15304 15305 15306 15307 15308 15309 15310 15311 15312 15313 15314 15315 15316 15317 15318 15319 15320 15321 15322 15323 15324 15325 15326 15327 15328 15329 15330 15331 15332 15333 15334 15335 15336 15337 15338 15339 15340 15341 15342 15343 15344 15345 15346 15347 15348 15349 15350 15351 15352 15353 15354 15355 15356 15357 15358 15359 15360 15361 15362 15363 15364 15365 15366 15367 15368 15369 15370 15371 15372 15373 15374 15375 15376 15377 15378 15379 15380 15381 15382 15383 15384 15385 15386 15387 15388 15389 15390 15391 15392 15393 15394 15395 15396 15397 15398 15399 15400 15401 15402 15403 15404 15405 15406 15407 15408 15409 15410 15411 15412 15413 15414 15415 15416 15417 15418 15419 15420 15421 15422 15423 15424 15425 15426 15427 15428 15429 15430 15431 15432 15433 15434 15435 15436 15437 15438 15439 15440 15441 15442 15443 15444 15445 15446 15447 15448 15449 15450 15451 15452 15453 15454 15455 15456 15457 15458 15459 15460 15461 15462 15463 15464 15465 15466 15467 15468 15469 15470 15471 15472 
15473 15474 15475 15476 15477 15478 15479 15480 15481 15482 15483 15484 15485 15486 15487 15488 15489 15490 15491 15492 15493 15494 15495 15496 15497 15498 15499 15500 15501 15502 15503 15504 15505 15506 15507 15508 15509 15510 15511 15512 15513 15514 15515 15516 15517 15518 15519 15520 15521 15522 15523 15524 15525 15526 15527 15528 15529 15530 15531 15532 15533 15534 15535 15536 15537 15538 15539 15540 15541 15542 15543 15544 15545 15546 15547 15548 15549 15550 15551 15552 15553 15554 15555 15556 15557 15558 15559 15560 15561 15562 15563 15564 15565 15566 15567 15568 15569 15570 15571 15572 15573 15574 15575 15576 15577 15578 15579 15580 15581 15582 15583 15584 15585 15586 15587 15588 15589 15590 15591 15592 15593 15594 15595 15596 15597 15598 15599 15600 15601 15602 15603 15604 15605 15606 15607 15608 15609 15610 15611 15612 15613 15614 15615 15616 15617 15618 15619 15620 15621 15622 15623 15624 15625 15626 15627 15628 15629 15630 15631 15632 15633 15634 15635 15636 15637 15638 15639 15640 15641 15642 15643 15644 15645 15646 15647 15648 15649 15650 15651 15652 15653 15654 15655 15656 15657 15658 15659 15660 15661 15662 15663 15664 15665 15666 15667 15668 15669 15670 15671 15672 15673 15674 15675 15676 15677 15678 15679 15680 15681 15682 15683 15684 15685 15686 15687 15688 15689 15690 15691 15692 15693 15694 15695 15696 15697 15698 15699 15700 15701 15702 15703 15704 15705 15706 15707 15708 15709 15710 15711 15712 15713 15714 15715 15716 15717 15718 15719 15720 15721 15722 15723 15724 15725 15726 15727 15728 15729 15730 15731 15732 15733 15734 15735 15736 15737 15738 15739 15740 15741 15742 15743 15744 15745 15746 15747 15748 15749 15750 15751 15752 15753 15754 15755 15756 15757 15758 15759 15760 15761 15762 15763 15764 15765 15766 15767 15768 15769 15770 15771 15772 15773 15774 15775 15776 15777 15778 15779 15780 15781 15782 15783 15784 15785 15786 15787 15788 15789 15790 15791 15792 15793 15794 15795 15796 15797 15798 15799 15800 15801 15802 15803 15804 15805 15806 15807 15808 15809 15810 15811 15812 15813 15814 15815 15816 15817 15818 15819 15820 15821 15822 15823 15824 15825 15826 15827 15828 15829 15830 15831 15832 15833 15834 15835 15836 15837 15838 15839 15840 15841 15842 15843 15844 15845 15846 15847 15848 15849 15850 15851 15852 15853 15854 15855 15856 15857 15858 15859 15860 15861 15862 15863 15864 15865 15866 15867 15868 15869 15870 15871 15872 15873 15874 15875 15876 15877 15878 15879 15880 15881 15882 15883 15884 15885 15886 15887 15888 15889 15890 15891 15892 15893 15894 15895 15896 15897 15898 15899 15900 15901 15902 15903 15904 15905 15906 15907 15908 15909 15910 15911 15912 15913 15914 15915 15916 15917 15918 15919 15920 15921 15922 15923 15924 15925 15926 15927 15928 15929 15930 15931 15932 15933 15934 15935 15936 15937 15938 15939 15940 15941 15942 15943 15944 15945 15946 15947 15948 15949 15950 15951 15952 15953 15954 15955 15956 15957 15958 15959 15960 15961 15962 15963 15964 15965 15966 15967 15968 15969 15970 15971 15972 15973 15974 15975 15976 15977 15978 15979 15980 15981 15982 15983 15984 15985 15986 15987 15988 15989 15990 15991 15992 15993 15994 15995 15996 15997 15998 15999 16000 16001 16002 16003 16004 16005 16006 16007 16008 16009 16010 16011 16012 16013 16014 16015 16016 16017 16018 16019 16020 16021 16022 16023 16024 16025 16026 16027 16028 16029 16030 16031 16032 16033 16034 16035 16036 16037 16038 16039 16040 16041 16042 16043 16044 16045 16046 16047 16048 16049 16050 16051 16052 16053 16054 16055 16056 16057 16058 16059 16060 16061 16062 16063 16064 
16065 16066 16067 16068 16069 16070 16071 16072 16073 16074 16075 16076 16077 16078 16079 16080 16081 16082 16083 16084 16085 16086 16087 16088 16089 16090 16091 16092 16093 16094 16095 16096 16097 16098 16099 16100 16101 16102 16103 16104 16105 16106 16107 16108 16109 16110 16111 16112 16113 16114 16115 16116 16117 16118 16119 16120 16121 16122 16123 16124 16125 16126 16127 16128 16129 16130 16131 16132 16133 16134 16135 16136 16137 16138 16139 16140 16141 16142 16143 16144 16145 16146 16147 16148 16149 16150 16151 16152 16153 16154 16155 16156 16157 16158 16159 16160 16161 16162 16163 16164 16165 16166 16167 16168 16169 16170 16171 16172 16173 16174 16175 16176 16177 16178 16179 16180 16181 16182 16183 16184 16185 16186 16187 16188 16189 16190 16191 16192 16193 16194 16195 16196 16197 16198 16199 16200 16201 16202 16203 16204 16205 16206 16207 16208 16209 16210 16211 16212 16213 16214 16215 16216 16217 16218 16219 16220 16221 16222 16223 16224 16225 16226 16227 16228 16229 16230 16231 16232 16233 16234 16235 16236 16237 16238 16239 16240 16241 16242 16243 16244 16245 16246 16247 16248 16249 16250 16251 16252 16253 16254 16255 16256 16257 16258 16259 16260 16261 16262 16263 16264 16265 16266 16267 16268 16269 16270 16271 16272 16273 16274 16275 16276 16277 16278 16279 16280 16281 16282 16283 16284 16285 16286 16287 16288 16289 16290 16291 16292 16293 16294 16295 16296 16297 16298 16299 16300 16301 16302 16303 16304 16305 16306 16307 16308 16309 16310 16311 16312 16313 16314 16315 16316 16317 16318 16319 16320 16321 16322 16323 16324 16325 16326 16327 16328 16329 16330 16331 16332 16333 16334 16335 16336 16337 16338 16339 16340 16341 16342 16343 16344 16345 16346 16347 16348 16349 16350 16351 16352 16353 16354 16355 16356 16357 16358 16359 16360 16361 16362 16363 16364 16365 16366 16367 16368 16369 16370 16371 16372 16373 16374 16375 16376 16377 16378 16379 16380 16381 16382 16383 16384 16385 16386 16387 16388 16389 16390 16391 16392 16393 16394 16395 16396 16397 16398 16399 16400 16401 16402 16403 16404 16405 16406 16407 16408 16409 16410 16411 16412 16413 16414 16415 16416 16417 16418 16419 16420 16421 16422 16423 16424 16425 16426 16427 16428 16429 16430 16431 16432 16433 16434 16435 16436 16437 16438 16439 16440 16441 16442 16443 16444 16445 16446 16447 16448 16449 16450 16451 16452 16453 16454 16455 16456 16457 16458 16459 16460 16461 16462 16463 16464 16465 16466 16467 16468 16469 16470 16471 16472 16473 16474 16475 16476 16477 16478 16479 16480 16481 16482 16483 16484 16485 16486 16487 16488 16489 16490 16491 16492 16493 16494 16495 16496 16497 16498 16499 16500 16501 16502 16503 16504 16505 16506 16507 16508 16509 16510 16511 16512 16513 16514 16515 16516 16517 16518 16519 16520 16521 16522 16523 16524 16525 16526 16527 16528 16529 16530 16531 16532 16533 16534 16535 16536 16537 16538 16539 16540 16541 16542 16543 16544 16545 16546 16547 16548 16549 16550 16551 16552 16553 16554 16555 16556 16557 16558 16559 16560 16561 16562 16563 16564 16565 16566 16567 16568 16569 16570 16571 16572 16573 16574 16575 16576 16577 16578 16579 16580 16581 16582 16583 16584 16585 16586 16587 16588 16589 16590 16591 16592 16593 16594 16595 16596 16597 16598 16599 16600 16601 16602 16603 16604 16605 16606 16607 16608 16609 16610 16611 16612 16613 16614 16615 16616 16617 16618 16619 16620 16621 16622 16623 16624 16625 16626 16627 16628 16629 16630 16631 16632 16633 16634 16635 16636 16637 16638 16639 16640 16641 16642 16643 16644 16645 16646 16647 16648 16649 16650 16651 16652 16653 16654 16655 16656 
16657 16658 16659 16660 16661 16662 16663 16664 16665 16666 16667 16668 16669 16670 16671 16672 16673 16674 16675 16676 16677 16678 16679 16680 16681 16682 16683 16684 16685 16686 16687 16688 16689 16690 16691 16692 16693 16694 16695 16696 16697 16698 16699 16700 16701 16702 16703 16704 16705 16706 16707 16708 16709 16710 16711 16712 16713 16714 16715 16716 16717 16718 16719 16720 16721 16722 16723 16724 16725 16726 16727 16728 16729 16730 16731 16732 16733 16734 16735 16736 16737 16738 16739 16740 16741 16742 16743 16744 16745 16746 16747 16748 16749 16750 16751 16752 16753 16754 16755 16756 16757 16758 16759 16760 16761 16762 16763 16764 16765 16766 16767 16768 16769 16770 16771 16772 16773 16774 16775 16776 16777 16778 16779 16780 16781 16782 16783 16784 16785 16786 16787 16788 16789 16790 16791 16792 16793 16794 16795 16796 16797 16798 16799 16800 16801 16802 16803 16804 16805 16806 16807 16808 16809 16810 16811 16812 16813 16814 16815 16816 16817 16818 16819 16820 16821 16822 16823 16824 16825 16826 16827 16828 16829 16830 16831 16832 16833 16834 16835 16836 16837 16838 16839 16840 16841 16842 16843 16844 16845 16846 16847 16848 16849 16850 16851 16852 16853 16854 16855 16856 16857 16858 16859 16860 16861 16862 16863 16864 16865 16866 16867 16868 16869 16870 16871 16872 16873 16874 16875 16876 16877 16878 16879 16880 16881 16882 16883 16884 16885 16886 16887 16888 16889 16890 16891 16892 16893 16894 16895 16896 16897 16898 16899 16900 16901 16902 16903 16904 16905 16906 16907 16908 16909 16910 16911 16912 16913 16914 16915 16916 16917 16918 16919 16920 16921 16922 16923 16924 16925 16926 16927 16928 16929 16930 16931 16932 16933 16934 16935 16936 16937 16938 16939 16940 16941 16942 16943 16944 16945 16946 16947 16948 16949 16950 16951 16952 16953 16954 16955 16956 16957 16958 16959 16960 16961 16962 16963 16964 16965 16966 16967 16968 16969 16970 16971 16972 16973 16974 16975 16976 16977 16978 16979 16980 16981 16982 16983 16984 16985 16986 16987 16988 16989 16990 16991 16992 16993 16994 16995 16996 16997 16998 16999 17000 17001 17002 17003 17004 17005 17006 17007 17008 17009 17010 17011 17012 17013 17014 17015 17016 17017 17018 17019 17020 17021 17022 17023 17024 17025 17026 17027 17028 17029 17030 17031 17032 17033 17034 17035 17036 17037 17038 17039 17040 17041 17042 17043 17044 17045 17046 17047 17048 17049 17050 17051 17052 17053 17054 17055 17056 17057 17058 17059 17060 17061 17062 17063 17064 17065 17066 17067 17068 17069 17070 17071 17072 17073 17074 17075 17076 17077 17078 17079 17080 17081 17082 17083 17084 17085 17086 17087 17088 17089 17090 17091 17092 17093 17094 17095 17096 17097 17098 17099 17100 17101 17102 17103 17104 17105 17106 17107 17108 17109 17110 17111 17112 17113 17114 17115 17116 17117 17118 17119 17120 17121 17122 17123 17124 17125 17126 17127 17128 17129 17130 17131 17132 17133 17134 17135 17136 17137 17138 17139 17140 17141 17142 17143 17144 17145 17146 17147 17148 17149 17150 17151 17152 17153 17154 17155 17156 17157 17158 17159 17160 17161 17162 17163 17164 17165 17166 17167 17168 17169 17170 17171 17172 17173 17174 17175 17176 17177 17178 17179 17180 17181 17182 17183 17184 17185 17186 17187 17188 17189 17190 17191 17192 17193 17194 17195 17196 17197 17198 17199 17200 17201 17202 17203 17204 17205 17206 17207 17208 17209 17210 17211 17212 17213 17214 17215 17216 17217 17218 17219 17220 17221 17222 17223 17224 17225 17226 17227 17228 17229 17230 17231 17232 17233 17234 17235 17236 17237 17238 17239 17240 17241 17242 17243 17244 17245 17246 17247 17248 
17249 17250 17251 17252 17253 17254 17255 17256 17257 17258 17259 17260 17261 17262 17263 17264 17265 17266 17267 17268 17269 17270 17271 17272 17273 17274 17275 17276 17277 17278 17279 17280 17281 17282 17283 17284 17285 17286 17287 17288 17289 17290 17291 17292 17293 17294 17295 17296 17297 17298 17299 17300 17301 17302 17303 17304 17305 17306 17307 17308 17309 17310 17311 17312 17313 17314 17315 17316 17317 17318 17319 17320 17321 17322 17323 17324 17325 17326 17327 17328 17329 17330 17331 17332 17333 17334 17335 17336 17337 17338 17339 17340 17341 17342 17343 17344 17345 17346 17347 17348 17349 17350 17351 17352 17353 17354 17355 17356 17357 17358 17359 17360 17361 17362 17363 17364 17365 17366 17367 17368 17369 17370 17371 17372 17373 17374 17375 17376 17377 17378 17379 17380 17381 17382 17383 17384 17385 17386 17387 17388 17389 17390 17391 17392 17393 17394 17395 17396 17397 17398 17399 17400 17401 17402 17403 17404 17405 17406 17407 17408 17409 17410 17411 17412 17413 17414 17415 17416 17417 17418 17419 17420 17421 17422 17423 17424 17425 17426 17427 17428 17429 17430 17431 17432 17433 17434 17435 17436 17437 17438 17439 17440 17441 17442 17443 17444 17445 17446 17447 17448 17449 17450 17451 17452 17453 17454 17455 17456 17457 17458 17459 17460 17461 17462 17463 17464 17465 17466 17467 17468 17469 17470 17471 17472 17473 17474 17475 17476 17477 17478 17479 17480 17481 17482 17483 17484 17485 17486 17487 17488 17489 17490 17491 17492 17493 17494 17495 17496 17497 17498 17499 17500 17501 17502 17503 17504 17505 17506 17507 17508 17509 17510 17511 17512 17513 17514 17515 17516 17517 17518 17519 17520 17521 17522 17523 17524 17525 17526 17527 17528 17529 17530 17531 17532 17533 17534 17535 17536 17537 17538 17539 17540 17541 17542 17543 17544 17545 17546 17547 17548 17549 17550 17551 17552 17553 17554 17555 17556 17557 17558 17559 17560 17561 17562 17563 17564 17565 17566 17567 17568 17569 17570 17571 17572 17573 17574 17575 17576 17577 17578 17579 17580 17581 17582 17583 17584 17585 17586 17587 17588 17589 17590 17591 17592 17593 17594 17595 17596 17597 17598 17599 17600 17601 17602 17603 17604 17605 17606 17607 17608 17609 17610 17611 17612 17613 17614 17615 17616 17617 17618 17619 17620 17621 17622 17623 17624 17625 17626 17627 17628 17629 17630 17631 17632 17633 17634 17635 17636 17637 17638 17639 17640 17641 17642 17643 17644 17645 17646 17647 17648 17649 17650 17651 17652 17653 17654 17655 17656 17657 17658 17659 17660 17661 17662 17663 17664 17665 17666 17667 17668 17669 17670 17671 17672 17673 17674 17675 17676 17677 17678 17679 17680 17681 17682 17683 17684 17685 17686 17687 17688 17689 17690 17691 17692 17693 17694 17695 17696 17697 17698 17699 17700 17701 17702 17703 17704 17705 17706 17707 17708 17709 17710 17711 17712 17713 17714 17715 17716 17717 17718 17719 17720 17721 17722 17723 17724 17725 17726 17727 17728 17729 17730 17731 17732 17733 17734 17735 17736 17737 17738 17739 17740 17741 17742 17743 17744 17745 17746 17747 17748 17749 17750 17751 17752 17753 17754 17755 17756 17757 17758 17759 17760 17761 17762 17763 17764 17765 17766 17767 17768 17769 17770 17771 17772 17773 17774 17775 17776 17777 17778 17779 17780 17781 17782 17783 17784 17785 17786 17787 17788 17789 17790 17791 17792 17793 17794 17795 17796 17797 17798 17799 17800 17801 17802 17803 17804 17805 17806 17807 17808 17809 17810 17811 17812 17813 17814 17815 17816 17817 17818 17819 17820 17821 17822 17823 17824 17825 17826 17827 17828 17829 17830 17831 17832 17833 17834 17835 17836 17837 17838 17839 17840 
17841 17842 17843 17844 17845 17846 17847 17848 17849 17850 17851 17852 17853 17854 17855 17856 17857 17858 17859 17860 17861 17862 17863 17864 17865 17866 17867 17868 17869 17870 17871 17872 17873 17874 17875 17876 17877 17878 17879 17880 17881 17882 17883 17884 17885 17886 17887 17888 17889 17890 17891 17892 17893 17894 17895 17896 17897 17898 17899 17900 17901 17902 17903 17904 17905 17906 17907 17908 17909 17910 17911 17912 17913 17914 17915 17916 17917 17918 17919 17920 17921 17922 17923 17924 17925 17926 17927 17928 17929 17930 17931 17932 17933 17934 17935 17936 17937 17938 17939 17940 17941 17942 17943 17944 17945 17946 17947 17948 17949 17950 17951 17952 17953 17954 17955 17956 17957 17958 17959 17960 17961 17962 17963 17964 17965 17966 17967 17968 17969 17970 17971 17972 17973 17974 17975 17976 17977 17978 17979 17980 17981 17982 17983 17984 17985 17986 17987 17988 17989 17990 17991 17992 17993 17994 17995 17996 17997 17998 17999 18000 18001 18002 18003 18004 18005 18006 18007 18008 18009 18010 18011 18012 18013 18014 18015 18016 18017 18018 18019 18020 18021 18022 18023 18024 18025 18026 18027 18028 18029 18030 18031 18032 18033 18034 18035 18036 18037 18038 18039 18040 18041 18042 18043 18044 18045 18046 18047 18048 18049 18050 18051 18052 18053 18054 18055 18056 18057 18058 18059 18060 18061 18062 18063 18064 18065 18066 18067 18068 18069 18070 18071 18072 18073 18074 18075 18076 18077 18078 18079 18080 18081 18082 18083 18084 18085 18086 18087 18088 18089 18090 18091 18092 18093 18094 18095 18096 18097 18098 18099 18100 18101 18102 18103 18104 18105 18106 18107 18108 18109 18110 18111 18112 18113 18114 18115 18116 18117 18118 18119 18120 18121 18122 18123 18124 18125 18126 18127 18128 18129 18130 18131 18132 18133 18134 18135 18136 18137 18138 18139 18140 18141 18142 18143 18144 18145 18146 18147 18148 18149 18150 18151 18152 18153 18154 18155 18156 18157 18158 18159 18160 18161 18162 18163 18164 18165 18166 18167 18168 18169 18170 18171 18172 18173 18174 18175 18176 18177 18178 18179 18180 18181 18182 18183 18184 18185 18186 18187 18188 18189 18190 18191 18192 18193 18194 18195 18196 18197 18198 18199 18200 18201 18202 18203 18204 18205 18206 18207 18208 18209 18210 18211 18212 18213 18214 18215 18216 18217 18218 18219 18220 18221 18222 18223 18224 18225 18226 18227 18228 18229 18230 18231 18232 18233 18234 18235 18236 18237 18238 18239 18240 18241 18242 18243 18244 18245 18246 18247 18248 18249 18250 18251 18252 18253 18254 18255 18256 18257 18258 18259 18260 18261 18262 18263 18264 18265 18266 18267 18268 18269 18270 18271 18272 18273 18274 18275 18276 18277 18278 18279 18280 18281 18282 18283 18284 18285 18286 18287 18288 18289 18290 18291 18292 18293 18294 18295 18296 18297 18298 18299 18300 18301 18302 18303 18304 18305 18306 18307 18308 18309 18310 18311 18312 18313 18314 18315 18316 18317 18318 18319 18320 18321 18322 18323 18324 18325 18326 18327 18328 18329 18330 18331 18332 18333 18334 18335 18336 18337 18338 18339 18340 18341 18342 18343 18344 18345 18346 18347 18348 18349 18350 18351 18352 18353 18354 18355 18356 18357 18358 18359 18360 18361 18362 18363 18364 18365 18366 18367 18368 18369 18370 18371 18372 18373 18374 18375 18376 18377 18378 18379 18380 18381 18382 18383 18384 18385 18386 18387 18388 18389 18390 18391 18392 18393 18394 18395 18396 18397 18398 18399 18400 18401 18402 18403 18404 18405 18406 18407 18408 18409 18410 18411 18412 18413 18414 18415 18416 18417 18418 18419 18420 18421 18422 18423 18424 18425 18426 18427 18428 18429 18430 18431 18432 
18433 18434 18435 18436 18437 18438 18439 18440 18441 18442 18443 18444 18445 18446 18447 18448 18449 18450 18451 18452 18453 18454 18455 18456 18457 18458 18459 18460 18461 18462 18463 18464 18465 18466 18467 18468 18469 18470 18471 18472 18473 18474 18475 18476 18477 18478 18479 18480 18481 18482 18483 18484 18485 18486 18487 18488 18489 18490 18491 18492 18493 18494 18495 18496 18497 18498 18499 18500 18501 18502 18503 18504 18505 18506 18507 18508 18509 18510 18511 18512 18513 18514 18515 18516 18517 18518 18519 18520 18521 18522 18523 18524 18525 18526 18527 18528 18529 18530 18531 18532 18533 18534 18535 18536 18537 18538 18539 18540 18541 18542 18543 18544 18545 18546 18547 18548 18549 18550 18551 18552 18553 18554 18555 18556 18557 18558 18559 18560 18561 18562 18563 18564 18565 18566 18567 18568 18569 18570 18571 18572 18573 18574 18575 18576 18577 18578 18579 18580 18581 18582 18583 18584 18585 18586 18587 18588 18589 18590 18591 18592 18593 18594 18595 18596 18597 18598 18599 18600 18601 18602 18603 18604 18605 18606 18607 18608 18609 18610 18611 18612 18613 18614 18615 18616 18617 18618 18619 18620 18621 18622 18623 18624 18625 18626 18627 18628 18629 18630 18631 18632 18633 18634 18635 18636 18637 18638 18639 18640 18641 18642 18643 18644 18645 18646 18647 18648 18649 18650 18651 18652 18653 18654 18655 18656 18657 18658 18659 18660 18661 18662 18663 18664 18665 18666 18667 18668 18669 18670 18671 18672 18673 18674 18675 18676 18677 18678 18679 18680 18681 18682 18683 18684 18685 18686 18687 18688 18689 18690 18691 18692 18693 18694 18695 18696 18697 18698 18699 18700 18701 18702 18703 18704 18705 18706 18707 18708 18709 18710 18711 18712 18713 18714 18715 18716 18717 18718 18719 18720 18721 18722 18723 18724 18725 18726 18727 18728 18729 18730 18731 18732 18733 18734 18735 18736 18737 18738 18739 18740 18741 18742 18743 18744 18745 18746 18747 18748 18749 18750 18751 18752 18753 18754 18755 18756 18757 18758 18759 18760 18761 18762 18763 18764 18765 18766 18767 18768 18769 18770 18771 18772 18773 18774 18775 18776 18777 18778 18779 18780 18781 18782 18783 18784 18785 18786 18787 18788 18789 18790 18791 18792 18793 18794 18795 18796 18797 18798 18799 18800 18801 18802 18803 18804 18805 18806 18807 18808 18809 18810 18811 18812 18813 18814 18815 18816 18817 18818 18819 18820 18821 18822 18823 18824 18825 18826 18827 18828 18829 18830 18831 18832 18833 18834 18835 18836 18837 18838 18839 18840 18841 18842 18843 18844 18845 18846 18847 18848 18849 18850 18851 18852 18853 18854 18855 18856 18857 18858 18859 18860 18861 18862 18863 18864 18865 18866 18867 18868 18869 18870 18871 18872 18873 18874 18875 18876 18877 18878 18879 18880 18881 18882 18883 18884 18885 18886 18887 18888 18889 18890 18891 18892 18893 18894 18895 18896 18897 18898 18899 18900 18901 18902 18903 18904 18905 18906 18907 18908 18909 18910 18911 18912 18913 18914 18915 18916 18917 18918 18919 18920 18921 18922 18923 18924 18925 18926 18927 18928 18929 18930 18931 18932 18933 18934 18935 18936 18937 18938 18939 18940 18941 18942 18943 18944 18945 18946 18947 18948 18949 18950 18951 18952 18953 18954 18955 18956 18957 18958 18959 18960 18961 18962 18963 18964 18965 18966 18967 18968 18969 18970 18971 18972 18973 18974 18975 18976 18977 18978 18979 18980 18981 18982 18983 18984 18985 18986 18987 18988 18989 18990 18991 18992 18993 18994 18995 18996 18997 18998 18999 19000 19001 19002 19003 19004 19005 19006 19007 19008 19009 19010 19011 19012 19013 19014 19015 19016 19017 19018 19019 19020 19021 19022 19023 19024 
19025 19026 19027 19028 19029 19030 19031 19032 19033 19034 19035 19036 19037 19038 19039 19040 19041 19042 19043 19044 19045 19046 19047 19048 19049 19050 19051 19052 19053 19054 19055 19056 19057 19058 19059 19060 19061 19062 19063 19064 19065 19066 19067 19068 19069 19070 19071 19072 19073 19074 19075 19076 19077 19078 19079 19080 19081 19082 19083 19084 19085 19086 19087 19088 19089 19090 19091 19092 19093 19094 19095 19096 19097 19098 19099 19100 19101 19102 19103 19104 19105 19106 19107 19108 19109 19110 19111 19112 19113 19114 19115 19116 19117 19118 19119 19120 19121 19122 19123 19124 19125 19126 19127 19128 19129 19130 19131 19132 19133 19134 19135 19136 19137 19138 19139 19140 19141 19142 19143 19144 19145 19146 19147 19148 19149 19150 19151 19152 19153 19154 19155 19156 19157 19158 19159 19160 19161 19162 19163 19164 19165 19166 19167 19168 19169 19170 19171 19172 19173 19174 19175 19176 19177 19178 19179 19180 19181 19182 19183 19184 19185 19186 19187 19188 19189 19190 19191 19192 19193 19194 19195 19196 19197 19198 19199 19200 19201 19202 19203 19204 19205 19206 19207 19208 19209 19210 19211 19212 19213 19214 19215 19216 19217 19218 19219 19220 19221 19222 19223 19224 19225 19226 19227 19228 19229 19230 19231 19232 19233 19234 19235 19236 19237 19238 19239 19240 19241 19242 19243 19244 19245 19246 19247 19248 19249 19250 19251 19252 19253 19254 19255 19256 19257 19258 19259 19260 19261 19262 19263 19264 19265 19266 19267 19268 19269 19270 19271 19272 19273 19274 19275 19276 19277 19278 19279 19280 19281 19282 19283 19284 19285 19286 19287 19288 19289 19290 19291 19292 19293 19294 19295 19296 19297 19298 19299 19300 19301 19302 19303 19304 19305 19306 19307 19308 19309 19310 19311 19312 19313 19314 19315 19316 19317 19318 19319 19320 19321 19322 19323 19324 19325 19326 19327 19328 19329 19330 19331 19332 19333 19334 19335 19336 19337 19338 19339 19340 19341 19342 19343 19344 19345 19346 19347 19348 19349 19350 19351 19352 19353 19354 19355 19356 19357 19358 19359 19360 19361 19362 19363 19364 19365 19366 19367 19368 19369 19370 19371 19372 19373 19374 19375 19376 19377 19378 19379 19380 19381 19382 19383 19384 19385 19386 19387 19388 19389 19390 19391 19392 19393 19394 19395 19396 19397 19398 19399 19400 19401 19402 19403 19404 19405 19406 19407 19408 19409 19410 19411 19412 19413 19414 19415 19416 19417 19418 19419 19420 19421 19422 19423 19424 19425 19426 19427 19428 19429 19430 19431 19432 19433 19434 19435 19436 19437 19438 19439 19440 19441 19442 19443 19444 19445 19446 19447 19448 19449 19450 19451 19452 19453 19454 19455 19456 19457 19458 19459 19460 19461 19462 19463 19464 19465 19466 19467 19468 19469 19470 19471 19472 19473 19474 19475 19476 19477 19478 19479 19480 19481 19482 19483 19484 19485 19486 19487 19488 19489 19490 19491 19492 19493 19494 19495 19496 19497 19498 19499 19500 19501 19502 19503 19504 19505 19506 19507 19508 19509 19510 19511 19512 19513 19514 19515 19516 19517 19518 19519 19520 19521 19522 19523 19524 19525 19526 19527 19528 19529 19530 19531 19532 19533 19534 19535 19536 19537 19538 19539 19540 19541 19542 19543 19544 19545 19546 19547 19548 19549 19550 19551 19552 19553 19554 19555 19556 19557 19558 19559 19560 19561 19562 19563 19564 19565 19566 19567 19568 19569 19570 19571 19572 19573 19574 19575 19576 19577 19578 19579 19580 19581 19582 19583 19584 19585 19586 19587 19588 19589 19590 19591 19592 19593 19594 19595 19596 19597 19598 19599 19600 19601 19602 19603 19604 19605 19606 19607 19608 19609 19610 19611 19612 19613 19614 19615 19616 
19617 19618 19619 19620 19621 19622 19623 19624 19625 19626 19627 19628 19629 19630 19631 19632 19633 19634 19635 19636 19637 19638 19639 19640 19641 19642 19643 19644 19645 19646 19647 19648 19649 19650 19651 19652 19653 19654 19655 19656 19657 19658 19659 19660 19661 19662 19663 19664 19665 19666 19667 19668 19669 19670 19671 19672 19673 19674 19675 19676 19677 19678 19679 19680 19681 19682 19683 19684 19685 19686 19687 19688 19689 19690
|
//
// Copyright (c) 2017-2022 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#ifndef AMD_VULKAN_MEMORY_ALLOCATOR_H
#define AMD_VULKAN_MEMORY_ALLOCATOR_H
/** \mainpage Vulkan Memory Allocator
<b>Version 3.1.0-development</b>
Copyright (c) 2017-2022 Advanced Micro Devices, Inc. All rights reserved. \n
License: MIT
<b>API documentation divided into groups:</b> [Modules](modules.html)
\section main_table_of_contents Table of contents
- <b>User guide</b>
- \subpage quick_start
- [Project setup](@ref quick_start_project_setup)
- [Initialization](@ref quick_start_initialization)
- [Resource allocation](@ref quick_start_resource_allocation)
- \subpage choosing_memory_type
- [Usage](@ref choosing_memory_type_usage)
- [Required and preferred flags](@ref choosing_memory_type_required_preferred_flags)
- [Explicit memory types](@ref choosing_memory_type_explicit_memory_types)
- [Custom memory pools](@ref choosing_memory_type_custom_memory_pools)
- [Dedicated allocations](@ref choosing_memory_type_dedicated_allocations)
- \subpage memory_mapping
- [Mapping functions](@ref memory_mapping_mapping_functions)
- [Persistently mapped memory](@ref memory_mapping_persistently_mapped_memory)
- [Cache flush and invalidate](@ref memory_mapping_cache_control)
- \subpage staying_within_budget
- [Querying for budget](@ref staying_within_budget_querying_for_budget)
- [Controlling memory usage](@ref staying_within_budget_controlling_memory_usage)
- \subpage resource_aliasing
- \subpage custom_memory_pools
- [Choosing memory type index](@ref custom_memory_pools_MemTypeIndex)
- [Linear allocation algorithm](@ref linear_algorithm)
- [Free-at-once](@ref linear_algorithm_free_at_once)
- [Stack](@ref linear_algorithm_stack)
- [Double stack](@ref linear_algorithm_double_stack)
- [Ring buffer](@ref linear_algorithm_ring_buffer)
- \subpage defragmentation
- \subpage statistics
- [Numeric statistics](@ref statistics_numeric_statistics)
- [JSON dump](@ref statistics_json_dump)
- \subpage allocation_annotation
- [Allocation user data](@ref allocation_user_data)
- [Allocation names](@ref allocation_names)
- \subpage virtual_allocator
- \subpage debugging_memory_usage
- [Memory initialization](@ref debugging_memory_usage_initialization)
- [Margins](@ref debugging_memory_usage_margins)
- [Corruption detection](@ref debugging_memory_usage_corruption_detection)
- \subpage opengl_interop
- \subpage usage_patterns
- [GPU-only resource](@ref usage_patterns_gpu_only)
- [Staging copy for upload](@ref usage_patterns_staging_copy_upload)
- [Readback](@ref usage_patterns_readback)
- [Advanced data uploading](@ref usage_patterns_advanced_data_uploading)
- [Other use cases](@ref usage_patterns_other_use_cases)
- \subpage configuration
- [Pointers to Vulkan functions](@ref config_Vulkan_functions)
- [Custom host memory allocator](@ref custom_memory_allocator)
- [Device memory allocation callbacks](@ref allocation_callbacks)
- [Device heap memory limit](@ref heap_memory_limit)
- <b>Extension support</b>
- \subpage vk_khr_dedicated_allocation
- \subpage enabling_buffer_device_address
- \subpage vk_ext_memory_priority
- \subpage vk_amd_device_coherent_memory
- \subpage general_considerations
- [Thread safety](@ref general_considerations_thread_safety)
- [Versioning and compatibility](@ref general_considerations_versioning_and_compatibility)
- [Validation layer warnings](@ref general_considerations_validation_layer_warnings)
- [Allocation algorithm](@ref general_considerations_allocation_algorithm)
- [Features not supported](@ref general_considerations_features_not_supported)
\section main_see_also See also
- [**Product page on GPUOpen**](https://gpuopen.com/gaming-product/vulkan-memory-allocator/)
- [**Source repository on GitHub**](https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator)
\defgroup group_init Library initialization
\brief API elements related to the initialization and management of the entire library, especially the #VmaAllocator object.
\defgroup group_alloc Memory allocation
\brief API elements related to the allocation, deallocation, and management of Vulkan memory, buffers, and images.
The most basic ones are vmaCreateBuffer() and vmaCreateImage().
\defgroup group_virtual Virtual allocator
\brief API elements related to the mechanism of \ref virtual_allocator - using the core allocation algorithm
for user-defined purposes without allocating any real GPU memory.
\defgroup group_stats Statistics
\brief API elements that query the current status of the allocator, from memory usage and budget to a full dump of the internal state in JSON format.
See documentation chapter: \ref statistics.
*/
#ifdef __cplusplus
extern "C" {
#endif
#include <vulkan/vulkan.h>
#if !defined(VMA_VULKAN_VERSION)
#if defined(VK_VERSION_1_3)
#define VMA_VULKAN_VERSION 1003000
#elif defined(VK_VERSION_1_2)
#define VMA_VULKAN_VERSION 1002000
#elif defined(VK_VERSION_1_1)
#define VMA_VULKAN_VERSION 1001000
#else
#define VMA_VULKAN_VERSION 1000000
#endif
#endif
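// The detection above runs only when VMA_VULKAN_VERSION is not already defined, so you
// can pin the targeted Vulkan version yourself before including this header. A minimal
// sketch:
//
// #define VMA_VULKAN_VERSION 1001000 // force targeting Vulkan 1.1 even with newer headers
// #include "vk_mem_alloc.h"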
#if defined(__ANDROID__) && defined(VK_NO_PROTOTYPES) && VMA_STATIC_VULKAN_FUNCTIONS
extern PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr;
extern PFN_vkGetDeviceProcAddr vkGetDeviceProcAddr;
extern PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties;
extern PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
extern PFN_vkAllocateMemory vkAllocateMemory;
extern PFN_vkFreeMemory vkFreeMemory;
extern PFN_vkMapMemory vkMapMemory;
extern PFN_vkUnmapMemory vkUnmapMemory;
extern PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges;
extern PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges;
extern PFN_vkBindBufferMemory vkBindBufferMemory;
extern PFN_vkBindImageMemory vkBindImageMemory;
extern PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements;
extern PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
extern PFN_vkCreateBuffer vkCreateBuffer;
extern PFN_vkDestroyBuffer vkDestroyBuffer;
extern PFN_vkCreateImage vkCreateImage;
extern PFN_vkDestroyImage vkDestroyImage;
extern PFN_vkCmdCopyBuffer vkCmdCopyBuffer;
#if VMA_VULKAN_VERSION >= 1001000
extern PFN_vkGetBufferMemoryRequirements2 vkGetBufferMemoryRequirements2;
extern PFN_vkGetImageMemoryRequirements2 vkGetImageMemoryRequirements2;
extern PFN_vkBindBufferMemory2 vkBindBufferMemory2;
extern PFN_vkBindImageMemory2 vkBindImageMemory2;
extern PFN_vkGetPhysicalDeviceMemoryProperties2 vkGetPhysicalDeviceMemoryProperties2;
#endif // #if VMA_VULKAN_VERSION >= 1001000
#endif // #if defined(__ANDROID__) && defined(VK_NO_PROTOTYPES) && VMA_STATIC_VULKAN_FUNCTIONS
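// A minimal sketch (assumption: Vulkan loaded manually via dlopen on Android) of how
// exactly one translation unit in the application could define the symbols declared above:
//
// #include <dlfcn.h>
//
// PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr;
// PFN_vkGetDeviceProcAddr vkGetDeviceProcAddr;
// // ...define the remaining PFN_* symbols the same way...
//
// void LoadVulkanEntryPoints() // hypothetical helper, not part of VMA
// {
//     void* lib = dlopen("libvulkan.so", RTLD_NOW | RTLD_LOCAL);
//     vkGetInstanceProcAddr = (PFN_vkGetInstanceProcAddr)dlsym(lib, "vkGetInstanceProcAddr");
//     vkGetDeviceProcAddr = (PFN_vkGetDeviceProcAddr)dlsym(lib, "vkGetDeviceProcAddr");
//     // The remaining functions can then be fetched with vkGetInstanceProcAddr/vkGetDeviceProcAddr.
// }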
#if !defined(VMA_DEDICATED_ALLOCATION)
#if VK_KHR_get_memory_requirements2 && VK_KHR_dedicated_allocation
#define VMA_DEDICATED_ALLOCATION 1
#else
#define VMA_DEDICATED_ALLOCATION 0
#endif
#endif
#if !defined(VMA_BIND_MEMORY2)
#if VK_KHR_bind_memory2
#define VMA_BIND_MEMORY2 1
#else
#define VMA_BIND_MEMORY2 0
#endif
#endif
#if !defined(VMA_MEMORY_BUDGET)
#if VK_EXT_memory_budget && (VK_KHR_get_physical_device_properties2 || VMA_VULKAN_VERSION >= 1001000)
#define VMA_MEMORY_BUDGET 1
#else
#define VMA_MEMORY_BUDGET 0
#endif
#endif
// Defined to 1 when the VK_KHR_buffer_device_address device extension or the equivalent core Vulkan 1.2 feature is defined in Vulkan headers.
#if !defined(VMA_BUFFER_DEVICE_ADDRESS)
#if VK_KHR_buffer_device_address || VMA_VULKAN_VERSION >= 1002000
#define VMA_BUFFER_DEVICE_ADDRESS 1
#else
#define VMA_BUFFER_DEVICE_ADDRESS 0
#endif
#endif
// Defined to 1 when VK_EXT_memory_priority device extension is defined in Vulkan headers.
#if !defined(VMA_MEMORY_PRIORITY)
#if VK_EXT_memory_priority
#define VMA_MEMORY_PRIORITY 1
#else
#define VMA_MEMORY_PRIORITY 0
#endif
#endif
// Defined to 1 when VK_KHR_external_memory device extension is defined in Vulkan headers.
#if !defined(VMA_EXTERNAL_MEMORY)
#if VK_KHR_external_memory
#define VMA_EXTERNAL_MEMORY 1
#else
#define VMA_EXTERNAL_MEMORY 0
#endif
#endif
// Define these macros to decorate all public functions with additional code,
// before and after the returned type, appropriately. This may be useful for
// exporting the functions when compiling VMA as a separate library. Example:
// #define VMA_CALL_PRE __declspec(dllexport)
// #define VMA_CALL_POST __cdecl
#ifndef VMA_CALL_PRE
#define VMA_CALL_PRE
#endif
#ifndef VMA_CALL_POST
#define VMA_CALL_POST
#endif
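// For example, a Windows DLL build might use the following in the translation unit that
// compiles the implementation (a sketch; VMA_IMPLEMENTATION enables the implementation part):
//
// #define VMA_CALL_PRE __declspec(dllexport)
// #define VMA_CALL_POST __cdecl
// #define VMA_IMPLEMENTATION
// #include "vk_mem_alloc.h"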
// Define this macro to decorate pNext pointers with an attribute specifying the Vulkan
// structure that will be extended via the pNext chain.
#ifndef VMA_EXTENDS_VK_STRUCT
#define VMA_EXTENDS_VK_STRUCT(vkStruct)
#endif
// Define this macro to decorate pointers with an attribute specifying the
// length of the array they point to, if they are not null.
//
// The length may be one of:
// - The name of another parameter in the argument list where the pointer is declared
// - The name of another member in the struct where the pointer is declared
// - The name of a member of a struct type, meaning the value of that member in
// the context of the call. For example,
// VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryHeapCount")
// means the number of memory heaps available in the device associated
// with the VmaAllocator in question.
#ifndef VMA_LEN_IF_NOT_NULL
#define VMA_LEN_IF_NOT_NULL(len)
#endif
// The VMA_NULLABLE macro is defined to be _Nullable when compiling with Clang.
// see: https://clang.llvm.org/docs/AttributeReference.html#nullable
#ifndef VMA_NULLABLE
#ifdef __clang__
#define VMA_NULLABLE _Nullable
#else
#define VMA_NULLABLE
#endif
#endif
// The VMA_NOT_NULL macro is defined to be _Nonnull when compiling with Clang.
// see: https://clang.llvm.org/docs/AttributeReference.html#nonnull
#ifndef VMA_NOT_NULL
#ifdef __clang__
#define VMA_NOT_NULL _Nonnull
#else
#define VMA_NOT_NULL
#endif
#endif
// If non-dispatchable handles are represented as pointers then we can give
// them nullability annotations.
#ifndef VMA_NOT_NULL_NON_DISPATCHABLE
#if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__) ) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
#define VMA_NOT_NULL_NON_DISPATCHABLE VMA_NOT_NULL
#else
#define VMA_NOT_NULL_NON_DISPATCHABLE
#endif
#endif
#ifndef VMA_NULLABLE_NON_DISPATCHABLE
#if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__) ) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
#define VMA_NULLABLE_NON_DISPATCHABLE VMA_NULLABLE
#else
#define VMA_NULLABLE_NON_DISPATCHABLE
#endif
#endif
#ifndef VMA_STATS_STRING_ENABLED
#define VMA_STATS_STRING_ENABLED 1
#endif
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
//
// INTERFACE
//
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
// Sections for managing code placement in the file, used only for development purposes, e.g. for convenient folding inside an IDE.
#ifndef _VMA_ENUM_DECLARATIONS
/**
\addtogroup group_init
@{
*/
/// Flags for created #VmaAllocator.
typedef enum VmaAllocatorCreateFlagBits
{
/** \brief Allocator and all objects created from it will not be synchronized internally, so you must guarantee they are used from only one thread at a time or synchronized externally by you.
Using this flag may increase performance because internal mutexes are not used.
*/
VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT = 0x00000001,
/** \brief Enables usage of VK_KHR_dedicated_allocation extension.
The flag works only if VmaAllocatorCreateInfo::vulkanApiVersion `== VK_API_VERSION_1_0`.
When it is `VK_API_VERSION_1_1`, the flag is ignored because the extension has been promoted to Vulkan 1.1.
Using this extension will automatically allocate dedicated blocks of memory for
some buffers and images instead of suballocating space for them out of bigger
memory blocks (as if you explicitly used #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT
flag) when it is recommended by the driver. It may improve performance on some
GPUs.
You may set this flag only if you found out that the following device extensions are
supported, you enabled them while creating Vulkan device passed as
VmaAllocatorCreateInfo::device, and you want them to be used internally by this
library:
- VK_KHR_get_memory_requirements2 (device extension)
- VK_KHR_dedicated_allocation (device extension)
When this flag is set, you may see the following warnings reported by the Vulkan
validation layer. You can ignore them.
> vkBindBufferMemory(): Binding memory to buffer 0x2d but vkGetBufferMemoryRequirements() has not been called on that buffer.
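A minimal sketch of enabling this flag, assuming `physicalDevice`, `device`, and `instance`
are valid handles and both extensions were enabled when creating the device:
\code
VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_0;
allocatorCreateInfo.physicalDevice = physicalDevice;
allocatorCreateInfo.device = device;
allocatorCreateInfo.instance = instance;
allocatorCreateInfo.flags = VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT;

VmaAllocator allocator;
vmaCreateAllocator(&allocatorCreateInfo, &allocator);
\endcode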
*/
VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT = 0x00000002,
/**
Enables usage of VK_KHR_bind_memory2 extension.
The flag works only if VmaAllocatorCreateInfo::vulkanApiVersion `== VK_API_VERSION_1_0`.
When it is `VK_API_VERSION_1_1`, the flag is ignored because the extension has been promoted to Vulkan 1.1.
You may set this flag only if you found out that this device extension is supported,
you enabled it while creating Vulkan device passed as VmaAllocatorCreateInfo::device,
and you want it to be used internally by this library.
The extension provides functions `vkBindBufferMemory2KHR` and `vkBindImageMemory2KHR`,
which allow you to pass a chain of `pNext` structures while binding.
This flag is required if you use `pNext` parameter in vmaBindBufferMemory2() or vmaBindImageMemory2().
*/
VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT = 0x00000004,
/**
Enables usage of VK_EXT_memory_budget extension.
You may set this flag only if you found out that this device extension is supported,
you enabled it while creating Vulkan device passed as VmaAllocatorCreateInfo::device,
and you want it to be used internally by this library, along with another instance extension
VK_KHR_get_physical_device_properties2, which is required by it (or Vulkan 1.1, where this extension is promoted).
The extension provides a query for current memory usage and budget, which will probably
be more accurate than the estimation used by the library otherwise.
*/
VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT = 0x00000008,
/**
Enables usage of VK_AMD_device_coherent_memory extension.
You may set this flag only if you:
- found out that this device extension is supported and enabled it while creating Vulkan device passed as VmaAllocatorCreateInfo::device,
- checked that `VkPhysicalDeviceCoherentMemoryFeaturesAMD::deviceCoherentMemory` is true and set it while creating the Vulkan device,
- want it to be used internally by this library.
The extension and accompanying device feature provide access to memory types with
`VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD` and `VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD` flags.
They are useful mostly for writing breadcrumb markers - a common method for debugging GPU crashes/hangs/TDRs.
When the extension is not enabled, such memory types are still enumerated, but their usage is illegal.
To protect against this error, if you don't create the allocator with this flag, it will refuse to allocate any memory or create a custom pool in such a memory type,
returning `VK_ERROR_FEATURE_NOT_PRESENT`.
*/
VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT = 0x00000010,
/**
Enables usage of "buffer device address" feature, which allows you to use function
`vkGetBufferDeviceAddress*` to get a raw GPU pointer to a buffer and pass it for use inside a shader.
You may set this flag only if you:
1. (For Vulkan version < 1.2) Found as available and enabled device extension
VK_KHR_buffer_device_address.
This extension is promoted to core Vulkan 1.2.
2. Found as available and enabled device feature `VkPhysicalDeviceBufferDeviceAddressFeatures::bufferDeviceAddress`.
When this flag is set, you can create buffers with `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT` using VMA.
The library automatically adds `VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT` to
allocated memory blocks wherever it might be needed.
For more information, see documentation chapter \ref enabling_buffer_device_address.
*/
VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT = 0x00000020,
/**
Enables usage of VK_EXT_memory_priority extension in the library.
You may set this flag only if you found this device extension available and enabled it,
along with `VkPhysicalDeviceMemoryPriorityFeaturesEXT::memoryPriority == VK_TRUE`,
while creating Vulkan device passed as VmaAllocatorCreateInfo::device.
When this flag is used, VmaAllocationCreateInfo::priority and VmaPoolCreateInfo::priority
are used to set priorities of allocated Vulkan memory. Without it, these variables are ignored.
A priority must be a floating-point value between 0 and 1, indicating the priority of the allocation relative to other memory allocations.
Larger values are higher priority. The granularity of the priorities is implementation-dependent.
It is automatically passed to every call to `vkAllocateMemory` done by the library using structure `VkMemoryPriorityAllocateInfoEXT`.
The value to be used for default priority is 0.5.
For more details, see the documentation of the VK_EXT_memory_priority extension.
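For example, a sketch of requesting the highest priority for a dedicated allocation,
assuming the allocator was created with this flag:
\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
allocCreateInfo.priority = 1.0f; // Relative to other allocations; the default is 0.5.
\endcode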
*/
VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT = 0x00000040,
VMA_ALLOCATOR_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaAllocatorCreateFlagBits;
/// See #VmaAllocatorCreateFlagBits.
typedef VkFlags VmaAllocatorCreateFlags;
/** @} */
/**
\addtogroup group_alloc
@{
*/
/// \brief Intended usage of the allocated memory.
typedef enum VmaMemoryUsage
{
/** No intended memory usage specified.
Use other members of VmaAllocationCreateInfo to specify your requirements.
*/
VMA_MEMORY_USAGE_UNKNOWN = 0,
/**
\deprecated Obsolete, preserved for backward compatibility.
Prefers `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
*/
VMA_MEMORY_USAGE_GPU_ONLY = 1,
/**
\deprecated Obsolete, preserved for backward compatibility.
Guarantees `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT` and `VK_MEMORY_PROPERTY_HOST_COHERENT_BIT`.
*/
VMA_MEMORY_USAGE_CPU_ONLY = 2,
/**
\deprecated Obsolete, preserved for backward compatibility.
Guarantees `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`, prefers `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
*/
VMA_MEMORY_USAGE_CPU_TO_GPU = 3,
/**
\deprecated Obsolete, preserved for backward compatibility.
Guarantees `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`, prefers `VK_MEMORY_PROPERTY_HOST_CACHED_BIT`.
*/
VMA_MEMORY_USAGE_GPU_TO_CPU = 4,
/**
\deprecated Obsolete, preserved for backward compatibility.
Prefers not `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
*/
VMA_MEMORY_USAGE_CPU_COPY = 5,
/**
Lazily allocated GPU memory having `VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT`.
Exists mostly on mobile platforms. Using it on a desktop PC or other GPUs with no such memory type present will make the allocation fail.
Usage: Memory for transient attachment images (color attachments, depth attachments etc.), created with `VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT`.
Allocations with this usage are always created as dedicated - it implies #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
*/
VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED = 6,
/**
Selects best memory type automatically.
This flag is recommended for most common use cases.
When using this flag, if you want to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT),
you must pass one of the flags: #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
in VmaAllocationCreateInfo::flags.
It can be used only with functions that let the library know `VkBufferCreateInfo` or `VkImageCreateInfo`, e.g.
vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo()
and not with generic memory allocation functions.
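A minimal sketch of creating a buffer this way, assuming `allocator` is a valid #VmaAllocator:
\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, NULL);
\endcode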
*/
VMA_MEMORY_USAGE_AUTO = 7,
/**
Selects best memory type automatically with preference for GPU (device) memory.
When using this flag, if you want to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT),
you must pass one of the flags: #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
in VmaAllocationCreateInfo::flags.
It can be used only with functions that let the library know `VkBufferCreateInfo` or `VkImageCreateInfo`, e.g.
vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo()
and not with generic memory allocation functions.
*/
VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE = 8,
/**
Selects best memory type automatically with preference for CPU (host) memory.
When using this flag, if you want to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT),
you must pass one of the flags: #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
in VmaAllocationCreateInfo::flags.
It can be used only with functions that let the library know `VkBufferCreateInfo` or `VkImageCreateInfo`, e.g.
vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo()
and not with generic memory allocation functions.
*/
VMA_MEMORY_USAGE_AUTO_PREFER_HOST = 9,
VMA_MEMORY_USAGE_MAX_ENUM = 0x7FFFFFFF
} VmaMemoryUsage;
/// Flags to be passed as VmaAllocationCreateInfo::flags.
typedef enum VmaAllocationCreateFlagBits
{
/** \brief Set this flag if the allocation should have its own memory block.
Use it for special, big resources, like fullscreen images used as attachments.
*/
VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT = 0x00000001,
/** \brief Set this flag to only try to allocate from existing `VkDeviceMemory` blocks and never create new such block.
If a new allocation cannot be placed in any of the existing blocks, the allocation
fails with `VK_ERROR_OUT_OF_DEVICE_MEMORY` error.
You should not use #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT and
#VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT at the same time. It makes no sense.
*/
VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT = 0x00000002,
/** \brief Set this flag to use memory that will be persistently mapped, and to retrieve a pointer to it.
Pointer to mapped memory will be returned through VmaAllocationInfo::pMappedData.
It is valid to use this flag for an allocation made from a memory type that is not
`HOST_VISIBLE`. This flag is then ignored and memory is not mapped. This is
useful if you need an allocation that is efficient to use on GPU
(`DEVICE_LOCAL`) and still want to map it directly if possible on platforms that
support it (e.g. Intel GPU).
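A sketch of creating a persistently mapped, host-writable buffer, assuming `allocator`
and `bufCreateInfo` are prepared as in the examples above:
\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
    VMA_ALLOCATION_CREATE_MAPPED_BIT;

VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
// allocInfo.pMappedData is non-null if the memory ended up HOST_VISIBLE.
\endcode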
*/
VMA_ALLOCATION_CREATE_MAPPED_BIT = 0x00000004,
/** \deprecated Preserved for backward compatibility. Consider using vmaSetAllocationName() instead.
Set this flag to treat VmaAllocationCreateInfo::pUserData as a pointer to a
null-terminated string. Instead of copying the pointer value, a local copy of the
string is made and stored in allocation's `pName`. The string is automatically
freed together with the allocation. It is also used in vmaBuildStatsString().
*/
VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT = 0x00000020,
/** Allocation will be created from the upper stack in a double stack pool.
This flag is only allowed for custom pools created with #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT flag.
*/
VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT = 0x00000040,
/** Create both buffer/image and allocation, but don't bind them together.
It is useful when you want to perform the binding yourself, e.g. to do more advanced binding using some extensions.
The flag is meaningful only with functions that bind by default: vmaCreateBuffer(), vmaCreateImage().
Otherwise it is ignored.
If you want to make sure the new buffer/image is not tied to the new memory allocation
through `VkMemoryDedicatedAllocateInfoKHR` structure in case the allocation ends up in its own memory block,
use also flag #VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT.
*/
VMA_ALLOCATION_CREATE_DONT_BIND_BIT = 0x00000080,
/** Create the allocation only if the additional device memory required for it, if any, won't exceed
the memory budget. Otherwise the function returns `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
*/
VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT = 0x00000100,
/** \brief Set this flag if the allocated memory will have aliasing resources.
Usage of this flag prevents supplying `VkMemoryDedicatedAllocateInfoKHR` when #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT is specified.
Otherwise, the created dedicated memory will not be suitable for aliasing resources, resulting in Vulkan Validation Layer errors.
*/
VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT = 0x00000200,
/**
Requests the possibility to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT).
- If you use #VMA_MEMORY_USAGE_AUTO or other `VMA_MEMORY_USAGE_AUTO*` value,
you must use this flag to be able to map the allocation. Otherwise, mapping is incorrect.
- If you use other value of #VmaMemoryUsage, this flag is ignored and mapping is always possible in memory types that are `HOST_VISIBLE`.
This includes allocations created in \ref custom_memory_pools.
Declares that mapped memory will only be written sequentially, e.g. using `memcpy()` or a loop writing number-by-number,
never read or accessed randomly, so a memory type can be selected that is uncached and write-combined.
\warning Violating this declaration may work correctly, but will likely be very slow.
Watch out for implicit reads introduced by doing e.g. `pMappedData[i] += x;`
Better prepare your data in a local variable and `memcpy()` it to the mapped pointer all at once.
*/
VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT = 0x00000400,
/**
Requests the possibility to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT).
- If you use #VMA_MEMORY_USAGE_AUTO or other `VMA_MEMORY_USAGE_AUTO*` value,
you must use this flag to be able to map the allocation. Otherwise, mapping is incorrect.
- If you use other value of #VmaMemoryUsage, this flag is ignored and mapping is always possible in memory types that are `HOST_VISIBLE`.
This includes allocations created in \ref custom_memory_pools.
Declares that mapped memory can be read, written, and accessed in random order,
so a `HOST_CACHED` memory type is required.
*/
VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT = 0x00000800,
/**
Together with #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT,
it says that despite the request for host access, a non-`HOST_VISIBLE` memory type can be selected
if it may improve performance.
By using this flag, you declare that you will check if the allocation ended up in a `HOST_VISIBLE` memory type
(e.g. using vmaGetAllocationMemoryProperties()) and if not, you will create some "staging" buffer and
issue an explicit transfer to write/read your data.
To prepare for this possibility, don't forget to add appropriate flags like
`VK_BUFFER_USAGE_TRANSFER_DST_BIT`, `VK_BUFFER_USAGE_TRANSFER_SRC_BIT` to the parameters of created buffer or image.
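A sketch of the check you thereby declare to perform, assuming `alloc` is the resulting #VmaAllocation:
\code
VkMemoryPropertyFlags memPropFlags;
vmaGetAllocationMemoryProperties(allocator, alloc, &memPropFlags);
if((memPropFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
{
    // The allocation is mappable - write to it directly.
}
else
{
    // Not HOST_VISIBLE - write via a staging buffer and an explicit transfer.
}
\endcode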
*/
VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT = 0x00001000,
/** Allocation strategy that chooses smallest possible free range for the allocation
to minimize memory usage and fragmentation, possibly at the expense of allocation time.
*/
VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT = 0x00010000,
/** Allocation strategy that chooses first suitable free range for the allocation -
not necessarily in terms of the smallest offset but the one that is easiest and fastest to find
to minimize allocation time, possibly at the expense of allocation quality.
*/
VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT = 0x00020000,
/** Allocation strategy that always chooses the lowest offset in available space.
This is not the most efficient strategy but achieves highly packed data.
Used internally by defragmentation, not recommended in typical usage.
*/
VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT = 0x00040000,
/** Alias to #VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT.
*/
VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
/** Alias to #VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT.
*/
VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT,
/** A bit mask to extract only `STRATEGY` bits from the entire set of flags.
*/
VMA_ALLOCATION_CREATE_STRATEGY_MASK =
VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT |
VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT |
VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
VMA_ALLOCATION_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaAllocationCreateFlagBits;
/// See #VmaAllocationCreateFlagBits.
typedef VkFlags VmaAllocationCreateFlags;
/// Flags to be passed as VmaPoolCreateInfo::flags.
typedef enum VmaPoolCreateFlagBits
{
/** \brief Use this flag if you always allocate only buffers and linear images or only optimal images out of this pool and so Buffer-Image Granularity can be ignored.
This is an optional optimization flag.
If you always allocate using vmaCreateBuffer(), vmaCreateImage(),
vmaAllocateMemoryForBuffer(), then you don't need to use it because the allocator
knows the exact type of your allocations, so it can handle Buffer-Image Granularity
in the optimal way.
If you also allocate using vmaAllocateMemoryForImage() or vmaAllocateMemory(),
the exact type of such allocations is not known, so the allocator must be conservative
in handling Buffer-Image Granularity, which can lead to suboptimal allocation
(wasted memory). In that case, if you can make sure you always allocate only
buffers and linear images or only optimal images out of this pool, use this flag
to make the allocator disregard Buffer-Image Granularity and so make allocations
faster and more optimal.
*/
VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT = 0x00000002,
/** \brief Enables alternative, linear allocation algorithm in this pool.
Specify this flag to enable the linear allocation algorithm, which always creates
new allocations after the last one and doesn't reuse space from allocations freed in
between. It trades memory consumption for simplified algorithm and data
structure, which has better performance and uses less memory for metadata.
By using this flag, you can achieve behavior of free-at-once, stack,
ring buffer, and double stack.
For details, see documentation chapter \ref linear_algorithm.
*/
VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT = 0x00000004,
/** Bit mask to extract only `ALGORITHM` bits from the entire set of flags.
*/
VMA_POOL_CREATE_ALGORITHM_MASK =
VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT,
VMA_POOL_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaPoolCreateFlagBits;
/// Flags to be passed as VmaPoolCreateInfo::flags. See #VmaPoolCreateFlagBits.
typedef VkFlags VmaPoolCreateFlags;
/// Flags to be passed as VmaDefragmentationInfo::flags.
typedef enum VmaDefragmentationFlagBits
{
/** \brief Use simple but fast algorithm for defragmentation.
May not achieve best results but requires the least time to compute and the fewest allocations to copy.
*/
VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT = 0x1,
/** \brief Default defragmentation algorithm, applied also when no `ALGORITHM` flag is specified.
Offers a balance between defragmentation quality and the amount of allocations and bytes that need to be moved.
*/
VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT = 0x2,
/** \brief Perform full defragmentation of memory.
Can result in notably more time to compute and allocations to copy, but will achieve best memory packing.
*/
VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT = 0x4,
/** \brief Use the most robust algorithm at the cost of time to compute and number of copies to make.
Only available when bufferImageGranularity is greater than 1, since it aims to reduce
alignment issues between different types of resources.
Otherwise falls back to the same behavior as #VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT.
*/
VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT = 0x8,
/// A bit mask to extract only `ALGORITHM` bits from the entire set of flags.
VMA_DEFRAGMENTATION_FLAG_ALGORITHM_MASK =
VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT |
VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT |
VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT |
VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT,
VMA_DEFRAGMENTATION_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaDefragmentationFlagBits;
/// See #VmaDefragmentationFlagBits.
typedef VkFlags VmaDefragmentationFlags;
/// Operation performed on single defragmentation move. See structure #VmaDefragmentationMove.
typedef enum VmaDefragmentationMoveOperation
{
/// Buffer/image has been recreated at `dstTmpAllocation`, data has been copied, old buffer/image has been destroyed. `srcAllocation` should be changed to point to the new place. This is the default value set by vmaBeginDefragmentationPass().
VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY = 0,
/// Set this value if you cannot move the allocation. New place reserved at `dstTmpAllocation` will be freed. `srcAllocation` will remain unchanged.
VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE = 1,
/// Set this value if you decide to abandon the allocation and you destroyed the buffer/image. New place reserved at `dstTmpAllocation` will be freed, along with `srcAllocation`, which will be destroyed.
VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY = 2,
} VmaDefragmentationMoveOperation;
/** @} */
/**
\addtogroup group_virtual
@{
*/
/// Flags to be passed as VmaVirtualBlockCreateInfo::flags.
typedef enum VmaVirtualBlockCreateFlagBits
{
/** \brief Enables alternative, linear allocation algorithm in this virtual block.
Specify this flag to enable the linear allocation algorithm, which always creates
new allocations after the last one and doesn't reuse space from allocations freed in
between. It trades memory consumption for simplified algorithm and data
structure, which has better performance and uses less memory for metadata.
By using this flag, you can achieve behavior of free-at-once, stack,
ring buffer, and double stack.
For details, see documentation chapter \ref linear_algorithm.
*/
VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT = 0x00000001,
/** \brief Bit mask to extract only `ALGORITHM` bits from the entire set of flags.
*/
VMA_VIRTUAL_BLOCK_CREATE_ALGORITHM_MASK =
VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT,
VMA_VIRTUAL_BLOCK_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaVirtualBlockCreateFlagBits;
/// Flags to be passed as VmaVirtualBlockCreateInfo::flags. See #VmaVirtualBlockCreateFlagBits.
typedef VkFlags VmaVirtualBlockCreateFlags;
/// Flags to be passed as VmaVirtualAllocationCreateInfo::flags.
typedef enum VmaVirtualAllocationCreateFlagBits
{
/** \brief Allocation will be created from the upper stack in a double stack pool.
This flag is only allowed for virtual blocks created with #VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT flag.
*/
VMA_VIRTUAL_ALLOCATION_CREATE_UPPER_ADDRESS_BIT = VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT,
/** \brief Allocation strategy that tries to minimize memory usage.
*/
VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
/** \brief Allocation strategy that tries to minimize allocation time.
*/
VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT,
/** Allocation strategy that always chooses the lowest offset in available space.
This is not the most efficient strategy but achieves highly packed data.
*/
VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
/** \brief A bit mask to extract only `STRATEGY` bits from the entire set of flags.
These strategy flags are binary compatible with equivalent flags in #VmaAllocationCreateFlagBits.
*/
VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MASK = VMA_ALLOCATION_CREATE_STRATEGY_MASK,
VMA_VIRTUAL_ALLOCATION_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaVirtualAllocationCreateFlagBits;
/// Flags to be passed as VmaVirtualAllocationCreateInfo::flags. See #VmaVirtualAllocationCreateFlagBits.
typedef VkFlags VmaVirtualAllocationCreateFlags;
/** @} */
#endif // _VMA_ENUM_DECLARATIONS
#ifndef _VMA_DATA_TYPES_DECLARATIONS
/**
\addtogroup group_init
@{ */
/** \struct VmaAllocator
\brief Represents the main, initialized object of this library.
Fill structure #VmaAllocatorCreateInfo and call function vmaCreateAllocator() to create it.
Call function vmaDestroyAllocator() to destroy it.
It is recommended to create just one object of this type per `VkDevice` object,
right after Vulkan is initialized, and keep it alive until just before the Vulkan device is destroyed.
*/
VK_DEFINE_HANDLE(VmaAllocator)
/** @} */
/**
\addtogroup group_alloc
@{
*/
/** \struct VmaPool
\brief Represents a custom memory pool.
Fill structure VmaPoolCreateInfo and call function vmaCreatePool() to create it.
Call function vmaDestroyPool() to destroy it.
For more information see [Custom memory pools](@ref choosing_memory_type_custom_memory_pools).
*/
VK_DEFINE_HANDLE(VmaPool)
/** \struct VmaAllocation
\brief Represents a single memory allocation.
It may be either a dedicated block of `VkDeviceMemory` or a specific region of a bigger block of this type
plus a unique offset.
There are multiple ways to create such an object.
You need to fill structure VmaAllocationCreateInfo.
For more information see [Choosing memory type](@ref choosing_memory_type).
Although the library provides convenience functions that create Vulkan buffer or image,
allocate memory for it and bind them together,
binding of the allocation to a buffer or an image is out of scope of the allocation itself.
An allocation object can exist without a buffer/image bound to it;
the binding can be done manually by the user, and the buffer/image can be destroyed
independently of the destruction of the allocation.
The object also remembers its size and some other information.
To retrieve this information, use function vmaGetAllocationInfo() and inspect
returned structure VmaAllocationInfo.
*/
VK_DEFINE_HANDLE(VmaAllocation)
/** \struct VmaDefragmentationContext
\brief An opaque object that represents started defragmentation process.
Fill structure #VmaDefragmentationInfo and call function vmaBeginDefragmentation() to create it.
Call function vmaEndDefragmentation() to destroy it.
*/
VK_DEFINE_HANDLE(VmaDefragmentationContext)
/** @} */
/**
\addtogroup group_virtual
@{
*/
/** \struct VmaVirtualAllocation
\brief Represents single memory allocation done inside VmaVirtualBlock.
Use it as a unique identifier of a virtual allocation within a single block.
Use value `VK_NULL_HANDLE` to represent a null/invalid allocation.
*/
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VmaVirtualAllocation)
/** @} */
/**
\addtogroup group_virtual
@{
*/
/** \struct VmaVirtualBlock
\brief Handle to a virtual block object that allows you to use the core allocation algorithm without allocating any real GPU memory.
Fill in #VmaVirtualBlockCreateInfo structure and use vmaCreateVirtualBlock() to create it. Use vmaDestroyVirtualBlock() to destroy it.
For more information, see documentation chapter \ref virtual_allocator.
This object is not thread-safe - it must not be used from multiple threads simultaneously and must be synchronized externally.
*/
VK_DEFINE_HANDLE(VmaVirtualBlock)
/** @} */
/**
\addtogroup group_init
@{
*/
/// Callback function called after successful vkAllocateMemory.
typedef void (VKAPI_PTR* PFN_vmaAllocateDeviceMemoryFunction)(
VmaAllocator VMA_NOT_NULL allocator,
uint32_t memoryType,
VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE memory,
VkDeviceSize size,
void* VMA_NULLABLE pUserData);
/// Callback function called before vkFreeMemory.
typedef void (VKAPI_PTR* PFN_vmaFreeDeviceMemoryFunction)(
VmaAllocator VMA_NOT_NULL allocator,
uint32_t memoryType,
VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE memory,
VkDeviceSize size,
void* VMA_NULLABLE pUserData);
/** \brief Set of callbacks that the library will call for `vkAllocateMemory` and `vkFreeMemory`.
Provided for informative purposes, e.g. to gather statistics about the number of
allocations or the total amount of memory allocated in Vulkan.
Used in VmaAllocatorCreateInfo::pDeviceMemoryCallbacks.
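A minimal sketch of counting device memory allocations, with `MyAllocateCallback` and
`g_AllocCount` being hypothetical names:
\code
static uint32_t g_AllocCount = 0;
static void VKAPI_PTR MyAllocateCallback(VmaAllocator allocator, uint32_t memoryType,
    VkDeviceMemory memory, VkDeviceSize size, void* pUserData)
{
    ++g_AllocCount;
}

VmaDeviceMemoryCallbacks deviceMemoryCallbacks = {};
deviceMemoryCallbacks.pfnAllocate = MyAllocateCallback;
\endcode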
*/
typedef struct VmaDeviceMemoryCallbacks
{
/// Optional, can be null.
PFN_vmaAllocateDeviceMemoryFunction VMA_NULLABLE pfnAllocate;
/// Optional, can be null.
PFN_vmaFreeDeviceMemoryFunction VMA_NULLABLE pfnFree;
/// Optional, can be null.
void* VMA_NULLABLE pUserData;
} VmaDeviceMemoryCallbacks;
/** \brief Pointers to some Vulkan functions - a subset used by the library.
Used in VmaAllocatorCreateInfo::pVulkanFunctions.
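For example, when the library is compiled with `VMA_DYNAMIC_VULKAN_FUNCTIONS`, a sketch may
look like this - only the two entry points are provided and the remaining pointers are
fetched by the library:
\code
VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;

VmaAllocatorCreateInfo allocatorCreateInfo = {};
// ...fill other members...
allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;
\endcode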
*/
typedef struct VmaVulkanFunctions
{
/// Required when using VMA_DYNAMIC_VULKAN_FUNCTIONS.
PFN_vkGetInstanceProcAddr VMA_NULLABLE vkGetInstanceProcAddr;
/// Required when using VMA_DYNAMIC_VULKAN_FUNCTIONS.
PFN_vkGetDeviceProcAddr VMA_NULLABLE vkGetDeviceProcAddr;
PFN_vkGetPhysicalDeviceProperties VMA_NULLABLE vkGetPhysicalDeviceProperties;
PFN_vkGetPhysicalDeviceMemoryProperties VMA_NULLABLE vkGetPhysicalDeviceMemoryProperties;
PFN_vkAllocateMemory VMA_NULLABLE vkAllocateMemory;
PFN_vkFreeMemory VMA_NULLABLE vkFreeMemory;
PFN_vkMapMemory VMA_NULLABLE vkMapMemory;
PFN_vkUnmapMemory VMA_NULLABLE vkUnmapMemory;
PFN_vkFlushMappedMemoryRanges VMA_NULLABLE vkFlushMappedMemoryRanges;
PFN_vkInvalidateMappedMemoryRanges VMA_NULLABLE vkInvalidateMappedMemoryRanges;
PFN_vkBindBufferMemory VMA_NULLABLE vkBindBufferMemory;
PFN_vkBindImageMemory VMA_NULLABLE vkBindImageMemory;
PFN_vkGetBufferMemoryRequirements VMA_NULLABLE vkGetBufferMemoryRequirements;
PFN_vkGetImageMemoryRequirements VMA_NULLABLE vkGetImageMemoryRequirements;
PFN_vkCreateBuffer VMA_NULLABLE vkCreateBuffer;
PFN_vkDestroyBuffer VMA_NULLABLE vkDestroyBuffer;
PFN_vkCreateImage VMA_NULLABLE vkCreateImage;
PFN_vkDestroyImage VMA_NULLABLE vkDestroyImage;
PFN_vkCmdCopyBuffer VMA_NULLABLE vkCmdCopyBuffer;
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
/// Fetch "vkGetBufferMemoryRequirements2" on Vulkan >= 1.1, fetch "vkGetBufferMemoryRequirements2KHR" when using VK_KHR_dedicated_allocation extension.
PFN_vkGetBufferMemoryRequirements2KHR VMA_NULLABLE vkGetBufferMemoryRequirements2KHR;
/// Fetch "vkGetImageMemoryRequirements2" on Vulkan >= 1.1, fetch "vkGetImageMemoryRequirements2KHR" when using VK_KHR_dedicated_allocation extension.
PFN_vkGetImageMemoryRequirements2KHR VMA_NULLABLE vkGetImageMemoryRequirements2KHR;
#endif
#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
/// Fetch "vkBindBufferMemory2" on Vulkan >= 1.1, fetch "vkBindBufferMemory2KHR" when using VK_KHR_bind_memory2 extension.
PFN_vkBindBufferMemory2KHR VMA_NULLABLE vkBindBufferMemory2KHR;
/// Fetch "vkBindImageMemory2" on Vulkan >= 1.1, fetch "vkBindImageMemory2KHR" when using VK_KHR_bind_memory2 extension.
PFN_vkBindImageMemory2KHR VMA_NULLABLE vkBindImageMemory2KHR;
#endif
#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
PFN_vkGetPhysicalDeviceMemoryProperties2KHR VMA_NULLABLE vkGetPhysicalDeviceMemoryProperties2KHR;
#endif
#if VMA_VULKAN_VERSION >= 1003000
/// Fetch from "vkGetDeviceBufferMemoryRequirements" on Vulkan >= 1.3, but you can also fetch it from "vkGetDeviceBufferMemoryRequirementsKHR" if you enabled extension VK_KHR_maintenance4.
PFN_vkGetDeviceBufferMemoryRequirements VMA_NULLABLE vkGetDeviceBufferMemoryRequirements;
/// Fetch from "vkGetDeviceImageMemoryRequirements" on Vulkan >= 1.3, but you can also fetch it from "vkGetDeviceImageMemoryRequirementsKHR" if you enabled extension VK_KHR_maintenance4.
PFN_vkGetDeviceImageMemoryRequirements VMA_NULLABLE vkGetDeviceImageMemoryRequirements;
#endif
} VmaVulkanFunctions;
/// Description of an Allocator to be created.
typedef struct VmaAllocatorCreateInfo
{
/// Flags for created allocator. Use #VmaAllocatorCreateFlagBits enum.
VmaAllocatorCreateFlags flags;
/// Vulkan physical device.
/** It must be valid throughout the whole lifetime of the created allocator. */
VkPhysicalDevice VMA_NOT_NULL physicalDevice;
/// Vulkan device.
/** It must be valid throughout the whole lifetime of the created allocator. */
VkDevice VMA_NOT_NULL device;
/// Preferred size of a single `VkDeviceMemory` block to be allocated from large heaps > 1 GiB. Optional.
/** Set to 0 to use default, which is currently 256 MiB. */
VkDeviceSize preferredLargeHeapBlockSize;
/// Custom CPU memory allocation callbacks. Optional.
/** Optional, can be null. When specified, will also be used for all CPU-side memory allocations. */
const VkAllocationCallbacks* VMA_NULLABLE pAllocationCallbacks;
/// Informative callbacks for `vkAllocateMemory`, `vkFreeMemory`. Optional.
/** Optional, can be null. */
const VmaDeviceMemoryCallbacks* VMA_NULLABLE pDeviceMemoryCallbacks;
/** \brief Either null or a pointer to an array of limits on maximum number of bytes that can be allocated out of particular Vulkan memory heap.
If not NULL, it must be a pointer to an array of
`VkPhysicalDeviceMemoryProperties::memoryHeapCount` elements, defining the limit on the
maximum number of bytes that can be allocated out of a particular Vulkan memory
heap.
Any of the elements may be equal to `VK_WHOLE_SIZE`, which means no limit on that
heap. This is also the default in case of `pHeapSizeLimit` = NULL.
If there is a limit defined for a heap:
- If the user tries to allocate more memory from that heap using this allocator,
the allocation fails with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
- If the limit is smaller than the heap size reported in `VkMemoryHeap::size`, the
value of this limit will be reported instead when using vmaGetMemoryProperties().
Warning! Using this feature may not be equivalent to installing a GPU with a
smaller amount of memory, because the graphics driver doesn't necessarily fail new
allocations with `VK_ERROR_OUT_OF_DEVICE_MEMORY` result when memory capacity is
exceeded. It may return success and just silently migrate some device memory
blocks to system RAM. This driver behavior can also be controlled using the
VK_AMD_memory_overallocation_behavior extension.
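A sketch of capping heap 0, with the 512 MiB value being just an example
(`VK_MAX_MEMORY_HEAPS` elements are always enough to cover `memoryHeapCount`):
\code
VkDeviceSize heapSizeLimit[VK_MAX_MEMORY_HEAPS];
for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    heapSizeLimit[i] = VK_WHOLE_SIZE; // No limit by default.
heapSizeLimit[0] = 512ull * 1024 * 1024;

allocatorCreateInfo.pHeapSizeLimit = heapSizeLimit;
\endcode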
*/
const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryHeapCount") pHeapSizeLimit;
/** \brief Pointers to Vulkan functions. Can be null.
For details see [Pointers to Vulkan functions](@ref config_Vulkan_functions).
*/
const VmaVulkanFunctions* VMA_NULLABLE pVulkanFunctions;
/** \brief Handle to Vulkan instance object.
Starting from version 3.0.0 this member is no longer optional, it must be set!
*/
VkInstance VMA_NOT_NULL instance;
/** \brief Optional. The highest version of Vulkan that the application is designed to use.
It must be a value in the format as created by macro `VK_MAKE_VERSION` or a constant like: `VK_API_VERSION_1_1`, `VK_API_VERSION_1_0`.
The patch version number specified is ignored. Only the major and minor versions are considered.
It must be less than or equal (preferably equal) to the value passed to `vkCreateInstance` as `VkApplicationInfo::apiVersion`.
Only versions 1.0, 1.1, 1.2, 1.3 are supported by the current implementation.
Leaving it initialized to zero is equivalent to `VK_API_VERSION_1_0`.
*/
uint32_t vulkanApiVersion;
#if VMA_EXTERNAL_MEMORY
/** \brief Either null or a pointer to an array of external memory handle types for each Vulkan memory type.
If not NULL, it must be a pointer to an array of `VkPhysicalDeviceMemoryProperties::memoryTypeCount`
elements, defining external memory handle types of a particular Vulkan memory type,
to be passed using `VkExportMemoryAllocateInfoKHR`.
Any of the elements may be equal to 0, which means not to use `VkExportMemoryAllocateInfoKHR` on this memory type.
This is also the default in case of `pTypeExternalMemoryHandleTypes` = NULL.
*/
const VkExternalMemoryHandleTypeFlagsKHR* VMA_NULLABLE VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryTypeCount") pTypeExternalMemoryHandleTypes;
#endif // #if VMA_EXTERNAL_MEMORY
} VmaAllocatorCreateInfo;
/// Information about existing #VmaAllocator object.
typedef struct VmaAllocatorInfo
{
/** \brief Handle to Vulkan instance object.
This is the same value as has been passed through VmaAllocatorCreateInfo::instance.
*/
VkInstance VMA_NOT_NULL instance;
/** \brief Handle to Vulkan physical device object.
This is the same value as has been passed through VmaAllocatorCreateInfo::physicalDevice.
*/
VkPhysicalDevice VMA_NOT_NULL physicalDevice;
/** \brief Handle to Vulkan device object.
This is the same value as has been passed through VmaAllocatorCreateInfo::device.
*/
VkDevice VMA_NOT_NULL device;
} VmaAllocatorInfo;
/** @} */
/**
\addtogroup group_stats
@{
*/
/** \brief Calculated statistics of memory usage e.g. in a specific memory type, heap, custom pool, or total.
These are fast to calculate.
See functions: vmaGetHeapBudgets(), vmaGetPoolStatistics().
*/
typedef struct VmaStatistics
{
/** \brief Number of `VkDeviceMemory` objects - Vulkan memory blocks allocated.
*/
uint32_t blockCount;
/** \brief Number of #VmaAllocation objects allocated.
Dedicated allocations have their own blocks, so each one adds 1 to `allocationCount` as well as `blockCount`.
*/
uint32_t allocationCount;
/** \brief Number of bytes allocated in `VkDeviceMemory` blocks.
\note To avoid confusion, please be aware that what Vulkan calls an "allocation" - a whole `VkDeviceMemory` object
(e.g. as in `VkPhysicalDeviceLimits::maxMemoryAllocationCount`) is called a "block" in VMA, while VMA calls
"allocation" a #VmaAllocation object that represents a memory region sub-allocated from such block, usually for a single buffer or image.
*/
VkDeviceSize blockBytes;
/** \brief Total number of bytes occupied by all #VmaAllocation objects.
Always less than or equal to `blockBytes`.
Difference `(blockBytes - allocationBytes)` is the amount of memory allocated from Vulkan
but unused by any #VmaAllocation.
*/
VkDeviceSize allocationBytes;
} VmaStatistics;
/** \brief More detailed statistics than #VmaStatistics.
These are slower to calculate. Use for debugging purposes.
See functions: vmaCalculateStatistics(), vmaCalculatePoolStatistics().
The previous version of the statistics API provided averages, but they have been removed
because they can be easily calculated as:
\code
VkDeviceSize allocationSizeAvg = detailedStats.statistics.allocationBytes / detailedStats.statistics.allocationCount;
VkDeviceSize unusedBytes = detailedStats.statistics.blockBytes - detailedStats.statistics.allocationBytes;
VkDeviceSize unusedRangeSizeAvg = unusedBytes / detailedStats.unusedRangeCount;
\endcode
*/
typedef struct VmaDetailedStatistics
{
/// Basic statistics.
VmaStatistics statistics;
/// Number of free ranges of memory between allocations.
uint32_t unusedRangeCount;
/// Smallest allocation size. `VK_WHOLE_SIZE` if there are 0 allocations.
VkDeviceSize allocationSizeMin;
/// Largest allocation size. 0 if there are 0 allocations.
VkDeviceSize allocationSizeMax;
/// Smallest empty range size. `VK_WHOLE_SIZE` if there are 0 empty ranges.
VkDeviceSize unusedRangeSizeMin;
/// Largest empty range size. 0 if there are 0 empty ranges.
VkDeviceSize unusedRangeSizeMax;
} VmaDetailedStatistics;
/** \brief General statistics from current state of the Allocator -
total memory usage across all memory heaps and types.
These are slower to calculate. Use for debugging purposes.
See function vmaCalculateStatistics().
*/
typedef struct VmaTotalStatistics
{
VmaDetailedStatistics memoryType[VK_MAX_MEMORY_TYPES];
VmaDetailedStatistics memoryHeap[VK_MAX_MEMORY_HEAPS];
VmaDetailedStatistics total;
} VmaTotalStatistics;
/** \brief Statistics of current memory usage and available budget for a specific memory heap.
These are fast to calculate.
See function vmaGetHeapBudgets().
*/
typedef struct VmaBudget
{
/** \brief Statistics fetched from the library.
*/
VmaStatistics statistics;
/** \brief Estimated current memory usage of the program, in bytes.
Fetched from system using VK_EXT_memory_budget extension if enabled.
It might be different than `statistics.blockBytes` (usually higher) due to additional implicit objects
also occupying the memory, like swapchain, pipelines, descriptor heaps, command buffers, or
`VkDeviceMemory` blocks allocated outside of this library, if any.
*/
VkDeviceSize usage;
/** \brief Estimated amount of memory available to the program, in bytes.
Fetched from system using VK_EXT_memory_budget extension if enabled.
It might be different (most probably smaller) than `VkMemoryHeap::size[heapIndex]` due to factors
external to the program, decided by the operating system.
Difference `budget - usage` is the amount of additional memory that can probably
be allocated without problems. Exceeding the budget may result in various problems.
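A sketch of checking the budget before a large allocation, with `heapIndex` and
`newAllocSize` being hypothetical values:
\code
VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
vmaGetHeapBudgets(allocator, budgets);
if(budgets[heapIndex].usage + newAllocSize <= budgets[heapIndex].budget)
{
    // Probably safe to allocate from this heap.
}
\endcode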
*/
VkDeviceSize budget;
} VmaBudget;
/** @} */
/**
\addtogroup group_alloc
@{
*/
/** \brief Parameters of new #VmaAllocation.
To be used with functions like vmaCreateBuffer(), vmaCreateImage(), and many others.
*/
typedef struct VmaAllocationCreateInfo
{
/// Use #VmaAllocationCreateFlagBits enum.
VmaAllocationCreateFlags flags;
/** \brief Intended usage of memory.
You can leave #VMA_MEMORY_USAGE_UNKNOWN if you specify memory requirements in another way. \n
If `pool` is not null, this member is ignored.
*/
VmaMemoryUsage usage;
/** \brief Flags that must be set in a memory type chosen for an allocation.
Leave 0 if you specify memory requirements in another way. \n
If `pool` is not null, this member is ignored.*/
VkMemoryPropertyFlags requiredFlags;
/** \brief Flags that preferably should be set in a memory type chosen for an allocation.
Set to 0 if no additional flags are preferred. \n
If `pool` is not null, this member is ignored. */
VkMemoryPropertyFlags preferredFlags;
/** \brief Bitmask containing one bit set for every memory type acceptable for this allocation.
Value 0 is equivalent to `UINT32_MAX` - it means any memory type is accepted if
it meets other requirements specified by this structure, with no further
restrictions on memory type index. \n
If `pool` is not null, this member is ignored.
*/
uint32_t memoryTypeBits;
/** \brief Pool that this allocation should be created in.
Leave `VK_NULL_HANDLE` to allocate from default pool. If not null, members:
`usage`, `requiredFlags`, `preferredFlags`, `memoryTypeBits` are ignored.
*/
VmaPool VMA_NULLABLE pool;
/** \brief Custom general-purpose pointer that will be stored in #VmaAllocation, can be read as VmaAllocationInfo::pUserData and changed using vmaSetAllocationUserData().
If #VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT is used, it must be either
null or a pointer to a null-terminated string. The string will then be copied to an
internal buffer, so it doesn't need to remain valid after the allocation call.
*/
void* VMA_NULLABLE pUserData;
/** \brief A floating-point value between 0 and 1, indicating the priority of the allocation relative to other memory allocations.
It is used only when #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT flag was used during creation of the #VmaAllocator object
and this allocation ends up as dedicated or is explicitly forced as dedicated using #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
Otherwise, it has the priority of a memory block where it is placed and this variable is ignored.
*/
float priority;
} VmaAllocationCreateInfo;
/// Describes parameters of a created #VmaPool.
typedef struct VmaPoolCreateInfo
{
/** \brief Vulkan memory type index to allocate this pool from.
*/
uint32_t memoryTypeIndex;
/** \brief Use combination of #VmaPoolCreateFlagBits.
*/
VmaPoolCreateFlags flags;
/** \brief Size of a single `VkDeviceMemory` block to be allocated as part of this pool, in bytes. Optional.
Specify nonzero to set an explicit, constant size of memory blocks used by this
pool.
Leave 0 to use the default and let the library manage block sizes automatically.
Sizes of particular blocks may vary.
In this case, the pool will also support dedicated allocations.
*/
VkDeviceSize blockSize;
/** \brief Minimum number of blocks to be always allocated in this pool, even if they stay empty.
Set to 0 to have no preallocated blocks and allow the pool to be completely empty.
*/
size_t minBlockCount;
/** \brief Maximum number of blocks that can be allocated in this pool. Optional.
Set to 0 to use default, which is `SIZE_MAX`, which means no limit.
Set to the same value as VmaPoolCreateInfo::minBlockCount to have a fixed amount of memory allocated
throughout the whole lifetime of this pool.
*/
size_t maxBlockCount;
/** \brief A floating-point value between 0 and 1, indicating the priority of the allocations in this pool relative to other memory allocations.
It is used only when #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT flag was used during creation of the #VmaAllocator object.
Otherwise, this variable is ignored.
*/
float priority;
/** \brief Additional minimum alignment to be used for all allocations created from this pool. Can be 0.
Leave 0 (default) not to impose any additional alignment. If not 0, it must be a power of two.
It can be useful in cases where the alignment returned by Vulkan functions like `vkGetBufferMemoryRequirements` is not enough,
e.g. when doing interop with OpenGL.
*/
VkDeviceSize minAllocationAlignment;
/** \brief Additional `pNext` chain to be attached to `VkMemoryAllocateInfo` used for every allocation made by this pool. Optional.
Optional, can be null. If not null, it must point to a `pNext` chain of structures that can be attached to `VkMemoryAllocateInfo`.
It can be useful for special needs such as adding `VkExportMemoryAllocateInfoKHR`.
Structures pointed by this member must remain alive and unchanged for the whole lifetime of the custom pool.
Please note that some structures, e.g. `VkMemoryPriorityAllocateInfoEXT`, `VkMemoryDedicatedAllocateInfoKHR`,
can be attached automatically by this library when using other, more convenient features of it.
*/
void* VMA_NULLABLE VMA_EXTENDS_VK_STRUCT(VkMemoryAllocateInfo) pMemoryAllocateNext;
} VmaPoolCreateInfo;
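/*
A minimal sketch of creating a custom pool, assuming `allocator` is a valid #VmaAllocator and
`memTypeIndex` was found e.g. using vmaFindMemoryTypeIndexForBufferInfo():

VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
poolCreateInfo.blockSize = 128ull * 1024 * 1024; // Example: fixed 128 MiB blocks.
poolCreateInfo.minBlockCount = 1;

VmaPool pool;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
*/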
/** @} */
/**
\addtogroup group_alloc
@{
*/
/// Parameters of #VmaAllocation objects that can be retrieved using function vmaGetAllocationInfo().
typedef struct VmaAllocationInfo
{
/** \brief Memory type index that this allocation was allocated from.
It never changes.
*/
uint32_t memoryType;
/** \brief Handle to Vulkan memory object.
Same memory object can be shared by multiple allocations.
It can change after the allocation is moved during \ref defragmentation.
*/
VkDeviceMemory VMA_NULLABLE_NON_DISPATCHABLE deviceMemory;
/** \brief Offset in `VkDeviceMemory` object to the beginning of this allocation, in bytes. `(deviceMemory, offset)` pair is unique to this allocation.
You usually don't need to use this offset. If you create a buffer or an image together with the allocation using e.g. function
vmaCreateBuffer(), vmaCreateImage(), functions that operate on these resources refer to the beginning of the buffer or image,
not the entire device memory block. Functions like vmaMapMemory(), vmaBindBufferMemory() also refer to the beginning of the allocation
and apply this offset automatically.
It can change after the allocation is moved during \ref defragmentation.
*/
VkDeviceSize offset;
/** \brief Size of this allocation, in bytes.
It never changes.
\note Allocation size returned in this variable may be greater than the size
requested for the resource e.g. as `VkBufferCreateInfo::size`. The whole size of the
allocation is accessible for operations on memory e.g. using a pointer after
mapping with vmaMapMemory(), but operations on the resource e.g. using
`vkCmdCopyBuffer` must be limited to the size of the resource.
*/
VkDeviceSize size;
/** \brief Pointer to the beginning of this allocation as mapped data.
If the allocation hasn't been mapped using vmaMapMemory() and hasn't been
created with #VMA_ALLOCATION_CREATE_MAPPED_BIT flag, this value is null.
It can change after call to vmaMapMemory(), vmaUnmapMemory().
It can also change after the allocation is moved during \ref defragmentation.
*/
void* VMA_NULLABLE pMappedData;
/** \brief Custom general-purpose pointer that was passed as VmaAllocationCreateInfo::pUserData or set using vmaSetAllocationUserData().
It can change after call to vmaSetAllocationUserData() for this allocation.
*/
void* VMA_NULLABLE pUserData;
/** \brief Custom allocation name that was set with vmaSetAllocationName().
It can change after call to vmaSetAllocationName() for this allocation.
Another way to set a custom name is to pass it in VmaAllocationCreateInfo::pUserData with
additional flag #VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT set [DEPRECATED].
*/
const char* VMA_NULLABLE pName;
} VmaAllocationInfo;
/** Callback function called during vmaBeginDefragmentation() to check a custom criterion for ending the current defragmentation pass.
It should return true if the defragmentation needs to stop the current pass.
*/
typedef VkBool32 (VKAPI_PTR* PFN_vmaCheckDefragmentationBreakFunction)(void* VMA_NULLABLE pUserData);
/** \brief Parameters for defragmentation.
To be used with function vmaBeginDefragmentation().
*/
typedef struct VmaDefragmentationInfo
{
/// \brief Use combination of #VmaDefragmentationFlagBits.
VmaDefragmentationFlags flags;
/** \brief Custom pool to be defragmented.
If null, then default pools will undergo the defragmentation process.
*/
VmaPool VMA_NULLABLE pool;
/** \brief Maximum number of bytes that can be copied during a single pass, while moving allocations to different places.
`0` means no limit.
*/
VkDeviceSize maxBytesPerPass;
/** \brief Maximum number of allocations that can be moved during a single pass to a different place.
`0` means no limit.
*/
uint32_t maxAllocationsPerPass;
/** \brief Optional custom callback for stopping vmaBeginDefragmentation().
It has to return true to break the current defragmentation pass.
*/
PFN_vmaCheckDefragmentationBreakFunction VMA_NULLABLE pfnBreakCallback;
/// \brief Optional data to pass to custom callback for stopping pass of defragmentation.
void* VMA_NULLABLE pBreakCallbackUserData;
} VmaDefragmentationInfo;
/// Single move of an allocation to be done for defragmentation.
typedef struct VmaDefragmentationMove
{
/// Operation to be performed on the allocation by vmaEndDefragmentationPass(). Default value is #VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY. You can modify it.
VmaDefragmentationMoveOperation operation;
/// Allocation that should be moved.
VmaAllocation VMA_NOT_NULL srcAllocation;
/** \brief Temporary allocation pointing to destination memory that will replace `srcAllocation`.
\warning Do not store this allocation in your data structures! It exists only temporarily, for the duration of the defragmentation pass,
to be used for binding new buffer/image to the destination memory using e.g. vmaBindBufferMemory().
vmaEndDefragmentationPass() will destroy it and make `srcAllocation` point to this memory.
*/
VmaAllocation VMA_NOT_NULL dstTmpAllocation;
} VmaDefragmentationMove;
/** \brief Parameters for incremental defragmentation steps.
To be used with function vmaBeginDefragmentationPass().
*/
typedef struct VmaDefragmentationPassMoveInfo
{
/// Number of elements in the `pMoves` array.
uint32_t moveCount;
/** \brief Array of moves to be performed by the user in the current defragmentation pass.
Pointer to an array of `moveCount` elements, owned by VMA, created in vmaBeginDefragmentationPass(), destroyed in vmaEndDefragmentationPass().
For each element, you should:
1. Create a new buffer/image in the place pointed to by VmaDefragmentationMove::dstTmpAllocation.
2. Copy data from the VmaDefragmentationMove::srcAllocation e.g. using `vkCmdCopyBuffer`, `vkCmdCopyImage`.
3. Make sure these commands finished executing on the GPU.
4. Destroy the old buffer/image.
Only then can you finish the defragmentation pass by calling vmaEndDefragmentationPass().
After this call, the allocation will point to the new place in memory.
Alternatively, if you cannot move specific allocation, you can set VmaDefragmentationMove::operation to #VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE.
Alternatively, if you decide you want to completely remove the allocation:
1. Destroy its buffer/image.
2. Set VmaDefragmentationMove::operation to #VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY.
Then, after vmaEndDefragmentationPass() the allocation will be freed.
*/
VmaDefragmentationMove* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(moveCount) pMoves;
} VmaDefragmentationPassMoveInfo;
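/*
A sketch of a full defragmentation loop following the steps above, assuming `allocator` and
`myPool` are valid (error handling omitted):

VmaDefragmentationInfo defragInfo = {};
defragInfo.pool = myPool; // Or leave null to defragment default pools.

VmaDefragmentationContext defragCtx;
vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);

for(;;)
{
    VmaDefragmentationPassMoveInfo pass;
    if(vmaBeginDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
        break; // No more moves to perform.
    for(uint32_t i = 0; i < pass.moveCount; ++i)
    {
        // Recreate the buffer/image at pass.pMoves[i].dstTmpAllocation,
        // copy the data, wait for the copy to finish, destroy the old one...
    }
    if(vmaEndDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
        break; // Defragmentation complete.
}

vmaEndDefragmentation(allocator, defragCtx, NULL);
*/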
/// Statistics returned for defragmentation process in function vmaEndDefragmentation().
typedef struct VmaDefragmentationStats
{
/// Total number of bytes that have been copied while moving allocations to different places.
VkDeviceSize bytesMoved;
/// Total number of bytes that have been released to the system by freeing empty `VkDeviceMemory` objects.
VkDeviceSize bytesFreed;
/// Number of allocations that have been moved to different places.
uint32_t allocationsMoved;
/// Number of empty `VkDeviceMemory` objects that have been released to the system.
uint32_t deviceMemoryBlocksFreed;
} VmaDefragmentationStats;
/** @} */
/**
\addtogroup group_virtual
@{
*/
/// Parameters of created #VmaVirtualBlock object to be passed to vmaCreateVirtualBlock().
typedef struct VmaVirtualBlockCreateInfo
{
/** \brief Total size of the virtual block.
Sizes can be expressed in bytes or any units you want as long as you are consistent in using them.
For example, if you allocate from some array of structures, 1 can mean a single instance of an entire structure.
*/
VkDeviceSize size;
/** \brief Use combination of #VmaVirtualBlockCreateFlagBits.
*/
VmaVirtualBlockCreateFlags flags;
/** \brief Custom CPU memory allocation callbacks. Optional.
Optional, can be null. When specified, they will be used for all CPU-side memory allocations.
*/
const VkAllocationCallbacks* VMA_NULLABLE pAllocationCallbacks;
} VmaVirtualBlockCreateInfo;
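/*
A minimal sketch, assuming the units used here are bytes:

VmaVirtualBlockCreateInfo blockCreateInfo = {};
blockCreateInfo.size = 1048576; // 1 MiB.

VmaVirtualBlock block;
VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);
*/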
/// Parameters of created virtual allocation to be passed to vmaVirtualAllocate().
typedef struct VmaVirtualAllocationCreateInfo
{
/** \brief Size of the allocation.
Cannot be zero.
*/
VkDeviceSize size;
/** \brief Required alignment of the allocation. Optional.
Must be a power of two. The special value 0 has the same meaning as 1 - no special alignment is required, so the allocation can start at any offset.
*/
VkDeviceSize alignment;
/** \brief Use combination of #VmaVirtualAllocationCreateFlagBits.
*/
VmaVirtualAllocationCreateFlags flags;
/** \brief Custom pointer to be associated with the allocation. Optional.
It can be any value and can be used for user-defined purposes. It can be fetched or changed later.
*/
void* VMA_NULLABLE pUserData;
} VmaVirtualAllocationCreateInfo;
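/*
A sketch of making a virtual allocation from an existing `block`:

VmaVirtualAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.size = 4096;

VmaVirtualAllocation alloc;
VkDeviceSize offset;
VkResult res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
// On success, the range [offset, offset + 4096) within the block is yours.
*/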
/// Parameters of an existing virtual allocation, returned by vmaGetVirtualAllocationInfo().
typedef struct VmaVirtualAllocationInfo
{
/** \brief Offset of the allocation.
Offset at which the allocation was made.
*/
VkDeviceSize offset;
/** \brief Size of the allocation.
Same value as passed in VmaVirtualAllocationCreateInfo::size.
*/
VkDeviceSize size;
/** \brief Custom pointer associated with the allocation.
Same value as passed in VmaVirtualAllocationCreateInfo::pUserData or to vmaSetVirtualAllocationUserData().
*/
void* VMA_NULLABLE pUserData;
} VmaVirtualAllocationInfo;
/** @} */
#endif // _VMA_DATA_TYPES_DECLARATIONS
#ifndef _VMA_FUNCTION_HEADERS
/**
\addtogroup group_init
@{
*/
/// Creates #VmaAllocator object.
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAllocator(
const VmaAllocatorCreateInfo* VMA_NOT_NULL pCreateInfo,
VmaAllocator VMA_NULLABLE* VMA_NOT_NULL pAllocator);
/// Destroys allocator object.
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyAllocator(
VmaAllocator VMA_NULLABLE allocator);
/** \brief Returns information about existing #VmaAllocator object - handle to Vulkan device etc.
It might be useful if you want to keep just the #VmaAllocator handle and fetch the other required handles, like
`VkPhysicalDevice` or `VkDevice`, using this function every time they are needed.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocatorInfo(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocatorInfo* VMA_NOT_NULL pAllocatorInfo);
/**
PhysicalDeviceProperties are fetched from physicalDevice by the allocator.
You can access them here, without fetching them again on your own.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaGetPhysicalDeviceProperties(
VmaAllocator VMA_NOT_NULL allocator,
const VkPhysicalDeviceProperties* VMA_NULLABLE* VMA_NOT_NULL ppPhysicalDeviceProperties);
/**
PhysicalDeviceMemoryProperties are fetched from physicalDevice by the allocator.
You can access them here, without fetching them again on your own.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryProperties(
VmaAllocator VMA_NOT_NULL allocator,
const VkPhysicalDeviceMemoryProperties* VMA_NULLABLE* VMA_NOT_NULL ppPhysicalDeviceMemoryProperties);
/**
\brief Given Memory Type Index, returns Property Flags of this memory type.
This is just a convenience function. Same information can be obtained using
vmaGetMemoryProperties().
*/
VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryTypeProperties(
VmaAllocator VMA_NOT_NULL allocator,
uint32_t memoryTypeIndex,
VkMemoryPropertyFlags* VMA_NOT_NULL pFlags);
/** \brief Sets index of the current frame.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaSetCurrentFrameIndex(
VmaAllocator VMA_NOT_NULL allocator,
uint32_t frameIndex);
/** @} */
/**
\addtogroup group_stats
@{
*/
/** \brief Retrieves statistics from current state of the Allocator.
This function is called "calculate" not "get" because it has to traverse all
internal data structures, so it may be quite slow. Use it for debugging purposes.
For faster but more brief statistics suitable to be called every frame or every allocation,
use vmaGetHeapBudgets().
Note that when using allocator from multiple threads, returned information may immediately
become outdated.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaCalculateStatistics(
VmaAllocator VMA_NOT_NULL allocator,
VmaTotalStatistics* VMA_NOT_NULL pStats);
/** \brief Retrieves information about current memory usage and budget for all memory heaps.
\param allocator
\param[out] pBudgets Must point to an array with at least as many elements as there are memory heaps in the physical device used.
This function is called "get" not "calculate" because it is very fast, suitable to be called
every frame or every allocation. For more detailed statistics use vmaCalculateStatistics().
Note that when using allocator from multiple threads, returned information may immediately
become outdated.
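For example (a minimal sketch; heap index 0 is arbitrary, `VK_MAX_MEMORY_HEAPS` is the standard Vulkan upper bound on heap count, and the `usage`/`budget` members are assumed from #VmaBudget):
\code
VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
vmaGetHeapBudgets(allocator, budgets);
// Compare current usage of heap 0 against its budget before making large allocations:
if(budgets[0].usage >= budgets[0].budget)
{
    // Close to or over budget - consider freeing or streaming out resources.
}
\endcode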
*/
VMA_CALL_PRE void VMA_CALL_POST vmaGetHeapBudgets(
VmaAllocator VMA_NOT_NULL allocator,
VmaBudget* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryHeapCount") pBudgets);
/** @} */
/**
\addtogroup group_alloc
@{
*/
/**
\brief Helps to find memoryTypeIndex, given memoryTypeBits and VmaAllocationCreateInfo.
This algorithm tries to find a memory type that:
- Is allowed by memoryTypeBits.
- Contains all the flags from pAllocationCreateInfo->requiredFlags.
- Matches intended usage.
- Has as many flags from pAllocationCreateInfo->preferredFlags as possible.
\return Returns VK_ERROR_FEATURE_NOT_PRESENT if not found. Receiving such result
from this function or any other allocating function probably means that your
device doesn't support any memory type with requested features for the specific
type of resource you want to use it for. Please check parameters of your
resource, like image layout (OPTIMAL versus LINEAR) or mip level count.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndex(
VmaAllocator VMA_NOT_NULL allocator,
uint32_t memoryTypeBits,
const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
/**
\brief Helps to find memoryTypeIndex, given VkBufferCreateInfo and VmaAllocationCreateInfo.
It can be useful e.g. to determine the value to be used as VmaPoolCreateInfo::memoryTypeIndex.
It internally creates a temporary, dummy buffer that never has memory bound.
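For example (a sketch; the buffer parameters below are hypothetical placeholders representative of the buffers you intend to create):
\code
VkBufferCreateInfo sampleBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
sampleBufCreateInfo.size = 0x10000; // Arbitrary, representative size.
sampleBufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo sampleAllocCreateInfo = {};
sampleAllocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

uint32_t memTypeIndex;
VkResult res = vmaFindMemoryTypeIndexForBufferInfo(allocator,
    &sampleBufCreateInfo, &sampleAllocCreateInfo, &memTypeIndex);
// On success, memTypeIndex can be used e.g. as VmaPoolCreateInfo::memoryTypeIndex.
\endcode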
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForBufferInfo(
VmaAllocator VMA_NOT_NULL allocator,
const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
/**
\brief Helps to find memoryTypeIndex, given VkImageCreateInfo and VmaAllocationCreateInfo.
It can be useful e.g. to determine the value to be used as VmaPoolCreateInfo::memoryTypeIndex.
It internally creates a temporary, dummy image that never has memory bound.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForImageInfo(
VmaAllocator VMA_NOT_NULL allocator,
const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
/** \brief Allocates Vulkan device memory and creates #VmaPool object.
\param allocator Allocator object.
\param pCreateInfo Parameters of pool to create.
\param[out] pPool Handle to created pool.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreatePool(
VmaAllocator VMA_NOT_NULL allocator,
const VmaPoolCreateInfo* VMA_NOT_NULL pCreateInfo,
VmaPool VMA_NULLABLE* VMA_NOT_NULL pPool);
/** \brief Destroys #VmaPool object and frees Vulkan device memory.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyPool(
VmaAllocator VMA_NOT_NULL allocator,
VmaPool VMA_NULLABLE pool);
/** @} */
/**
\addtogroup group_stats
@{
*/
/** \brief Retrieves statistics of existing #VmaPool object.
\param allocator Allocator object.
\param pool Pool object.
\param[out] pPoolStats Statistics of specified pool.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolStatistics(
VmaAllocator VMA_NOT_NULL allocator,
VmaPool VMA_NOT_NULL pool,
VmaStatistics* VMA_NOT_NULL pPoolStats);
/** \brief Retrieves detailed statistics of existing #VmaPool object.
\param allocator Allocator object.
\param pool Pool object.
\param[out] pPoolStats Statistics of specified pool.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaCalculatePoolStatistics(
VmaAllocator VMA_NOT_NULL allocator,
VmaPool VMA_NOT_NULL pool,
VmaDetailedStatistics* VMA_NOT_NULL pPoolStats);
/** @} */
/**
\addtogroup group_alloc
@{
*/
/** \brief Checks magic number in margins around all allocations in given memory pool in search for corruptions.
Corruption detection is enabled only when `VMA_DEBUG_DETECT_CORRUPTION` macro is defined to nonzero,
`VMA_DEBUG_MARGIN` is defined to nonzero and the pool is created in memory type that is
`HOST_VISIBLE` and `HOST_COHERENT`. For more information, see [Corruption detection](@ref debugging_memory_usage_corruption_detection).
Possible return values:
- `VK_ERROR_FEATURE_NOT_PRESENT` - corruption detection is not enabled for specified pool.
- `VK_SUCCESS` - corruption detection has been performed and succeeded.
- `VK_ERROR_UNKNOWN` - corruption detection has been performed and found memory corruptions around one of the allocations.
`VMA_ASSERT` is also fired in that case.
- Other value: Error returned by Vulkan, e.g. memory mapping failure.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckPoolCorruption(
VmaAllocator VMA_NOT_NULL allocator,
VmaPool VMA_NOT_NULL pool);
/** \brief Retrieves name of a custom pool.
After the call `ppName` is either null or points to an internally-owned null-terminated string
containing name of the pool that was previously set. The pointer becomes invalid when the pool is
destroyed or its name is changed using vmaSetPoolName().
*/
VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolName(
VmaAllocator VMA_NOT_NULL allocator,
VmaPool VMA_NOT_NULL pool,
const char* VMA_NULLABLE* VMA_NOT_NULL ppName);
/** \brief Sets name of a custom pool.
`pName` can be either null or pointer to a null-terminated string with new name for the pool.
The function makes an internal copy of the string, so it can be changed or freed immediately after this call.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaSetPoolName(
VmaAllocator VMA_NOT_NULL allocator,
VmaPool VMA_NOT_NULL pool,
const char* VMA_NULLABLE pName);
/** \brief General purpose memory allocation.
\param allocator
\param pVkMemoryRequirements
\param pCreateInfo
\param[out] pAllocation Handle to allocated memory.
\param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
You should free the memory using vmaFreeMemory() or vmaFreeMemoryPages().
It is recommended to use vmaAllocateMemoryForBuffer(), vmaAllocateMemoryForImage(),
vmaCreateBuffer(), vmaCreateImage() instead whenever possible.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemory(
VmaAllocator VMA_NOT_NULL allocator,
const VkMemoryRequirements* VMA_NOT_NULL pVkMemoryRequirements,
const VmaAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
/** \brief General purpose memory allocation for multiple allocation objects at once.
\param allocator Allocator object.
\param pVkMemoryRequirements Memory requirements for each allocation.
\param pCreateInfo Creation parameters for each allocation.
\param allocationCount Number of allocations to make.
\param[out] pAllocations Pointer to array that will be filled with handles to created allocations.
\param[out] pAllocationInfo Optional. Pointer to array that will be filled with parameters of created allocations.
You should free the memory using vmaFreeMemory() or vmaFreeMemoryPages().
Word "pages" is just a suggestion to use this function to allocate pieces of memory needed for sparse binding.
It is just a general purpose allocation function able to make multiple allocations at once.
It may be internally optimized to be more efficient than calling vmaAllocateMemory() `allocationCount` times.
All allocations are made using the same parameters. All of them are created out of the same memory pool and type.
If any allocation fails, all allocations already made within this function call are also freed, so that when the
returned result is not `VK_SUCCESS`, the `pAllocations` array is always entirely filled with `VK_NULL_HANDLE`.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryPages(
VmaAllocator VMA_NOT_NULL allocator,
const VkMemoryRequirements* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pVkMemoryRequirements,
const VmaAllocationCreateInfo* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pCreateInfo,
size_t allocationCount,
VmaAllocation VMA_NULLABLE* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations,
VmaAllocationInfo* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) pAllocationInfo);
/** \brief Allocates memory suitable for given `VkBuffer`.
\param allocator
\param buffer
\param pCreateInfo
\param[out] pAllocation Handle to allocated memory.
\param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
It only creates #VmaAllocation. To bind the memory to the buffer, use vmaBindBufferMemory().
This is a special-purpose function. In most cases you should use vmaCreateBuffer().
You must free the allocation using vmaFreeMemory() when no longer needed.
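For example (a sketch; `buf` is a hypothetical `VkBuffer` created earlier with `vkCreateBuffer()`):
\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

VmaAllocation alloc;
VkResult res = vmaAllocateMemoryForBuffer(allocator, buf, &allocCreateInfo, &alloc, nullptr);
if(res == VK_SUCCESS)
    res = vmaBindBufferMemory(allocator, alloc, buf);
\endcode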
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForBuffer(
VmaAllocator VMA_NOT_NULL allocator,
VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer,
const VmaAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
/** \brief Allocates memory suitable for given `VkImage`.
\param allocator
\param image
\param pCreateInfo
\param[out] pAllocation Handle to allocated memory.
\param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
It only creates #VmaAllocation. To bind the memory to the image, use vmaBindImageMemory().
This is a special-purpose function. In most cases you should use vmaCreateImage().
You must free the allocation using vmaFreeMemory() when no longer needed.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForImage(
VmaAllocator VMA_NOT_NULL allocator,
VkImage VMA_NOT_NULL_NON_DISPATCHABLE image,
const VmaAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
/** \brief Frees memory previously allocated using vmaAllocateMemory(), vmaAllocateMemoryForBuffer(), or vmaAllocateMemoryForImage().
Passing `VK_NULL_HANDLE` as `allocation` is valid. Such a call is just skipped.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemory(
VmaAllocator VMA_NOT_NULL allocator,
const VmaAllocation VMA_NULLABLE allocation);
/** \brief Frees memory and destroys multiple allocations.
Word "pages" is just a suggestion to use this function to free pieces of memory used for sparse binding.
It is just a general purpose function to free memory and destroy allocations made using e.g. vmaAllocateMemory(),
vmaAllocateMemoryPages() and other functions.
It may be internally optimized to be more efficient than calling vmaFreeMemory() `allocationCount` times.
Allocations in `pAllocations` array can come from any memory pools and types.
Passing `VK_NULL_HANDLE` as elements of `pAllocations` array is valid. Such entries are just skipped.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemoryPages(
VmaAllocator VMA_NOT_NULL allocator,
size_t allocationCount,
const VmaAllocation VMA_NULLABLE* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations);
/** \brief Returns current information about specified allocation.
Current parameters of given allocation are returned in `pAllocationInfo`.
This function doesn't lock any mutex, so it should be quite efficient; still, you should
avoid calling it too often.
You can retrieve the same VmaAllocationInfo structure while creating your resource, from
vmaCreateBuffer() or vmaCreateImage(). You can cache it as long as you are sure its parameters
don't change (e.g. due to defragmentation).
*/
VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VmaAllocationInfo* VMA_NOT_NULL pAllocationInfo);
/** \brief Sets pUserData in given allocation to new value.
The value of pointer `pUserData` is copied to allocation's `pUserData`.
It is opaque, so you can use it however you want - e.g.
as a pointer, an ordinal number, or a handle to your own data.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationUserData(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
void* VMA_NULLABLE pUserData);
/** \brief Sets pName in given allocation to new value.
`pName` must be either null or a pointer to a null-terminated string. The function
makes a local copy of the string and sets it as the allocation's `pName`. The string
passed as pName doesn't need to stay valid for the whole lifetime of the allocation -
you can free it right after this call. The string previously pointed to by the
allocation's `pName` is freed from memory.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationName(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
const char* VMA_NULLABLE pName);
/**
\brief Given an allocation, returns Property Flags of its memory type.
This is just a convenience function. Same information can be obtained using
vmaGetAllocationInfo() + vmaGetMemoryProperties().
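For example (a minimal sketch; `alloc` is a hypothetical existing #VmaAllocation):
\code
VkMemoryPropertyFlags memPropFlags;
vmaGetAllocationMemoryProperties(allocator, alloc, &memPropFlags);
if((memPropFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
{
    // The allocation ended up in mappable memory - vmaMapMemory() can be used on it.
}
\endcode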
*/
VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationMemoryProperties(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VkMemoryPropertyFlags* VMA_NOT_NULL pFlags);
/** \brief Maps memory represented by given allocation and returns pointer to it.
Maps memory represented by given allocation to make it accessible to CPU code.
When succeeded, `*ppData` contains pointer to first byte of this memory.
\warning
If the allocation is part of a bigger `VkDeviceMemory` block, the returned pointer is
correctly offset to the beginning of the region assigned to this particular allocation.
Unlike the result of `vkMapMemory`, it points to the allocation, not to the beginning of the whole block.
You should not add VmaAllocationInfo::offset to it!
Mapping is internally reference-counted and synchronized, so although the raw Vulkan
function `vkMapMemory()` cannot be used to map the same block of `VkDeviceMemory`
multiple times simultaneously, it is safe to call this function on allocations
assigned to the same memory block. Actual Vulkan memory will be mapped on the first
mapping and unmapped on the last unmapping.
If the function succeeded, you must call vmaUnmapMemory() to unmap the
allocation when mapping is no longer needed or before freeing the allocation, at
the latest.
It is also safe to call this function multiple times on the same allocation. You
must call vmaUnmapMemory() the same number of times you called vmaMapMemory().
It is also safe to call this function on an allocation created with the
#VMA_ALLOCATION_CREATE_MAPPED_BIT flag. Its memory stays mapped all the time.
You must still call vmaUnmapMemory() the same number of times you called
vmaMapMemory(). You must not call vmaUnmapMemory() an additional time to free the
"0-th" mapping made automatically due to the #VMA_ALLOCATION_CREATE_MAPPED_BIT flag.
This function fails when used on an allocation made in a memory type that is not
`HOST_VISIBLE`.
This function doesn't automatically flush or invalidate caches.
If the allocation is made from a memory type that is not `HOST_COHERENT`,
you also need to use vmaInvalidateAllocation() / vmaFlushAllocation(), as required by the Vulkan specification.
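Typical usage (a minimal sketch; `alloc` is a hypothetical allocation in `HOST_VISIBLE` memory and `myData` is hypothetical CPU-side data):
\code
void* mappedData;
VkResult res = vmaMapMemory(allocator, alloc, &mappedData);
if(res == VK_SUCCESS)
{
    memcpy(mappedData, &myData, sizeof(myData));
    // Needed only for memory types that are not HOST_COHERENT:
    vmaFlushAllocation(allocator, alloc, 0, VK_WHOLE_SIZE);
    vmaUnmapMemory(allocator, alloc);
}
\endcode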
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaMapMemory(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
void* VMA_NULLABLE* VMA_NOT_NULL ppData);
/** \brief Unmaps memory represented by given allocation, mapped previously using vmaMapMemory().
For details, see description of vmaMapMemory().
This function doesn't automatically flush or invalidate caches.
If the allocation is made from a memory type that is not `HOST_COHERENT`,
you also need to use vmaInvalidateAllocation() / vmaFlushAllocation(), as required by Vulkan specification.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaUnmapMemory(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation);
/** \brief Flushes memory of given allocation.
Calls `vkFlushMappedMemoryRanges()` for memory associated with given range of given allocation.
It needs to be called after writing to a mapped memory for memory types that are not `HOST_COHERENT`.
Unmap operation doesn't do that automatically.
- `offset` must be relative to the beginning of allocation.
- `size` can be `VK_WHOLE_SIZE`. It means all memory from `offset` to the end of given allocation.
- `offset` and `size` don't have to be aligned.
They are internally rounded down/up to a multiple of `nonCoherentAtomSize`.
- If `size` is 0, this call is ignored.
- If the memory type that the `allocation` belongs to is not `HOST_VISIBLE` or it is `HOST_COHERENT`,
this call is ignored.
Warning! `offset` and `size` are relative to the contents of given `allocation`.
If you mean the whole allocation, you can pass 0 and `VK_WHOLE_SIZE`, respectively.
Do not pass the allocation's offset as `offset`!
This function returns the `VkResult` from `vkFlushMappedMemoryRanges` if it is
called, otherwise `VK_SUCCESS`.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocation(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VkDeviceSize offset,
VkDeviceSize size);
/** \brief Invalidates memory of given allocation.
Calls `vkInvalidateMappedMemoryRanges()` for memory associated with given range of given allocation.
It needs to be called before reading from a mapped memory for memory types that are not `HOST_COHERENT`.
Map operation doesn't do that automatically.
- `offset` must be relative to the beginning of allocation.
- `size` can be `VK_WHOLE_SIZE`. It means all memory from `offset` to the end of given allocation.
- `offset` and `size` don't have to be aligned.
They are internally rounded down/up to a multiple of `nonCoherentAtomSize`.
- If `size` is 0, this call is ignored.
- If the memory type that the `allocation` belongs to is not `HOST_VISIBLE` or it is `HOST_COHERENT`,
this call is ignored.
Warning! `offset` and `size` are relative to the contents of given `allocation`.
If you mean the whole allocation, you can pass 0 and `VK_WHOLE_SIZE`, respectively.
Do not pass the allocation's offset as `offset`!
This function returns the `VkResult` from `vkInvalidateMappedMemoryRanges` if
it is called, otherwise `VK_SUCCESS`.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocation(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VkDeviceSize offset,
VkDeviceSize size);
/** \brief Flushes memory of given set of allocations.
Calls `vkFlushMappedMemoryRanges()` for memory associated with given ranges of given allocations.
For more information, see documentation of vmaFlushAllocation().
\param allocator
\param allocationCount
\param allocations
\param offsets If not null, it must point to an array of offsets of regions to flush, relative to the beginning of respective allocations. Null means all offsets are zero.
\param sizes If not null, it must point to an array of sizes of regions to flush in respective allocations. Null means `VK_WHOLE_SIZE` for all allocations.
This function returns the `VkResult` from `vkFlushMappedMemoryRanges` if it is
called, otherwise `VK_SUCCESS`.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocations(
VmaAllocator VMA_NOT_NULL allocator,
uint32_t allocationCount,
const VmaAllocation VMA_NOT_NULL* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) allocations,
const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) offsets,
const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) sizes);
/** \brief Invalidates memory of given set of allocations.
Calls `vkInvalidateMappedMemoryRanges()` for memory associated with given ranges of given allocations.
For more information, see documentation of vmaInvalidateAllocation().
\param allocator
\param allocationCount
\param allocations
\param offsets If not null, it must point to an array of offsets of regions to invalidate, relative to the beginning of respective allocations. Null means all offsets are zero.
\param sizes If not null, it must point to an array of sizes of regions to invalidate in respective allocations. Null means `VK_WHOLE_SIZE` for all allocations.
This function returns the `VkResult` from `vkInvalidateMappedMemoryRanges` if it is
called, otherwise `VK_SUCCESS`.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocations(
VmaAllocator VMA_NOT_NULL allocator,
uint32_t allocationCount,
const VmaAllocation VMA_NOT_NULL* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) allocations,
const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) offsets,
const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) sizes);
/** \brief Checks magic number in margins around all allocations in given memory types (in both default and custom pools) in search for corruptions.
\param allocator
\param memoryTypeBits Bit mask, where each bit set means that a memory type with that index should be checked.
Corruption detection is enabled only when `VMA_DEBUG_DETECT_CORRUPTION` macro is defined to nonzero,
`VMA_DEBUG_MARGIN` is defined to nonzero and only for memory types that are
`HOST_VISIBLE` and `HOST_COHERENT`. For more information, see [Corruption detection](@ref debugging_memory_usage_corruption_detection).
Possible return values:
- `VK_ERROR_FEATURE_NOT_PRESENT` - corruption detection is not enabled for any of specified memory types.
- `VK_SUCCESS` - corruption detection has been performed and succeeded.
- `VK_ERROR_UNKNOWN` - corruption detection has been performed and found memory corruptions around one of the allocations.
`VMA_ASSERT` is also fired in that case.
- Other value: Error returned by Vulkan, e.g. memory mapping failure.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckCorruption(
VmaAllocator VMA_NOT_NULL allocator,
uint32_t memoryTypeBits);
/** \brief Begins defragmentation process.
\param allocator Allocator object.
\param pInfo Structure filled with parameters of defragmentation.
\param[out] pContext Context object that must be passed to vmaEndDefragmentation() to finish defragmentation.
\returns
- `VK_SUCCESS` if defragmentation can begin.
- `VK_ERROR_FEATURE_NOT_PRESENT` if defragmentation is not supported.
For more information about defragmentation, see documentation chapter:
[Defragmentation](@ref defragmentation).
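For example (a minimal sketch of the overall flow; the per-pass work is shown at vmaBeginDefragmentationPass()):
\code
VmaDefragmentationInfo defragInfo = {};
VmaDefragmentationContext defragCtx;
VkResult res = vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
// Run passes using vmaBeginDefragmentationPass() / vmaEndDefragmentationPass(),
// then finish and optionally fetch statistics:
vmaEndDefragmentation(allocator, defragCtx, nullptr);
\endcode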
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentation(
VmaAllocator VMA_NOT_NULL allocator,
const VmaDefragmentationInfo* VMA_NOT_NULL pInfo,
VmaDefragmentationContext VMA_NULLABLE* VMA_NOT_NULL pContext);
/** \brief Ends defragmentation process.
\param allocator Allocator object.
\param context Context object that has been created by vmaBeginDefragmentation().
\param[out] pStats Optional stats for the defragmentation. Can be null.
Use this function to finish defragmentation started by vmaBeginDefragmentation().
*/
VMA_CALL_PRE void VMA_CALL_POST vmaEndDefragmentation(
VmaAllocator VMA_NOT_NULL allocator,
VmaDefragmentationContext VMA_NOT_NULL context,
VmaDefragmentationStats* VMA_NULLABLE pStats);
/** \brief Starts single defragmentation pass.
\param allocator Allocator object.
\param context Context object that has been created by vmaBeginDefragmentation().
\param[out] pPassInfo Computed information for current pass.
\returns
- `VK_SUCCESS` if no more moves are possible. Then you can omit call to vmaEndDefragmentationPass() and simply end whole defragmentation.
- `VK_INCOMPLETE` if there are pending moves returned in `pPassInfo`. You need to perform them, call vmaEndDefragmentationPass(),
and then preferably try another pass with vmaBeginDefragmentationPass().
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentationPass(
VmaAllocator VMA_NOT_NULL allocator,
VmaDefragmentationContext VMA_NOT_NULL context,
VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo);
/** \brief Ends single defragmentation pass.
\param allocator Allocator object.
\param context Context object that has been created by vmaBeginDefragmentation().
\param pPassInfo Computed information for current pass filled by vmaBeginDefragmentationPass() and possibly modified by you.
Returns `VK_SUCCESS` if no more moves are possible, or `VK_INCOMPLETE` if more moves are possible and another pass should be performed.
Ends incremental defragmentation pass and commits all defragmentation moves from `pPassInfo`.
After this call:
- Allocations at `pPassInfo->pMoves[i].srcAllocation` that had `pPassInfo->pMoves[i].operation ==` #VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY
(which is the default) will be pointing to the new destination place.
- Allocations at `pPassInfo->pMoves[i].srcAllocation` that had `pPassInfo->pMoves[i].operation ==` #VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY
will be freed.
If no more moves are possible you can end whole defragmentation.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaEndDefragmentationPass(
VmaAllocator VMA_NOT_NULL allocator,
VmaDefragmentationContext VMA_NOT_NULL context,
VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo);
/** \brief Binds buffer to allocation.
Binds specified buffer to region of memory represented by specified allocation.
Gets `VkDeviceMemory` handle and offset from the allocation.
If you want to create a buffer, allocate memory for it and bind them together separately,
you should use this function for binding instead of standard `vkBindBufferMemory()`,
because it ensures proper synchronization so that when a `VkDeviceMemory` object is used by multiple
allocations, calls to `vkBind*Memory()` or `vkMapMemory()` won't happen from multiple threads simultaneously
(which is illegal in Vulkan).
It is recommended to use function vmaCreateBuffer() instead of this one.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer);
/** \brief Binds buffer to allocation with additional parameters.
\param allocator
\param allocation
\param allocationLocalOffset Additional offset to be added while binding, relative to the beginning of the `allocation`. Normally it should be 0.
\param buffer
\param pNext A chain of structures to be attached to `VkBindBufferMemoryInfoKHR` structure used internally. Normally it should be null.
This function is similar to vmaBindBufferMemory(), but it provides additional parameters.
If `pNext` is not null, #VmaAllocator object must have been created with #VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT flag
or with VmaAllocatorCreateInfo::vulkanApiVersion `>= VK_API_VERSION_1_1`. Otherwise the call fails.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory2(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VkDeviceSize allocationLocalOffset,
VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer,
const void* VMA_NULLABLE VMA_EXTENDS_VK_STRUCT(VkBindBufferMemoryInfoKHR) pNext);
/** \brief Binds image to allocation.
Binds specified image to region of memory represented by specified allocation.
Gets `VkDeviceMemory` handle and offset from the allocation.
If you want to create an image, allocate memory for it and bind them together separately,
you should use this function for binding instead of standard `vkBindImageMemory()`,
because it ensures proper synchronization so that when a `VkDeviceMemory` object is used by multiple
allocations, calls to `vkBind*Memory()` or `vkMapMemory()` won't happen from multiple threads simultaneously
(which is illegal in Vulkan).
It is recommended to use function vmaCreateImage() instead of this one.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VkImage VMA_NOT_NULL_NON_DISPATCHABLE image);
/** \brief Binds image to allocation with additional parameters.
\param allocator
\param allocation
\param allocationLocalOffset Additional offset to be added while binding, relative to the beginning of the `allocation`. Normally it should be 0.
\param image
\param pNext A chain of structures to be attached to `VkBindImageMemoryInfoKHR` structure used internally. Normally it should be null.
This function is similar to vmaBindImageMemory(), but it provides additional parameters.
If `pNext` is not null, #VmaAllocator object must have been created with #VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT flag
or with VmaAllocatorCreateInfo::vulkanApiVersion `>= VK_API_VERSION_1_1`. Otherwise the call fails.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory2(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VkDeviceSize allocationLocalOffset,
VkImage VMA_NOT_NULL_NON_DISPATCHABLE image,
const void* VMA_NULLABLE VMA_EXTENDS_VK_STRUCT(VkBindImageMemoryInfoKHR) pNext);
/** \brief Creates a new `VkBuffer`, allocates and binds memory for it.
\param allocator
\param pBufferCreateInfo
\param pAllocationCreateInfo
\param[out] pBuffer Buffer that was created.
\param[out] pAllocation Allocation that was created.
\param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
This function automatically:
-# Creates buffer.
-# Allocates appropriate memory for it.
-# Binds the buffer with the memory.
If any of these operations fail, buffer and allocation are not created, the
returned value is a negative error code, and `*pBuffer` and `*pAllocation` are null.
If the function succeeded, you must destroy both buffer and allocation when you
no longer need them using either convenience function vmaDestroyBuffer() or
separately, using `vkDestroyBuffer()` and vmaFreeMemory().
If #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT flag was used,
VK_KHR_dedicated_allocation extension is used internally to query driver whether
it requires or prefers the new buffer to have dedicated allocation. If yes,
and if dedicated allocation is possible
(#VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT is not used), it creates dedicated
allocation for this buffer, just like when using
#VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
\note This function creates a new `VkBuffer`. Sub-allocation of parts of one large buffer,
although recommended as a good practice, is out of scope of this library and could be implemented
by the user as a higher-level logic on top of VMA.
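Typical usage (a minimal sketch; the size, usage flags, and memory requirements are hypothetical placeholders):
\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
// ... use the buffer, then at the end of its lifetime:
vmaDestroyBuffer(allocator, buf, alloc);
\endcode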
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBuffer(
VmaAllocator VMA_NOT_NULL allocator,
const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer,
VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
/** \brief Creates a buffer with additional minimum alignment.
Similar to vmaCreateBuffer() but provides an additional parameter `minAlignment`, which allows you to specify
a custom minimum alignment to be used when placing the buffer inside a larger memory block, as may be needed
e.g. for interop with OpenGL.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBufferWithAlignment(
VmaAllocator VMA_NOT_NULL allocator,
const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
VkDeviceSize minAlignment,
VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer,
VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
/** \brief Creates a new `VkBuffer`, binds already created memory for it.
\param allocator
\param allocation Allocation that provides memory to be used for binding new buffer to it.
\param pBufferCreateInfo
\param[out] pBuffer Buffer that was created.
This function automatically:
-# Creates buffer.
-# Binds the buffer with the supplied memory.
If any of these operations fail, the buffer is not created, the
returned value is a negative error code, and `*pBuffer` is null.
If the function succeeded, you must destroy the buffer when you
no longer need it using `vkDestroyBuffer()`. If you want to also destroy the corresponding
allocation you can use convenience function vmaDestroyBuffer().
\note There is a new version of this function augmented with parameter `allocationLocalOffset` - see vmaCreateAliasingBuffer2().
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer);
/** \brief Creates a new `VkBuffer`, binds already created memory for it.
\param allocator
\param allocation Allocation that provides memory to be used for binding new buffer to it.
\param allocationLocalOffset Additional offset to be added while binding, relative to the beginning of the allocation. Normally it should be 0.
\param pBufferCreateInfo
\param[out] pBuffer Buffer that was created.
This function automatically:
-# Creates buffer.
-# Binds the buffer with the supplied memory.
If any of these operations fail, the buffer is not created, the
returned value is a negative error code, and `*pBuffer` is null.
If the function succeeded, you must destroy the buffer when you
no longer need it using `vkDestroyBuffer()`. If you want to also destroy the corresponding
allocation you can use convenience function vmaDestroyBuffer().
\note This is a new version of the function augmented with parameter `allocationLocalOffset`.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer2(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VkDeviceSize allocationLocalOffset,
const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer);
/** \brief Destroys Vulkan buffer and frees allocated memory.
This is just a convenience function equivalent to:
\code
vkDestroyBuffer(device, buffer, allocationCallbacks);
vmaFreeMemory(allocator, allocation);
\endcode
It is safe to pass null as buffer and/or allocation.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyBuffer(
VmaAllocator VMA_NOT_NULL allocator,
VkBuffer VMA_NULLABLE_NON_DISPATCHABLE buffer,
VmaAllocation VMA_NULLABLE allocation);
/// Function similar to vmaCreateBuffer().
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateImage(
VmaAllocator VMA_NOT_NULL allocator,
const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage,
VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
/// Function similar to vmaCreateAliasingBuffer() but for images.
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage);
/// Function similar to vmaCreateAliasingBuffer2() but for images.
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage2(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VkDeviceSize allocationLocalOffset,
const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage);
/** \brief Destroys Vulkan image and frees allocated memory.
This is just a convenience function equivalent to:
\code
vkDestroyImage(device, image, allocationCallbacks);
vmaFreeMemory(allocator, allocation);
\endcode
It is safe to pass null as image and/or allocation.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyImage(
VmaAllocator VMA_NOT_NULL allocator,
VkImage VMA_NULLABLE_NON_DISPATCHABLE image,
VmaAllocation VMA_NULLABLE allocation);
/** @} */
/**
\addtogroup group_virtual
@{
*/
/** \brief Creates new #VmaVirtualBlock object.
\param pCreateInfo Parameters for creation.
\param[out] pVirtualBlock Returned virtual block object or `VMA_NULL` if creation failed.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateVirtualBlock(
const VmaVirtualBlockCreateInfo* VMA_NOT_NULL pCreateInfo,
VmaVirtualBlock VMA_NULLABLE* VMA_NOT_NULL pVirtualBlock);
/** \brief Destroys #VmaVirtualBlock object.
Please note that you should consciously handle virtual allocations that could remain unfreed in the block.
You should either free them individually using vmaVirtualFree() or call vmaClearVirtualBlock()
if you are sure this is what you want. If you do neither, an assert is called.
If you keep pointers to some additional metadata associated with your virtual allocations in their `pUserData`,
don't forget to free them.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyVirtualBlock(
VmaVirtualBlock VMA_NULLABLE virtualBlock);
/** \brief Returns true if the #VmaVirtualBlock is empty - contains 0 virtual allocations and has all its space available for new allocations.
*/
VMA_CALL_PRE VkBool32 VMA_CALL_POST vmaIsVirtualBlockEmpty(
VmaVirtualBlock VMA_NOT_NULL virtualBlock);
/** \brief Returns information about a specific virtual allocation within a virtual block, like its size and `pUserData` pointer.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualAllocationInfo(
VmaVirtualBlock VMA_NOT_NULL virtualBlock,
VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation, VmaVirtualAllocationInfo* VMA_NOT_NULL pVirtualAllocInfo);
/** \brief Allocates new virtual allocation inside given #VmaVirtualBlock.
If the allocation fails due to not enough free space available, `VK_ERROR_OUT_OF_DEVICE_MEMORY` is returned
(even though the function never allocates actual GPU memory).
`pAllocation` is then set to `VK_NULL_HANDLE` and `pOffset`, if not null, is set to `UINT64_MAX`.
\param virtualBlock Virtual block
\param pCreateInfo Parameters for the allocation
\param[out] pAllocation Returned handle of the new allocation
\param[out] pOffset Returned offset of the new allocation. Optional, can be null.
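For example, the full lifecycle of a block and one allocation could look like this (a minimal sketch; the sizes are hypothetical and can be bytes or any custom unit):
\code
VmaVirtualBlockCreateInfo blockCreateInfo = {};
blockCreateInfo.size = 1048576; // Total size of the virtual block.
VmaVirtualBlock block;
VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);

VmaVirtualAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.size = 4096;
VmaVirtualAllocation alloc;
VkDeviceSize offset;
res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
// Use the range [offset, offset + 4096) in your own memory or data structure, then:
vmaVirtualFree(block, alloc);
vmaDestroyVirtualBlock(block);
\endcode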
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaVirtualAllocate(
VmaVirtualBlock VMA_NOT_NULL virtualBlock,
const VmaVirtualAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pAllocation,
VkDeviceSize* VMA_NULLABLE pOffset);
/** \brief Frees virtual allocation inside given #VmaVirtualBlock.
It is correct to call this function with `allocation == VK_NULL_HANDLE` - it does nothing.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaVirtualFree(
VmaVirtualBlock VMA_NOT_NULL virtualBlock,
VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE allocation);
/** \brief Frees all virtual allocations inside given #VmaVirtualBlock.
You must either call this function or free each virtual allocation individually with vmaVirtualFree()
before destroying a virtual block. Otherwise, an assert is called.
If you keep a pointer to some additional metadata associated with your virtual allocation in its `pUserData`,
don't forget to free it as well.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaClearVirtualBlock(
VmaVirtualBlock VMA_NOT_NULL virtualBlock);
/** \brief Changes custom pointer associated with given virtual allocation.
*/
VMA_CALL_PRE void VMA_CALL_POST vmaSetVirtualAllocationUserData(
VmaVirtualBlock VMA_NOT_NULL virtualBlock,
VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation,
void* VMA_NULLABLE pUserData);
/** \brief Calculates and returns statistics about virtual allocations and memory usage in given #VmaVirtualBlock.
This function is fast to call. For more detailed statistics, see vmaCalculateVirtualBlockStatistics().
*/
VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualBlockStatistics(
VmaVirtualBlock VMA_NOT_NULL virtualBlock,
VmaStatistics* VMA_NOT_NULL pStats);
/** \brief Calculates and returns detailed statistics about virtual allocations and memory usage in given #VmaVirtualBlock.
This function is slow to call. Use for debugging purposes.
For less detailed statistics, see vmaGetVirtualBlockStatistics().
*/
VMA_CALL_PRE void VMA_CALL_POST vmaCalculateVirtualBlockStatistics(
VmaVirtualBlock VMA_NOT_NULL virtualBlock,
VmaDetailedStatistics* VMA_NOT_NULL pStats);
/** @} */
#if VMA_STATS_STRING_ENABLED
/**
\addtogroup group_stats
@{
*/
/** \brief Builds and returns a null-terminated string in JSON format with information about given #VmaVirtualBlock.
\param virtualBlock Virtual block.
\param[out] ppStatsString Returned string.
\param detailedMap Pass `VK_FALSE` to only obtain statistics as returned by vmaCalculateVirtualBlockStatistics(). Pass `VK_TRUE` to also obtain full list of allocations and free spaces.
Returned string must be freed using vmaFreeVirtualBlockStatsString().
*/
VMA_CALL_PRE void VMA_CALL_POST vmaBuildVirtualBlockStatsString(
VmaVirtualBlock VMA_NOT_NULL virtualBlock,
char* VMA_NULLABLE* VMA_NOT_NULL ppStatsString,
VkBool32 detailedMap);
/// Frees a string returned by vmaBuildVirtualBlockStatsString().
VMA_CALL_PRE void VMA_CALL_POST vmaFreeVirtualBlockStatsString(
VmaVirtualBlock VMA_NOT_NULL virtualBlock,
char* VMA_NULLABLE pStatsString);
/** \brief Builds and returns statistics as a null-terminated string in JSON format.
\param allocator
\param[out] ppStatsString Must be freed using vmaFreeStatsString() function.
\param detailedMap Pass `VK_FALSE` to obtain only basic statistics. Pass `VK_TRUE` to also obtain a full list of allocations and free spaces.
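Typical usage (a minimal sketch):
\code
char* statsString = nullptr;
vmaBuildStatsString(allocator, &statsString, VK_TRUE);
// Save or log the JSON string, then release it:
vmaFreeStatsString(allocator, statsString);
\endcode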
*/
VMA_CALL_PRE void VMA_CALL_POST vmaBuildStatsString(
VmaAllocator VMA_NOT_NULL allocator,
char* VMA_NULLABLE* VMA_NOT_NULL ppStatsString,
VkBool32 detailedMap);
VMA_CALL_PRE void VMA_CALL_POST vmaFreeStatsString(
VmaAllocator VMA_NOT_NULL allocator,
char* VMA_NULLABLE pStatsString);
/** @} */
#endif // VMA_STATS_STRING_ENABLED
#endif // _VMA_FUNCTION_HEADERS
#ifdef __cplusplus
}
#endif
#endif // AMD_VULKAN_MEMORY_ALLOCATOR_H
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
//
// IMPLEMENTATION
//
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
// For Visual Studio IntelliSense.
#if defined(__cplusplus) && defined(__INTELLISENSE__)
#define VMA_IMPLEMENTATION
#endif
#ifdef VMA_IMPLEMENTATION
#undef VMA_IMPLEMENTATION
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <utility>
#include <type_traits>
#ifdef _MSC_VER
#include <intrin.h> // For functions like __popcnt, _BitScanForward etc.
#endif
#if __cplusplus >= 202002L || _MSVC_LANG >= 202002L // C++20
#include <bit> // For std::popcount
#endif
#if VMA_STATS_STRING_ENABLED
#include <cstdio> // For snprintf
#endif
/*******************************************************************************
CONFIGURATION SECTION
Define some of these macros before each #include of this header or change them
here if you need behavior other than the default, depending on your environment.
*/
#ifndef _VMA_CONFIGURATION
/*
Define this macro to 1 to make the library fetch pointers to Vulkan functions
internally, like:
vulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
*/
#if !defined(VMA_STATIC_VULKAN_FUNCTIONS) && !defined(VK_NO_PROTOTYPES)
#define VMA_STATIC_VULKAN_FUNCTIONS 1
#endif
/*
Define this macro to 1 to make the library fetch pointers to Vulkan functions
internally, like:
vulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkGetDeviceProcAddr(device, "vkAllocateMemory");
To use this feature in new versions of VMA you now have to pass
VmaVulkanFunctions::vkGetInstanceProcAddr and vkGetDeviceProcAddr as
VmaAllocatorCreateInfo::pVulkanFunctions. Other members can be null.
*/
#if !defined(VMA_DYNAMIC_VULKAN_FUNCTIONS)
#define VMA_DYNAMIC_VULKAN_FUNCTIONS 1
#endif
#ifndef VMA_USE_STL_SHARED_MUTEX
#if __cplusplus >= 201703L || _MSVC_LANG >= 201703L // C++17
#define VMA_USE_STL_SHARED_MUTEX 1
// Visual Studio defines __cplusplus properly only when passed the additional parameter: /Zc:__cplusplus
// Otherwise it is always 199711L, even though shared_mutex has worked since Visual Studio 2015 Update 2.
#elif defined(_MSC_FULL_VER) && _MSC_FULL_VER >= 190023918 && __cplusplus == 199711L && _MSVC_LANG >= 201703L
#define VMA_USE_STL_SHARED_MUTEX 1
#else
#define VMA_USE_STL_SHARED_MUTEX 0
#endif
#endif
/*
Define this macro to include custom header files without having to edit this file directly, e.g.:
// Inside of "my_vma_configuration_user_includes.h":
#include "my_custom_assert.h" // for MY_CUSTOM_ASSERT
#include "my_custom_min.h" // for my_custom_min
#include <algorithm>
#include <mutex>
// Inside a different file, which includes "vk_mem_alloc.h":
#define VMA_CONFIGURATION_USER_INCLUDES_H "my_vma_configuration_user_includes.h"
#define VMA_ASSERT(expr) MY_CUSTOM_ASSERT(expr)
#define VMA_MIN(v1, v2) (my_custom_min(v1, v2))
#include "vk_mem_alloc.h"
...
The following headers are used in this CONFIGURATION section only, so feel free to
remove them if not needed.
*/
#if !defined(VMA_CONFIGURATION_USER_INCLUDES_H)
#include <cassert> // for assert
#include <algorithm> // for min, max
#include <mutex>
#else
#include VMA_CONFIGURATION_USER_INCLUDES_H
#endif
#ifndef VMA_NULL
// Value used as null pointer. Define it to e.g.: nullptr, NULL, 0, (void*)0.
#define VMA_NULL nullptr
#endif
// Used to silence warnings for implicit fallthrough.
#ifndef VMA_FALLTHROUGH
#if __has_cpp_attribute(clang::fallthrough)
#define VMA_FALLTHROUGH [[clang::fallthrough]];
#elif __cplusplus >= 201703L || _MSVC_LANG >= 201703L // C++17
#define VMA_FALLTHROUGH [[fallthrough]]
#else
#define VMA_FALLTHROUGH
#endif
#endif
// Normal assert to check for programmer's errors, especially in Debug configuration.
#ifndef VMA_ASSERT
#ifdef NDEBUG
#define VMA_ASSERT(expr)
#else
#define VMA_ASSERT(expr) assert(expr)
#endif
#endif
// Assert that will be called very often, like inside data structures e.g. operator[].
// Making it non-empty can make program slow.
#ifndef VMA_HEAVY_ASSERT
#ifdef NDEBUG
#define VMA_HEAVY_ASSERT(expr)
#else
#define VMA_HEAVY_ASSERT(expr) //VMA_ASSERT(expr)
#endif
#endif
// If your compiler is not compatible with C++17 and definition of
// aligned_alloc() function is missing, uncommenting following line may help:
//#include <malloc.h>
#if defined(__ANDROID_API__) && (__ANDROID_API__ < 16)
#include <cstdlib>
void* vma_aligned_alloc(size_t alignment, size_t size)
{
// alignment must be >= sizeof(void*)
if(alignment < sizeof(void*))
{
alignment = sizeof(void*);
}
return memalign(alignment, size);
}
#elif defined(__APPLE__) || defined(__ANDROID__) || (defined(__linux__) && defined(__GLIBCXX__) && !defined(_GLIBCXX_HAVE_ALIGNED_ALLOC))
#include <cstdlib>
#if defined(__APPLE__)
#include <AvailabilityMacros.h>
#endif
void *vma_aligned_alloc(size_t alignment, size_t size)
{
// Unfortunately, aligned_alloc causes VMA to crash due to it returning null pointers (at least under macOS 11.4).
// Therefore this specific exception is disabled for now, until a proper solution is found.
//#if defined(__APPLE__) && (defined(MAC_OS_X_VERSION_10_16) || defined(__IPHONE_14_0))
//#if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_16 || __IPHONE_OS_VERSION_MAX_ALLOWED >= __IPHONE_14_0
// // For C++14, usr/include/malloc/_malloc.h declares aligned_alloc() only
// // with the MacOSX11.0 SDK in Xcode 12 (which is what adds
// // MAC_OS_X_VERSION_10_16), even though the function is marked
// // available for 10.15. That is why the preprocessor checks for 10.16 but
// // the __builtin_available checks for 10.15.
// // People who use C++17 could call aligned_alloc with the 10.15 SDK already.
// if (__builtin_available(macOS 10.15, iOS 13, *))
// return aligned_alloc(alignment, size);
//#endif
//#endif
// alignment must be >= sizeof(void*)
if(alignment < sizeof(void*))
{
alignment = sizeof(void*);
}
void *pointer;
if(posix_memalign(&pointer, alignment, size) == 0)
return pointer;
return VMA_NULL;
}
#elif defined(_WIN32)
void* vma_aligned_alloc(size_t alignment, size_t size)
{
return _aligned_malloc(size, alignment);
}
#elif __cplusplus >= 201703L || _MSVC_LANG >= 201703L // C++17
void* vma_aligned_alloc(size_t alignment, size_t size)
{
return aligned_alloc(alignment, size);
}
#else
void* vma_aligned_alloc(size_t alignment, size_t size)
{
VMA_ASSERT(0 && "Could not implement aligned_alloc automatically. Please enable C++17 or later in your compiler or provide custom implementation of macro VMA_SYSTEM_ALIGNED_MALLOC (and VMA_SYSTEM_ALIGNED_FREE if needed) using the API of your system.");
return VMA_NULL;
}
#endif
#if defined(_WIN32)
static void vma_aligned_free(void* ptr)
{
_aligned_free(ptr);
}
#else
static void vma_aligned_free(void* VMA_NULLABLE ptr)
{
free(ptr);
}
#endif
#ifndef VMA_ALIGN_OF
#define VMA_ALIGN_OF(type) (alignof(type))
#endif
#ifndef VMA_SYSTEM_ALIGNED_MALLOC
#define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) vma_aligned_alloc((alignment), (size))
#endif
#ifndef VMA_SYSTEM_ALIGNED_FREE
// VMA_SYSTEM_FREE is the old name, but might have been defined by the user
#if defined(VMA_SYSTEM_FREE)
#define VMA_SYSTEM_ALIGNED_FREE(ptr) VMA_SYSTEM_FREE(ptr)
#else
#define VMA_SYSTEM_ALIGNED_FREE(ptr) vma_aligned_free(ptr)
#endif
#endif
#ifndef VMA_COUNT_BITS_SET
// Returns number of bits set to 1 in (v)
#define VMA_COUNT_BITS_SET(v) VmaCountBitsSet(v)
#endif
#ifndef VMA_BITSCAN_LSB
// Scans integer for index of first nonzero value from the Least Significant Bit (LSB). If mask is 0 then returns UINT8_MAX
#define VMA_BITSCAN_LSB(mask) VmaBitScanLSB(mask)
#endif
#ifndef VMA_BITSCAN_MSB
// Scans integer for index of first nonzero value from the Most Significant Bit (MSB). If mask is 0 then returns UINT8_MAX
#define VMA_BITSCAN_MSB(mask) VmaBitScanMSB(mask)
#endif
#ifndef VMA_MIN
#define VMA_MIN(v1, v2) ((std::min)((v1), (v2)))
#endif
#ifndef VMA_MAX
#define VMA_MAX(v1, v2) ((std::max)((v1), (v2)))
#endif
#ifndef VMA_SWAP
#define VMA_SWAP(v1, v2) std::swap((v1), (v2))
#endif
#ifndef VMA_SORT
#define VMA_SORT(beg, end, cmp) std::sort(beg, end, cmp)
#endif
#ifndef VMA_DEBUG_LOG_FORMAT
#define VMA_DEBUG_LOG_FORMAT(format, ...)
/*
#define VMA_DEBUG_LOG_FORMAT(format, ...) do { \
printf((format), __VA_ARGS__); \
printf("\n"); \
} while(false)
*/
#endif
#ifndef VMA_DEBUG_LOG
#define VMA_DEBUG_LOG(str) VMA_DEBUG_LOG_FORMAT("%s", (str))
#endif
#ifndef VMA_CLASS_NO_COPY
#define VMA_CLASS_NO_COPY(className) \
private: \
className(const className&) = delete; \
className& operator=(const className&) = delete;
#endif
#ifndef VMA_CLASS_NO_COPY_NO_MOVE
#define VMA_CLASS_NO_COPY_NO_MOVE(className) \
private: \
className(const className&) = delete; \
className(className&&) = delete; \
className& operator=(const className&) = delete; \
className& operator=(className&&) = delete;
#endif
// Define this macro to 1 to enable functions: vmaBuildStatsString, vmaFreeStatsString.
#if VMA_STATS_STRING_ENABLED
static inline void VmaUint32ToStr(char* VMA_NOT_NULL outStr, size_t strLen, uint32_t num)
{
snprintf(outStr, strLen, "%u", static_cast<unsigned int>(num));
}
static inline void VmaUint64ToStr(char* VMA_NOT_NULL outStr, size_t strLen, uint64_t num)
{
snprintf(outStr, strLen, "%llu", static_cast<unsigned long long>(num));
}
static inline void VmaPtrToStr(char* VMA_NOT_NULL outStr, size_t strLen, const void* ptr)
{
snprintf(outStr, strLen, "%p", ptr);
}
#endif
#ifndef VMA_MUTEX
class VmaMutex
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaMutex)
public:
VmaMutex() { }
void Lock() { m_Mutex.lock(); }
void Unlock() { m_Mutex.unlock(); }
bool TryLock() { return m_Mutex.try_lock(); }
private:
std::mutex m_Mutex;
};
#define VMA_MUTEX VmaMutex
#endif
// Read-write mutex, where "read" is shared access, "write" is exclusive access.
#ifndef VMA_RW_MUTEX
#if VMA_USE_STL_SHARED_MUTEX
// Use std::shared_mutex from C++17.
#include <shared_mutex>
class VmaRWMutex
{
public:
void LockRead() { m_Mutex.lock_shared(); }
void UnlockRead() { m_Mutex.unlock_shared(); }
bool TryLockRead() { return m_Mutex.try_lock_shared(); }
void LockWrite() { m_Mutex.lock(); }
void UnlockWrite() { m_Mutex.unlock(); }
bool TryLockWrite() { return m_Mutex.try_lock(); }
private:
std::shared_mutex m_Mutex;
};
#define VMA_RW_MUTEX VmaRWMutex
#elif defined(_WIN32) && defined(WINVER) && WINVER >= 0x0600
// Use SRWLOCK from WinAPI.
// Minimum supported client = Windows Vista, server = Windows Server 2008.
class VmaRWMutex
{
public:
VmaRWMutex() { InitializeSRWLock(&m_Lock); }
void LockRead() { AcquireSRWLockShared(&m_Lock); }
void UnlockRead() { ReleaseSRWLockShared(&m_Lock); }
bool TryLockRead() { return TryAcquireSRWLockShared(&m_Lock) != FALSE; }
void LockWrite() { AcquireSRWLockExclusive(&m_Lock); }
void UnlockWrite() { ReleaseSRWLockExclusive(&m_Lock); }
bool TryLockWrite() { return TryAcquireSRWLockExclusive(&m_Lock) != FALSE; }
private:
SRWLOCK m_Lock;
};
#define VMA_RW_MUTEX VmaRWMutex
#else
// Less efficient fallback: Use normal mutex.
class VmaRWMutex
{
public:
void LockRead() { m_Mutex.Lock(); }
void UnlockRead() { m_Mutex.Unlock(); }
bool TryLockRead() { return m_Mutex.TryLock(); }
void LockWrite() { m_Mutex.Lock(); }
void UnlockWrite() { m_Mutex.Unlock(); }
bool TryLockWrite() { return m_Mutex.TryLock(); }
private:
VMA_MUTEX m_Mutex;
};
#define VMA_RW_MUTEX VmaRWMutex
#endif // #if VMA_USE_STL_SHARED_MUTEX
#endif // #ifndef VMA_RW_MUTEX
/*
If providing your own implementation, you need to implement a subset of std::atomic.
*/
#ifndef VMA_ATOMIC_UINT32
#include <atomic>
#define VMA_ATOMIC_UINT32 std::atomic<uint32_t>
#endif
#ifndef VMA_ATOMIC_UINT64
#include <atomic>
#define VMA_ATOMIC_UINT64 std::atomic<uint64_t>
#endif
#ifndef VMA_DEBUG_ALWAYS_DEDICATED_MEMORY
/**
Every allocation will have its own memory block.
Define to 1 for debugging purposes only.
*/
#define VMA_DEBUG_ALWAYS_DEDICATED_MEMORY (0)
#endif
#ifndef VMA_MIN_ALIGNMENT
/**
Minimum alignment of all allocations, in bytes.
Set to more than 1 for debugging purposes. Must be power of two.
*/
#ifdef VMA_DEBUG_ALIGNMENT // Old name
#define VMA_MIN_ALIGNMENT VMA_DEBUG_ALIGNMENT
#else
#define VMA_MIN_ALIGNMENT (1)
#endif
#endif
#ifndef VMA_DEBUG_MARGIN
/**
Minimum margin after every allocation, in bytes.
Set nonzero for debugging purposes only.
*/
#define VMA_DEBUG_MARGIN (0)
#endif
#ifndef VMA_DEBUG_INITIALIZE_ALLOCATIONS
/**
Define this macro to 1 to automatically fill new allocations and destroyed
allocations with some bit pattern.
*/
#define VMA_DEBUG_INITIALIZE_ALLOCATIONS (0)
#endif
#ifndef VMA_DEBUG_DETECT_CORRUPTION
/**
Define this macro to 1 together with non-zero value of VMA_DEBUG_MARGIN to
enable writing magic value to the margin after every allocation and
validating it, so that memory corruptions (out-of-bounds writes) are detected.
*/
#define VMA_DEBUG_DETECT_CORRUPTION (0)
#endif
#ifndef VMA_DEBUG_GLOBAL_MUTEX
/**
Set this to 1 for debugging purposes only, to enable single mutex protecting all
entry calls to the library. Can be useful for debugging multithreading issues.
*/
#define VMA_DEBUG_GLOBAL_MUTEX (0)
#endif
#ifndef VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY
/**
Minimum value for VkPhysicalDeviceLimits::bufferImageGranularity.
Set to more than 1 for debugging purposes only. Must be power of two.
*/
#define VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY (1)
#endif
#ifndef VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT
/*
Set this to 1 to make VMA never exceed VkPhysicalDeviceLimits::maxMemoryAllocationCount
and return an error instead of leaving it up to the Vulkan implementation what to do in such cases.
*/
#define VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT (0)
#endif
#ifndef VMA_SMALL_HEAP_MAX_SIZE
/// Maximum size of a memory heap in Vulkan to consider it "small".
#define VMA_SMALL_HEAP_MAX_SIZE (1024ull * 1024 * 1024)
#endif
#ifndef VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE
/// Default size of a block allocated as single VkDeviceMemory from a "large" heap.
#define VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE (256ull * 1024 * 1024)
#endif
/*
Mapping hysteresis is logic that kicks in when vmaMapMemory/vmaUnmapMemory is called,
or a persistently mapped allocation is created and destroyed, several times in a row.
It keeps an additional +1 mapping of a device memory block to avoid calling the actual
vkMapMemory/vkUnmapMemory too many times, which may improve performance and help
tools like RenderDoc.
*/
#ifndef VMA_MAPPING_HYSTERESIS_ENABLED
#define VMA_MAPPING_HYSTERESIS_ENABLED 1
#endif
#define VMA_VALIDATE(cond) do { if(!(cond)) { \
VMA_ASSERT(0 && "Validation failed: " #cond); \
return false; \
} } while(false)
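/*
Usage sketch (illustrative, member names hypothetical): VMA_VALIDATE is meant for use
inside Validate() functions that return bool, e.g.:

    bool Validate() const
    {
        VMA_VALIDATE(m_Count <= m_Capacity); // Asserts and returns false if violated.
        return true;
    }
*/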
/*******************************************************************************
END OF CONFIGURATION
*/
#endif // _VMA_CONFIGURATION
static const uint8_t VMA_ALLOCATION_FILL_PATTERN_CREATED = 0xDC;
static const uint8_t VMA_ALLOCATION_FILL_PATTERN_DESTROYED = 0xEF;
// Decimal 2139416166, float NaN, little-endian binary 66 E6 84 7F.
static const uint32_t VMA_CORRUPTION_DETECTION_MAGIC_VALUE = 0x7F84E666;
// Copy of some Vulkan definitions so we don't need to check their existence just to handle a few constants.
static const uint32_t VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY = 0x00000040;
static const uint32_t VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY = 0x00000080;
static const uint32_t VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY = 0x00020000;
static const uint32_t VK_IMAGE_CREATE_DISJOINT_BIT_COPY = 0x00000200;
static const int32_t VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT_COPY = 1000158000;
static const uint32_t VMA_ALLOCATION_INTERNAL_STRATEGY_MIN_OFFSET = 0x10000000u;
static const uint32_t VMA_ALLOCATION_TRY_COUNT = 32;
static const uint32_t VMA_VENDOR_ID_AMD = 4098;
// This one is tricky. Vulkan specification defines this code as available since
// Vulkan 1.0, but doesn't actually define it in Vulkan SDK earlier than 1.2.131.
// See pull request #207.
#define VK_ERROR_UNKNOWN_COPY ((VkResult)-13)
#if VMA_STATS_STRING_ENABLED
// Correspond to values of enum VmaSuballocationType.
static const char* VMA_SUBALLOCATION_TYPE_NAMES[] =
{
"FREE",
"UNKNOWN",
"BUFFER",
"IMAGE_UNKNOWN",
"IMAGE_LINEAR",
"IMAGE_OPTIMAL",
};
#endif
static VkAllocationCallbacks VmaEmptyAllocationCallbacks =
{ VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL };
#ifndef _VMA_ENUM_DECLARATIONS
enum VmaSuballocationType
{
VMA_SUBALLOCATION_TYPE_FREE = 0,
VMA_SUBALLOCATION_TYPE_UNKNOWN = 1,
VMA_SUBALLOCATION_TYPE_BUFFER = 2,
VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN = 3,
VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR = 4,
VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL = 5,
VMA_SUBALLOCATION_TYPE_MAX_ENUM = 0x7FFFFFFF
};
enum VMA_CACHE_OPERATION
{
VMA_CACHE_FLUSH,
VMA_CACHE_INVALIDATE
};
enum class VmaAllocationRequestType
{
Normal,
TLSF,
// Used by "Linear" algorithm.
UpperAddress,
EndOf1st,
EndOf2nd,
};
#endif // _VMA_ENUM_DECLARATIONS
#ifndef _VMA_FORWARD_DECLARATIONS
// Opaque handle used by allocation algorithms to identify a single allocation in any conforming way.
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VmaAllocHandle)
struct VmaMutexLock;
struct VmaMutexLockRead;
struct VmaMutexLockWrite;
template<typename T>
struct AtomicTransactionalIncrement;
template<typename T>
struct VmaStlAllocator;
template<typename T, typename AllocatorT>
class VmaVector;
template<typename T, typename AllocatorT, size_t N>
class VmaSmallVector;
template<typename T>
class VmaPoolAllocator;
template<typename T>
struct VmaListItem;
template<typename T>
class VmaRawList;
template<typename T, typename AllocatorT>
class VmaList;
template<typename ItemTypeTraits>
class VmaIntrusiveLinkedList;
// Unused in this version
#if 0
template<typename T1, typename T2>
struct VmaPair;
template<typename FirstT, typename SecondT>
struct VmaPairFirstLess;
template<typename KeyT, typename ValueT>
class VmaMap;
#endif
#if VMA_STATS_STRING_ENABLED
class VmaStringBuilder;
class VmaJsonWriter;
#endif
class VmaDeviceMemoryBlock;
struct VmaDedicatedAllocationListItemTraits;
class VmaDedicatedAllocationList;
struct VmaSuballocation;
struct VmaSuballocationOffsetLess;
struct VmaSuballocationOffsetGreater;
struct VmaSuballocationItemSizeLess;
typedef VmaList<VmaSuballocation, VmaStlAllocator<VmaSuballocation>> VmaSuballocationList;
struct VmaAllocationRequest;
class VmaBlockMetadata;
class VmaBlockMetadata_Linear;
class VmaBlockMetadata_TLSF;
class VmaBlockVector;
struct VmaPoolListItemTraits;
struct VmaCurrentBudgetData;
class VmaAllocationObjectAllocator;
#endif // _VMA_FORWARD_DECLARATIONS
#ifndef _VMA_FUNCTIONS
/*
Returns the number of bits set to 1 in (v).
On specific platforms and compilers you can use intrinsics like:
Visual Studio:
return __popcnt(v);
GCC, Clang:
return static_cast<uint32_t>(__builtin_popcount(v));
Define macro VMA_COUNT_BITS_SET to provide your optimized implementation,
but you need to check at runtime whether the user's CPU supports these, as some old processors don't.
*/
static inline uint32_t VmaCountBitsSet(uint32_t v)
{
#if __cplusplus >= 202002L || _MSVC_LANG >= 202002L // C++20
return std::popcount(v);
#else
uint32_t c = v - ((v >> 1) & 0x55555555);
c = ((c >> 2) & 0x33333333) + (c & 0x33333333);
c = ((c >> 4) + c) & 0x0F0F0F0F;
c = ((c >> 8) + c) & 0x00FF00FF;
c = ((c >> 16) + c) & 0x0000FFFF;
return c;
#endif
}
static inline uint8_t VmaBitScanLSB(uint64_t mask)
{
#if defined(_MSC_VER) && defined(_WIN64)
unsigned long pos;
if (_BitScanForward64(&pos, mask))
return static_cast<uint8_t>(pos);
return UINT8_MAX;
#elif defined __GNUC__ || defined __clang__
return static_cast<uint8_t>(__builtin_ffsll(mask)) - 1U;
#else
uint8_t pos = 0;
uint64_t bit = 1;
do
{
if (mask & bit)
return pos;
bit <<= 1;
} while (pos++ < 63);
return UINT8_MAX;
#endif
}
static inline uint8_t VmaBitScanLSB(uint32_t mask)
{
#ifdef _MSC_VER
unsigned long pos;
if (_BitScanForward(&pos, mask))
return static_cast<uint8_t>(pos);
return UINT8_MAX;
#elif defined __GNUC__ || defined __clang__
return static_cast<uint8_t>(__builtin_ffs(mask)) - 1U;
#else
uint8_t pos = 0;
uint32_t bit = 1;
do
{
if (mask & bit)
return pos;
bit <<= 1;
} while (pos++ < 31);
return UINT8_MAX;
#endif
}
static inline uint8_t VmaBitScanMSB(uint64_t mask)
{
#if defined(_MSC_VER) && defined(_WIN64)
unsigned long pos;
if (_BitScanReverse64(&pos, mask))
return static_cast<uint8_t>(pos);
#elif defined __GNUC__ || defined __clang__
if (mask)
return 63 - static_cast<uint8_t>(__builtin_clzll(mask));
#else
uint8_t pos = 63;
uint64_t bit = 1ULL << 63;
do
{
if (mask & bit)
return pos;
bit >>= 1;
} while (pos-- > 0);
#endif
return UINT8_MAX;
}
static inline uint8_t VmaBitScanMSB(uint32_t mask)
{
#ifdef _MSC_VER
unsigned long pos;
if (_BitScanReverse(&pos, mask))
return static_cast<uint8_t>(pos);
#elif defined __GNUC__ || defined __clang__
if (mask)
return 31 - static_cast<uint8_t>(__builtin_clz(mask));
#else
uint8_t pos = 31;
uint32_t bit = 1UL << 31;
do
{
if (mask & bit)
return pos;
bit >>= 1;
} while (pos-- > 0);
#endif
return UINT8_MAX;
}
/*
Returns true if the given number is a power of two.
T must be an unsigned integer type, or a signed integer with a nonnegative value.
For 0 it returns true.
*/
template <typename T>
inline bool VmaIsPow2(T x)
{
return (x & (x - 1)) == 0;
}
// Aligns the given value up to the nearest multiple of alignment. For example: VmaAlignUp(11, 8) = 16.
// Use types like uint32_t, uint64_t as T.
template <typename T>
static inline T VmaAlignUp(T val, T alignment)
{
VMA_HEAVY_ASSERT(VmaIsPow2(alignment));
return (val + alignment - 1) & ~(alignment - 1);
}
// Aligns the given value down to the nearest multiple of alignment. For example: VmaAlignDown(11, 8) = 8.
// Use types like uint32_t, uint64_t as T.
template <typename T>
static inline T VmaAlignDown(T val, T alignment)
{
VMA_HEAVY_ASSERT(VmaIsPow2(alignment));
return val & ~(alignment - 1);
}
// Division with mathematical rounding to the nearest integer.
template <typename T>
static inline T VmaRoundDiv(T x, T y)
{
return (x + (y / (T)2)) / y;
}
// Divide by 'y' and round up to nearest integer.
template <typename T>
static inline T VmaDivideRoundingUp(T x, T y)
{
return (x + y - (T)1) / y;
}
// Returns the smallest power of 2 greater than or equal to v.
static inline uint32_t VmaNextPow2(uint32_t v)
{
v--;
v |= v >> 1;
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
v++;
return v;
}
static inline uint64_t VmaNextPow2(uint64_t v)
{
v--;
v |= v >> 1;
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
v |= v >> 32;
v++;
return v;
}
// Returns the largest power of 2 less than or equal to v.
static inline uint32_t VmaPrevPow2(uint32_t v)
{
v |= v >> 1;
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
v = v ^ (v >> 1);
return v;
}
static inline uint64_t VmaPrevPow2(uint64_t v)
{
v |= v >> 1;
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
v |= v >> 32;
v = v ^ (v >> 1);
return v;
}
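/*
Worked example (illustrative) of the bit-smearing trick used above: the shifts propagate
the highest set bit into every lower position.
VmaNextPow2(36): 36-1 = 35 (0b100011) smears to 63 (0b111111), then +1 gives 64.
VmaPrevPow2(36): 36 (0b100100) smears to 63, then v ^ (v >> 1) = 0b111111 ^ 0b011111 = 32.
*/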
static inline bool VmaStrIsEmpty(const char* pStr)
{
return pStr == VMA_NULL || *pStr == '\0';
}
/*
Returns true if two memory blocks occupy overlapping pages.
ResourceA must be at a lower memory offset than ResourceB.
Algorithm is based on "Vulkan 1.0.39 - A Specification (with all registered Vulkan extensions)"
chapter 11.6 "Resource Memory Association", paragraph "Buffer-Image Granularity".
*/
static inline bool VmaBlocksOnSamePage(
VkDeviceSize resourceAOffset,
VkDeviceSize resourceASize,
VkDeviceSize resourceBOffset,
VkDeviceSize pageSize)
{
VMA_ASSERT(resourceAOffset + resourceASize <= resourceBOffset && resourceASize > 0 && pageSize > 0);
VkDeviceSize resourceAEnd = resourceAOffset + resourceASize - 1;
VkDeviceSize resourceAEndPage = resourceAEnd & ~(pageSize - 1);
VkDeviceSize resourceBStart = resourceBOffset;
VkDeviceSize resourceBStartPage = resourceBStart & ~(pageSize - 1);
return resourceAEndPage == resourceBStartPage;
}
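/*
Worked example (illustrative): with pageSize = 4096, resource A at offset 0 with size 4000
ends at byte 3999 (page 0), while resource B at offset 4096 starts on page 1, so the
function returns false. If B started at offset 4000 instead, both would share page 0 and
the function would return true - the two resources would then have to respect
bufferImageGranularity if their suballocation types conflict (see
VmaIsBufferImageGranularityConflict below).
*/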
/*
Returns true if the given suballocation types could conflict and must respect
VkPhysicalDeviceLimits::bufferImageGranularity. They conflict if one is a buffer
or linear image and the other is an optimal image. If a type is unknown, behave
conservatively.
*/
static inline bool VmaIsBufferImageGranularityConflict(
VmaSuballocationType suballocType1,
VmaSuballocationType suballocType2)
{
if (suballocType1 > suballocType2)
{
VMA_SWAP(suballocType1, suballocType2);
}
switch (suballocType1)
{
case VMA_SUBALLOCATION_TYPE_FREE:
return false;
case VMA_SUBALLOCATION_TYPE_UNKNOWN:
return true;
case VMA_SUBALLOCATION_TYPE_BUFFER:
return
suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
case VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN:
return
suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR ||
suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
case VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR:
return
suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
case VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL:
return false;
default:
VMA_ASSERT(0);
return true;
}
}
static void VmaWriteMagicValue(void* pData, VkDeviceSize offset)
{
#if VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_DETECT_CORRUPTION
uint32_t* pDst = (uint32_t*)((char*)pData + offset);
const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
for (size_t i = 0; i < numberCount; ++i, ++pDst)
{
*pDst = VMA_CORRUPTION_DETECTION_MAGIC_VALUE;
}
#else
// no-op
#endif
}
static bool VmaValidateMagicValue(const void* pData, VkDeviceSize offset)
{
#if VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_DETECT_CORRUPTION
const uint32_t* pSrc = (const uint32_t*)((const char*)pData + offset);
const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
for (size_t i = 0; i < numberCount; ++i, ++pSrc)
{
if (*pSrc != VMA_CORRUPTION_DETECTION_MAGIC_VALUE)
{
return false;
}
}
#endif
return true;
}
/*
Fills the structure with parameters of an example buffer to be used for transfers
during GPU memory defragmentation.
*/
static void VmaFillGpuDefragmentationBufferCreateInfo(VkBufferCreateInfo& outBufCreateInfo)
{
memset(&outBufCreateInfo, 0, sizeof(outBufCreateInfo));
outBufCreateInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
outBufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
outBufCreateInfo.size = (VkDeviceSize)VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE; // Example size.
}
/*
Performs binary search and returns an iterator to the first element that is greater than
or equal to (key), according to comparison (cmp).
Cmp should return true if its first argument is less than its second argument.
The returned value is the found element, if present in the collection, or the place where
a new element with value (key) should be inserted.
*/
template <typename CmpLess, typename IterT, typename KeyT>
static IterT VmaBinaryFindFirstNotLess(IterT beg, IterT end, const KeyT& key, const CmpLess& cmp)
{
size_t down = 0, up = size_t(end - beg);
while (down < up)
{
const size_t mid = down + (up - down) / 2; // Overflow-safe midpoint calculation
if (cmp(*(beg + mid), key))
{
down = mid + 1;
}
else
{
up = mid;
}
}
return beg + down;
}
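/*
Usage sketch (illustrative, names hypothetical):

    struct OffsetLess { bool operator()(uint64_t a, uint64_t b) const { return a < b; } };
    uint64_t offsets[] = { 0, 64, 256, 1024 };
    // Returns a pointer to 256 - the first element not less than 100, which is also
    // the position where 100 would be inserted to keep the array sorted.
    const uint64_t* it = VmaBinaryFindFirstNotLess(offsets, offsets + 4, (uint64_t)100, OffsetLess());
*/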
template<typename CmpLess, typename IterT, typename KeyT>
IterT VmaBinaryFindSorted(const IterT& beg, const IterT& end, const KeyT& value, const CmpLess& cmp)
{
IterT it = VmaBinaryFindFirstNotLess<CmpLess, IterT, KeyT>(
beg, end, value, cmp);
if (it == end ||
(!cmp(*it, value) && !cmp(value, *it)))
{
return it;
}
return end;
}
/*
Returns true if all pointers in the array are non-null and unique.
Warning! O(n^2) complexity. Use only inside VMA_HEAVY_ASSERT.
T must be a pointer type, e.g. VmaAllocation, VmaPool.
*/
template<typename T>
static bool VmaValidatePointerArray(uint32_t count, const T* arr)
{
for (uint32_t i = 0; i < count; ++i)
{
const T iPtr = arr[i];
if (iPtr == VMA_NULL)
{
return false;
}
for (uint32_t j = i + 1; j < count; ++j)
{
if (iPtr == arr[j])
{
return false;
}
}
}
return true;
}
template<typename MainT, typename NewT>
static inline void VmaPnextChainPushFront(MainT* mainStruct, NewT* newStruct)
{
newStruct->pNext = mainStruct->pNext;
mainStruct->pNext = newStruct;
}
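/*
Usage sketch (illustrative): pushes an extension struct to the front of a Vulkan pNext chain.

    VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    VkMemoryDedicatedAllocateInfo dedicatedInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO };
    VmaPnextChainPushFront(&allocInfo, &dedicatedInfo);
    // Now allocInfo.pNext == &dedicatedInfo, and any previously chained structs follow after it.
*/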
// This is the main algorithm that guides the selection of the memory type best suited for an allocation -
// it converts usage to required/preferred/not-preferred flags.
static bool FindMemoryPreferences(
bool isIntegratedGPU,
const VmaAllocationCreateInfo& allocCreateInfo,
VkFlags bufImgUsage, // VkBufferCreateInfo::usage or VkImageCreateInfo::usage. UINT32_MAX if unknown.
VkMemoryPropertyFlags& outRequiredFlags,
VkMemoryPropertyFlags& outPreferredFlags,
VkMemoryPropertyFlags& outNotPreferredFlags)
{
outRequiredFlags = allocCreateInfo.requiredFlags;
outPreferredFlags = allocCreateInfo.preferredFlags;
outNotPreferredFlags = 0;
switch(allocCreateInfo.usage)
{
case VMA_MEMORY_USAGE_UNKNOWN:
break;
case VMA_MEMORY_USAGE_GPU_ONLY:
if(!isIntegratedGPU || (outPreferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
{
outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
}
break;
case VMA_MEMORY_USAGE_CPU_ONLY:
outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
break;
case VMA_MEMORY_USAGE_CPU_TO_GPU:
outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
if(!isIntegratedGPU || (outPreferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
{
outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
}
break;
case VMA_MEMORY_USAGE_GPU_TO_CPU:
outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
outPreferredFlags |= VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
break;
case VMA_MEMORY_USAGE_CPU_COPY:
outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
break;
case VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED:
outRequiredFlags |= VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT;
break;
case VMA_MEMORY_USAGE_AUTO:
case VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE:
case VMA_MEMORY_USAGE_AUTO_PREFER_HOST:
{
if(bufImgUsage == UINT32_MAX)
{
VMA_ASSERT(0 && "VMA_MEMORY_USAGE_AUTO* values can only be used with functions like vmaCreateBuffer, vmaCreateImage so that the details of the created resource are known.");
return false;
}
// This relies on values of VK_IMAGE_USAGE_TRANSFER* being the same as VK_BUFFER_USAGE_TRANSFER*.
const bool deviceAccess = (bufImgUsage & ~(VK_BUFFER_USAGE_TRANSFER_DST_BIT | VK_BUFFER_USAGE_TRANSFER_SRC_BIT)) != 0;
const bool hostAccessSequentialWrite = (allocCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT) != 0;
const bool hostAccessRandom = (allocCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT) != 0;
const bool hostAccessAllowTransferInstead = (allocCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT) != 0;
const bool preferDevice = allocCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE;
const bool preferHost = allocCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_HOST;
// CPU random access - e.g. a buffer written to or transferred from GPU to read back on CPU.
if(hostAccessRandom)
{
if(!isIntegratedGPU && deviceAccess && hostAccessAllowTransferInstead && !preferHost)
{
// Nice if it will end up in HOST_VISIBLE, but more importantly prefer DEVICE_LOCAL.
// Omitting HOST_VISIBLE here is intentional.
// In case there is DEVICE_LOCAL | HOST_VISIBLE | HOST_CACHED, it will pick that one.
// Otherwise, this gives the same weight to DEVICE_LOCAL as to HOST_VISIBLE | HOST_CACHED and selects the former if it occurs first on the list.
outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
}
else
{
// Always CPU memory, cached.
outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
}
}
// CPU sequential write - may be CPU or host-visible GPU memory, uncached and write-combined.
else if(hostAccessSequentialWrite)
{
// Want uncached and write-combined.
outNotPreferredFlags |= VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
if(!isIntegratedGPU && deviceAccess && hostAccessAllowTransferInstead && !preferHost)
{
outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
}
else
{
outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
// Direct GPU access, CPU sequential write (e.g. a dynamic uniform buffer updated every frame)
if(deviceAccess)
{
// Could go to CPU memory or GPU BAR/unified. Up to the user to decide. If no preference, choose GPU memory.
if(preferHost)
outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
else
outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
}
// GPU no direct access, CPU sequential write (e.g. an upload buffer to be transferred to the GPU)
else
{
// Could go to CPU memory or GPU BAR/unified. Up to the user to decide. If no preference, choose CPU memory.
if(preferDevice)
outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
else
outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
}
}
}
// No CPU access
else
{
// if(deviceAccess)
//
// GPU access, no CPU access (e.g. a color attachment image) - prefer GPU memory,
// unless there is a clear preference from the user not to do so.
//
// else:
//
// No direct GPU access, no CPU access, just transfers.
// It may be staging copy intended for e.g. preserving image for next frame (then better GPU memory) or
// a "swap file" copy to free some GPU memory (then better CPU memory).
// Up to the user to decide. If no preference, assume the former and choose GPU memory.
if(preferHost)
outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
else
outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
}
break;
}
default:
VMA_ASSERT(0);
}
// Avoid DEVICE_COHERENT unless explicitly requested.
if(((allocCreateInfo.requiredFlags | allocCreateInfo.preferredFlags) &
(VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY | VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY)) == 0)
{
outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY;
}
return true;
}
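/*
Worked example (illustrative): a uniform buffer created with VMA_MEMORY_USAGE_AUTO and
VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT on a discrete GPU. Its usage flags
contain more than the TRANSFER bits, so deviceAccess is true, and the function produces:
    outRequiredFlags     = HOST_VISIBLE
    outPreferredFlags    = DEVICE_LOCAL (BAR/ReBAR memory, if the implementation exposes it)
    outNotPreferredFlags = HOST_CACHED | DEVICE_UNCACHED_AMD
*/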
////////////////////////////////////////////////////////////////////////////////
// Memory allocation
static void* VmaMalloc(const VkAllocationCallbacks* pAllocationCallbacks, size_t size, size_t alignment)
{
void* result = VMA_NULL;
if ((pAllocationCallbacks != VMA_NULL) &&
(pAllocationCallbacks->pfnAllocation != VMA_NULL))
{
result = (*pAllocationCallbacks->pfnAllocation)(
pAllocationCallbacks->pUserData,
size,
alignment,
VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
}
else
{
result = VMA_SYSTEM_ALIGNED_MALLOC(size, alignment);
}
VMA_ASSERT(result != VMA_NULL && "CPU memory allocation failed.");
return result;
}
static void VmaFree(const VkAllocationCallbacks* pAllocationCallbacks, void* ptr)
{
if ((pAllocationCallbacks != VMA_NULL) &&
(pAllocationCallbacks->pfnFree != VMA_NULL))
{
(*pAllocationCallbacks->pfnFree)(pAllocationCallbacks->pUserData, ptr);
}
else
{
VMA_SYSTEM_ALIGNED_FREE(ptr);
}
}
template<typename T>
static T* VmaAllocate(const VkAllocationCallbacks* pAllocationCallbacks)
{
return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T), VMA_ALIGN_OF(T));
}
template<typename T>
static T* VmaAllocateArray(const VkAllocationCallbacks* pAllocationCallbacks, size_t count)
{
return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T) * count, VMA_ALIGN_OF(T));
}
#define vma_new(allocator, type) new(VmaAllocate<type>(allocator))(type)
#define vma_new_array(allocator, type, count) new(VmaAllocateArray<type>((allocator), (count)))(type)
template<typename T>
static void vma_delete(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr)
{
ptr->~T();
VmaFree(pAllocationCallbacks, ptr);
}
template<typename T>
static void vma_delete_array(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr, size_t count)
{
if (ptr != VMA_NULL)
{
for (size_t i = count; i--; )
{
ptr[i].~T();
}
VmaFree(pAllocationCallbacks, ptr);
}
}
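/*
Usage sketch (illustrative, names hypothetical):

    struct MyObject { int x; explicit MyObject(int x) : x(x) {} };
    // Allocates through the user callbacks (or the aligned-malloc fallback) and
    // placement-constructs the object:
    MyObject* obj = vma_new(pAllocationCallbacks, MyObject)(7);
    vma_delete(pAllocationCallbacks, obj); // Calls ~MyObject() and frees the memory.
    // vma_new_array is used e.g. for character buffers - see VmaCreateStringCopy below.
*/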
static char* VmaCreateStringCopy(const VkAllocationCallbacks* allocs, const char* srcStr)
{
if (srcStr != VMA_NULL)
{
const size_t len = strlen(srcStr);
char* const result = vma_new_array(allocs, char, len + 1);
memcpy(result, srcStr, len + 1);
return result;
}
return VMA_NULL;
}
#if VMA_STATS_STRING_ENABLED
static char* VmaCreateStringCopy(const VkAllocationCallbacks* allocs, const char* srcStr, size_t strLen)
{
if (srcStr != VMA_NULL)
{
char* const result = vma_new_array(allocs, char, strLen + 1);
memcpy(result, srcStr, strLen);
result[strLen] = '\0';
return result;
}
return VMA_NULL;
}
#endif // VMA_STATS_STRING_ENABLED
static void VmaFreeString(const VkAllocationCallbacks* allocs, char* str)
{
if (str != VMA_NULL)
{
const size_t len = strlen(str);
vma_delete_array(allocs, str, len + 1);
}
}
template<typename CmpLess, typename VectorT>
size_t VmaVectorInsertSorted(VectorT& vector, const typename VectorT::value_type& value)
{
const size_t indexToInsert = VmaBinaryFindFirstNotLess(
vector.data(),
vector.data() + vector.size(),
value,
CmpLess()) - vector.data();
VmaVectorInsert(vector, indexToInsert, value);
return indexToInsert;
}
template<typename CmpLess, typename VectorT>
bool VmaVectorRemoveSorted(VectorT& vector, const typename VectorT::value_type& value)
{
CmpLess comparator;
typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
vector.begin(),
vector.end(),
value,
comparator);
if ((it != vector.end()) && !comparator(*it, value) && !comparator(value, *it))
{
size_t indexToRemove = it - vector.begin();
VmaVectorRemove(vector, indexToRemove);
return true;
}
return false;
}
#endif // _VMA_FUNCTIONS
#ifndef _VMA_STATISTICS_FUNCTIONS
static void VmaClearStatistics(VmaStatistics& outStats)
{
outStats.blockCount = 0;
outStats.allocationCount = 0;
outStats.blockBytes = 0;
outStats.allocationBytes = 0;
}
static void VmaAddStatistics(VmaStatistics& inoutStats, const VmaStatistics& src)
{
inoutStats.blockCount += src.blockCount;
inoutStats.allocationCount += src.allocationCount;
inoutStats.blockBytes += src.blockBytes;
inoutStats.allocationBytes += src.allocationBytes;
}
static void VmaClearDetailedStatistics(VmaDetailedStatistics& outStats)
{
VmaClearStatistics(outStats.statistics);
outStats.unusedRangeCount = 0;
outStats.allocationSizeMin = VK_WHOLE_SIZE;
outStats.allocationSizeMax = 0;
outStats.unusedRangeSizeMin = VK_WHOLE_SIZE;
outStats.unusedRangeSizeMax = 0;
}
static void VmaAddDetailedStatisticsAllocation(VmaDetailedStatistics& inoutStats, VkDeviceSize size)
{
inoutStats.statistics.allocationCount++;
inoutStats.statistics.allocationBytes += size;
inoutStats.allocationSizeMin = VMA_MIN(inoutStats.allocationSizeMin, size);
inoutStats.allocationSizeMax = VMA_MAX(inoutStats.allocationSizeMax, size);
}
static void VmaAddDetailedStatisticsUnusedRange(VmaDetailedStatistics& inoutStats, VkDeviceSize size)
{
inoutStats.unusedRangeCount++;
inoutStats.unusedRangeSizeMin = VMA_MIN(inoutStats.unusedRangeSizeMin, size);
inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, size);
}
static void VmaAddDetailedStatistics(VmaDetailedStatistics& inoutStats, const VmaDetailedStatistics& src)
{
VmaAddStatistics(inoutStats.statistics, src.statistics);
inoutStats.unusedRangeCount += src.unusedRangeCount;
inoutStats.allocationSizeMin = VMA_MIN(inoutStats.allocationSizeMin, src.allocationSizeMin);
inoutStats.allocationSizeMax = VMA_MAX(inoutStats.allocationSizeMax, src.allocationSizeMax);
inoutStats.unusedRangeSizeMin = VMA_MIN(inoutStats.unusedRangeSizeMin, src.unusedRangeSizeMin);
inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, src.unusedRangeSizeMax);
}
#endif // _VMA_STATISTICS_FUNCTIONS
#ifndef _VMA_MUTEX_LOCK
// Helper RAII class to lock a mutex in constructor and unlock it in destructor (at the end of scope).
struct VmaMutexLock
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaMutexLock)
public:
VmaMutexLock(VMA_MUTEX& mutex, bool useMutex = true) :
m_pMutex(useMutex ? &mutex : VMA_NULL)
{
if (m_pMutex) { m_pMutex->Lock(); }
}
~VmaMutexLock() { if (m_pMutex) { m_pMutex->Unlock(); } }
private:
VMA_MUTEX* m_pMutex;
};
// Helper RAII class to lock a RW mutex in constructor and unlock it in destructor (at the end of scope), for reading.
struct VmaMutexLockRead
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaMutexLockRead)
public:
VmaMutexLockRead(VMA_RW_MUTEX& mutex, bool useMutex) :
m_pMutex(useMutex ? &mutex : VMA_NULL)
{
if (m_pMutex) { m_pMutex->LockRead(); }
}
~VmaMutexLockRead() { if (m_pMutex) { m_pMutex->UnlockRead(); } }
private:
VMA_RW_MUTEX* m_pMutex;
};
// Helper RAII class to lock a RW mutex in constructor and unlock it in destructor (at the end of scope), for writing.
struct VmaMutexLockWrite
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaMutexLockWrite)
public:
VmaMutexLockWrite(VMA_RW_MUTEX& mutex, bool useMutex)
: m_pMutex(useMutex ? &mutex : VMA_NULL)
{
if (m_pMutex) { m_pMutex->LockWrite(); }
}
~VmaMutexLockWrite() { if (m_pMutex) { m_pMutex->UnlockWrite(); } }
private:
VMA_RW_MUTEX* m_pMutex;
};
#if VMA_DEBUG_GLOBAL_MUTEX
static VMA_MUTEX gDebugGlobalMutex;
#define VMA_DEBUG_GLOBAL_MUTEX_LOCK VmaMutexLock debugGlobalMutexLock(gDebugGlobalMutex, true);
#else
#define VMA_DEBUG_GLOBAL_MUTEX_LOCK
#endif
#endif // _VMA_MUTEX_LOCK
#ifndef _VMA_ATOMIC_TRANSACTIONAL_INCREMENT
// An object that increments the given atomic but decrements it back in the destructor, unless Commit() is called.
template<typename AtomicT>
struct AtomicTransactionalIncrement
{
public:
using T = decltype(AtomicT().load());
~AtomicTransactionalIncrement()
{
if(m_Atomic)
--(*m_Atomic);
}
void Commit() { m_Atomic = nullptr; }
T Increment(AtomicT* atomic)
{
m_Atomic = atomic;
return m_Atomic->fetch_add(1);
}
private:
AtomicT* m_Atomic = nullptr;
};
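/*
Usage sketch (illustrative, names hypothetical): enforces an allocation-count limit,
rolling the counter back automatically if any later step fails.

    AtomicTransactionalIncrement<VMA_ATOMIC_UINT32> increment;
    if (increment.Increment(&m_AllocationCount) >= m_MaxAllocationCount)
        return VK_ERROR_TOO_MANY_OBJECTS; // Destructor decrements the counter back.
    VkResult res = AllocateActualMemory(...);
    if (res != VK_SUCCESS)
        return res; // Also rolled back here.
    increment.Commit(); // Success - keep the incremented value.
*/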
#endif // _VMA_ATOMIC_TRANSACTIONAL_INCREMENT
#ifndef _VMA_STL_ALLOCATOR
// STL-compatible allocator.
template<typename T>
struct VmaStlAllocator
{
const VkAllocationCallbacks* const m_pCallbacks;
typedef T value_type;
VmaStlAllocator(const VkAllocationCallbacks* pCallbacks) : m_pCallbacks(pCallbacks) {}
template<typename U>
VmaStlAllocator(const VmaStlAllocator<U>& src) : m_pCallbacks(src.m_pCallbacks) {}
VmaStlAllocator(const VmaStlAllocator&) = default;
VmaStlAllocator& operator=(const VmaStlAllocator&) = delete;
T* allocate(size_t n) { return VmaAllocateArray<T>(m_pCallbacks, n); }
void deallocate(T* p, size_t n) { VmaFree(m_pCallbacks, p); }
template<typename U>
bool operator==(const VmaStlAllocator<U>& rhs) const
{
return m_pCallbacks == rhs.m_pCallbacks;
}
template<typename U>
bool operator!=(const VmaStlAllocator<U>& rhs) const
{
return m_pCallbacks != rhs.m_pCallbacks;
}
};
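/*
Usage sketch (illustrative): routes a standard container's storage through
VkAllocationCallbacks. The same allocator type also parameterizes VmaVector below.

    std::vector<uint32_t, VmaStlAllocator<uint32_t>> v(VmaStlAllocator<uint32_t>(pCallbacks));
    v.push_back(42); // Heap memory now comes from pCallbacks->pfnAllocation.
*/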
#endif // _VMA_STL_ALLOCATOR
#ifndef _VMA_VECTOR
/* Class with interface compatible with subset of std::vector.
T must be POD because constructors and destructors are not called and memcpy is
used for these objects. */
template<typename T, typename AllocatorT>
class VmaVector
{
public:
typedef T value_type;
typedef T* iterator;
typedef const T* const_iterator;
VmaVector(const AllocatorT& allocator);
VmaVector(size_t count, const AllocatorT& allocator);
// This version of the constructor is here for compatibility with pre-C++14 std::vector.
// value is unused.
VmaVector(size_t count, const T& value, const AllocatorT& allocator) : VmaVector(count, allocator) {}
VmaVector(const VmaVector<T, AllocatorT>& src);
VmaVector& operator=(const VmaVector& rhs);
~VmaVector() { VmaFree(m_Allocator.m_pCallbacks, m_pArray); }
bool empty() const { return m_Count == 0; }
size_t size() const { return m_Count; }
T* data() { return m_pArray; }
T& front() { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[0]; }
T& back() { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[m_Count - 1]; }
const T* data() const { return m_pArray; }
const T& front() const { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[0]; }
const T& back() const { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[m_Count - 1]; }
iterator begin() { return m_pArray; }
iterator end() { return m_pArray + m_Count; }
const_iterator cbegin() const { return m_pArray; }
const_iterator cend() const { return m_pArray + m_Count; }
const_iterator begin() const { return cbegin(); }
const_iterator end() const { return cend(); }
void pop_front() { VMA_HEAVY_ASSERT(m_Count > 0); remove(0); }
void pop_back() { VMA_HEAVY_ASSERT(m_Count > 0); resize(size() - 1); }
void push_front(const T& src) { insert(0, src); }
void push_back(const T& src);
void reserve(size_t newCapacity, bool freeMemory = false);
void resize(size_t newCount);
void clear() { resize(0); }
void shrink_to_fit();
void insert(size_t index, const T& src);
void remove(size_t index);
T& operator[](size_t index) { VMA_HEAVY_ASSERT(index < m_Count); return m_pArray[index]; }
const T& operator[](size_t index) const { VMA_HEAVY_ASSERT(index < m_Count); return m_pArray[index]; }
private:
AllocatorT m_Allocator;
T* m_pArray;
size_t m_Count;
size_t m_Capacity;
};
#ifndef _VMA_VECTOR_FUNCTIONS
template<typename T, typename AllocatorT>
VmaVector<T, AllocatorT>::VmaVector(const AllocatorT& allocator)
: m_Allocator(allocator),
m_pArray(VMA_NULL),
m_Count(0),
m_Capacity(0) {}
template<typename T, typename AllocatorT>
VmaVector<T, AllocatorT>::VmaVector(size_t count, const AllocatorT& allocator)
: m_Allocator(allocator),
m_pArray(count ? (T*)VmaAllocateArray<T>(allocator.m_pCallbacks, count) : VMA_NULL),
m_Count(count),
m_Capacity(count) {}
template<typename T, typename AllocatorT>
VmaVector<T, AllocatorT>::VmaVector(const VmaVector& src)
: m_Allocator(src.m_Allocator),
m_pArray(src.m_Count ? (T*)VmaAllocateArray<T>(src.m_Allocator.m_pCallbacks, src.m_Count) : VMA_NULL),
m_Count(src.m_Count),
m_Capacity(src.m_Count)
{
if (m_Count != 0)
{
memcpy(m_pArray, src.m_pArray, m_Count * sizeof(T));
}
}
template<typename T, typename AllocatorT>
VmaVector<T, AllocatorT>& VmaVector<T, AllocatorT>::operator=(const VmaVector& rhs)
{
if (&rhs != this)
{
resize(rhs.m_Count);
if (m_Count != 0)
{
memcpy(m_pArray, rhs.m_pArray, m_Count * sizeof(T));
}
}
return *this;
}
template<typename T, typename AllocatorT>
void VmaVector<T, AllocatorT>::push_back(const T& src)
{
const size_t newIndex = size();
resize(newIndex + 1);
m_pArray[newIndex] = src;
}
template<typename T, typename AllocatorT>
void VmaVector<T, AllocatorT>::reserve(size_t newCapacity, bool freeMemory)
{
newCapacity = VMA_MAX(newCapacity, m_Count);
if ((newCapacity < m_Capacity) && !freeMemory)
{
newCapacity = m_Capacity;
}
if (newCapacity != m_Capacity)
{
T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
if (m_Count != 0)
{
memcpy(newArray, m_pArray, m_Count * sizeof(T));
}
VmaFree(m_Allocator.m_pCallbacks, m_pArray);
m_Capacity = newCapacity;
m_pArray = newArray;
}
}
template<typename T, typename AllocatorT>
void VmaVector<T, AllocatorT>::resize(size_t newCount)
{
size_t newCapacity = m_Capacity;
if (newCount > m_Capacity)
{
newCapacity = VMA_MAX(newCount, VMA_MAX(m_Capacity * 3 / 2, (size_t)8));
}
if (newCapacity != m_Capacity)
{
T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
const size_t elementsToCopy = VMA_MIN(m_Count, newCount);
if (elementsToCopy != 0)
{
memcpy(newArray, m_pArray, elementsToCopy * sizeof(T));
}
VmaFree(m_Allocator.m_pCallbacks, m_pArray);
m_Capacity = newCapacity;
m_pArray = newArray;
}
m_Count = newCount;
}
template<typename T, typename AllocatorT>
void VmaVector<T, AllocatorT>::shrink_to_fit()
{
if (m_Capacity > m_Count)
{
T* newArray = VMA_NULL;
if (m_Count > 0)
{
newArray = VmaAllocateArray<T>(m_Allocator.m_pCallbacks, m_Count);
memcpy(newArray, m_pArray, m_Count * sizeof(T));
}
VmaFree(m_Allocator.m_pCallbacks, m_pArray);
m_Capacity = m_Count;
m_pArray = newArray;
}
}
template<typename T, typename AllocatorT>
void VmaVector<T, AllocatorT>::insert(size_t index, const T& src)
{
VMA_HEAVY_ASSERT(index <= m_Count);
const size_t oldCount = size();
resize(oldCount + 1);
if (index < oldCount)
{
memmove(m_pArray + (index + 1), m_pArray + index, (oldCount - index) * sizeof(T));
}
m_pArray[index] = src;
}
template<typename T, typename AllocatorT>
void VmaVector<T, AllocatorT>::remove(size_t index)
{
VMA_HEAVY_ASSERT(index < m_Count);
const size_t oldCount = size();
if (index < oldCount - 1)
{
memmove(m_pArray + index, m_pArray + (index + 1), (oldCount - index - 1) * sizeof(T));
}
resize(oldCount - 1);
}
#endif // _VMA_VECTOR_FUNCTIONS
template<typename T, typename allocatorT>
static void VmaVectorInsert(VmaVector<T, allocatorT>& vec, size_t index, const T& item)
{
vec.insert(index, item);
}
template<typename T, typename allocatorT>
static void VmaVectorRemove(VmaVector<T, allocatorT>& vec, size_t index)
{
vec.remove(index);
}
#endif // _VMA_VECTOR
#ifndef _VMA_SMALL_VECTOR
/*
This is a vector (a variable-sized array), optimized for the case when the array is small.
It contains some number of elements in-place, which allows it to avoid heap allocation
when the actual number of elements is below that threshold. This allows normal "small"
cases to be fast without losing generality for large inputs.
*/
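/*
Usage sketch (illustrative): with N = 8, up to 8 elements live in the in-place static
array with no heap traffic; the 9th push_back spills the contents to the dynamic array.

    VmaSmallVector<uint32_t, VmaStlAllocator<uint32_t>, 8> indices(VmaStlAllocator<uint32_t>(pCallbacks));
    for (uint32_t i = 0; i < 8; ++i)
        indices.push_back(i); // No heap allocation yet.
    indices.push_back(8); // Moves the elements to heap-backed storage.
*/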
template<typename T, typename AllocatorT, size_t N>
class VmaSmallVector
{
public:
typedef T value_type;
typedef T* iterator;
VmaSmallVector(const AllocatorT& allocator);
VmaSmallVector(size_t count, const AllocatorT& allocator);
template<typename SrcT, typename SrcAllocatorT, size_t SrcN>
VmaSmallVector(const VmaSmallVector<SrcT, SrcAllocatorT, SrcN>&) = delete;
template<typename SrcT, typename SrcAllocatorT, size_t SrcN>
VmaSmallVector<T, AllocatorT, N>& operator=(const VmaSmallVector<SrcT, SrcAllocatorT, SrcN>&) = delete;
~VmaSmallVector() = default;
bool empty() const { return m_Count == 0; }
size_t size() const { return m_Count; }
T* data() { return m_Count > N ? m_DynamicArray.data() : m_StaticArray; }
T& front() { VMA_HEAVY_ASSERT(m_Count > 0); return data()[0]; }
T& back() { VMA_HEAVY_ASSERT(m_Count > 0); return data()[m_Count - 1]; }
const T* data() const { return m_Count > N ? m_DynamicArray.data() : m_StaticArray; }
const T& front() const { VMA_HEAVY_ASSERT(m_Count > 0); return data()[0]; }
const T& back() const { VMA_HEAVY_ASSERT(m_Count > 0); return data()[m_Count - 1]; }
iterator begin() { return data(); }
iterator end() { return data() + m_Count; }
void pop_front() { VMA_HEAVY_ASSERT(m_Count > 0); remove(0); }
void pop_back() { VMA_HEAVY_ASSERT(m_Count > 0); resize(size() - 1); }
void push_front(const T& src) { insert(0, src); }
void push_back(const T& src);
void resize(size_t newCount, bool freeMemory = false);
void clear(bool freeMemory = false);
void insert(size_t index, const T& src);
void remove(size_t index);
T& operator[](size_t index) { VMA_HEAVY_ASSERT(index < m_Count); return data()[index]; }
const T& operator[](size_t index) const { VMA_HEAVY_ASSERT(index < m_Count); return data()[index]; }
private:
size_t m_Count;
T m_StaticArray[N]; // Used when m_Count <= N
VmaVector<T, AllocatorT> m_DynamicArray; // Used when m_Count > N
};
#ifndef _VMA_SMALL_VECTOR_FUNCTIONS
template<typename T, typename AllocatorT, size_t N>
VmaSmallVector<T, AllocatorT, N>::VmaSmallVector(const AllocatorT& allocator)
: m_Count(0),
m_DynamicArray(allocator) {}
template<typename T, typename AllocatorT, size_t N>
VmaSmallVector<T, AllocatorT, N>::VmaSmallVector(size_t count, const AllocatorT& allocator)
: m_Count(count),
m_DynamicArray(count > N ? count : 0, allocator) {}
template<typename T, typename AllocatorT, size_t N>
void VmaSmallVector<T, AllocatorT, N>::push_back(const T& src)
{
const size_t newIndex = size();
resize(newIndex + 1);
data()[newIndex] = src;
}
template<typename T, typename AllocatorT, size_t N>
void VmaSmallVector<T, AllocatorT, N>::resize(size_t newCount, bool freeMemory)
{
if (newCount > N && m_Count > N)
{
// Any direction, staying in m_DynamicArray
m_DynamicArray.resize(newCount);
if (freeMemory)
{
m_DynamicArray.shrink_to_fit();
}
}
else if (newCount > N && m_Count <= N)
{
// Growing, moving from m_StaticArray to m_DynamicArray
m_DynamicArray.resize(newCount);
if (m_Count > 0)
{
memcpy(m_DynamicArray.data(), m_StaticArray, m_Count * sizeof(T));
}
}
else if (newCount <= N && m_Count > N)
{
// Shrinking, moving from m_DynamicArray to m_StaticArray
if (newCount > 0)
{
memcpy(m_StaticArray, m_DynamicArray.data(), newCount * sizeof(T));
}
m_DynamicArray.resize(0);
if (freeMemory)
{
m_DynamicArray.shrink_to_fit();
}
}
else
{
// Any direction, staying in m_StaticArray - nothing to do here
}
m_Count = newCount;
}
template<typename T, typename AllocatorT, size_t N>
void VmaSmallVector<T, AllocatorT, N>::clear(bool freeMemory)
{
m_DynamicArray.clear();
if (freeMemory)
{
m_DynamicArray.shrink_to_fit();
}
m_Count = 0;
}
template<typename T, typename AllocatorT, size_t N>
void VmaSmallVector<T, AllocatorT, N>::insert(size_t index, const T& src)
{
VMA_HEAVY_ASSERT(index <= m_Count);
const size_t oldCount = size();
resize(oldCount + 1);
T* const dataPtr = data();
if (index < oldCount)
{
// This could be more optimal for the case where the memmove could instead be a memcpy directly from m_StaticArray to m_DynamicArray.
memmove(dataPtr + (index + 1), dataPtr + index, (oldCount - index) * sizeof(T));
}
dataPtr[index] = src;
}
template<typename T, typename AllocatorT, size_t N>
void VmaSmallVector<T, AllocatorT, N>::remove(size_t index)
{
VMA_HEAVY_ASSERT(index < m_Count);
const size_t oldCount = size();
if (index < oldCount - 1)
{
// This could be more optimal for the case where the memmove could instead be a memcpy directly from m_DynamicArray to m_StaticArray.
T* const dataPtr = data();
memmove(dataPtr + index, dataPtr + (index + 1), (oldCount - index - 1) * sizeof(T));
}
resize(oldCount - 1);
}
#endif // _VMA_SMALL_VECTOR_FUNCTIONS
#endif // _VMA_SMALL_VECTOR
#ifndef _VMA_POOL_ALLOCATOR
/*
Allocator for objects of type T, using a list of arrays (pools) to speed up
allocation. The number of elements that can be allocated is not bounded, because
the allocator can create multiple blocks.
*/
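/*
Usage sketch (illustrative, names hypothetical): objects are carved out of growing item
blocks, and Free() returns a slot to the block's free list instead of hitting the system
allocator.

    VmaPoolAllocator<MyItem> pool(pAllocationCallbacks, 32); // First block holds 32 items.
    MyItem* item = pool.Alloc(); // Placement-constructs MyItem in the first free slot.
    pool.Free(item); // Destructs it and pushes the slot back onto the free list.
*/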
template<typename T>
class VmaPoolAllocator
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaPoolAllocator)
public:
VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, uint32_t firstBlockCapacity);
~VmaPoolAllocator();
template<typename... Types> T* Alloc(Types&&... args);
void Free(T* ptr);
private:
union Item
{
uint32_t NextFreeIndex;
alignas(T) char Value[sizeof(T)];
};
struct ItemBlock
{
Item* pItems;
uint32_t Capacity;
uint32_t FirstFreeIndex;
};
const VkAllocationCallbacks* m_pAllocationCallbacks;
const uint32_t m_FirstBlockCapacity;
VmaVector<ItemBlock, VmaStlAllocator<ItemBlock>> m_ItemBlocks;
ItemBlock& CreateNewBlock();
};
#ifndef _VMA_POOL_ALLOCATOR_FUNCTIONS
template<typename T>
VmaPoolAllocator<T>::VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, uint32_t firstBlockCapacity)
: m_pAllocationCallbacks(pAllocationCallbacks),
m_FirstBlockCapacity(firstBlockCapacity),
m_ItemBlocks(VmaStlAllocator<ItemBlock>(pAllocationCallbacks))
{
VMA_ASSERT(m_FirstBlockCapacity > 1);
}
template<typename T>
VmaPoolAllocator<T>::~VmaPoolAllocator()
{
for (size_t i = m_ItemBlocks.size(); i--;)
vma_delete_array(m_pAllocationCallbacks, m_ItemBlocks[i].pItems, m_ItemBlocks[i].Capacity);
m_ItemBlocks.clear();
}
template<typename T>
template<typename... Types> T* VmaPoolAllocator<T>::Alloc(Types&&... args)
{
for (size_t i = m_ItemBlocks.size(); i--; )
{
ItemBlock& block = m_ItemBlocks[i];
// This block has some free items: Use first one.
if (block.FirstFreeIndex != UINT32_MAX)
{
Item* const pItem = &block.pItems[block.FirstFreeIndex];
block.FirstFreeIndex = pItem->NextFreeIndex;
T* result = (T*)&pItem->Value;
new(result)T(std::forward<Types>(args)...); // Explicit constructor call.
return result;
}
}
// No block has a free item: create a new one and use it.
ItemBlock& newBlock = CreateNewBlock();
Item* const pItem = &newBlock.pItems[0];
newBlock.FirstFreeIndex = pItem->NextFreeIndex;
T* result = (T*)&pItem->Value;
new(result) T(std::forward<Types>(args)...); // Explicit constructor call.
return result;
}
template<typename T>
void VmaPoolAllocator<T>::Free(T* ptr)
{
// Search all memory blocks to find ptr.
for (size_t i = m_ItemBlocks.size(); i--; )
{
ItemBlock& block = m_ItemBlocks[i];
// Casting to union.
Item* pItemPtr;
memcpy(&pItemPtr, &ptr, sizeof(pItemPtr));
// Check if pItemPtr is in address range of this block.
if ((pItemPtr >= block.pItems) && (pItemPtr < block.pItems + block.Capacity))
{
ptr->~T(); // Explicit destructor call.
const uint32_t index = static_cast<uint32_t>(pItemPtr - block.pItems);
pItemPtr->NextFreeIndex = block.FirstFreeIndex;
block.FirstFreeIndex = index;
return;
}
}
VMA_ASSERT(0 && "Pointer doesn't belong to this memory pool.");
}
template<typename T>
typename VmaPoolAllocator<T>::ItemBlock& VmaPoolAllocator<T>::CreateNewBlock()
{
const uint32_t newBlockCapacity = m_ItemBlocks.empty() ?
m_FirstBlockCapacity : m_ItemBlocks.back().Capacity * 3 / 2;
const ItemBlock newBlock =
{
vma_new_array(m_pAllocationCallbacks, Item, newBlockCapacity),
newBlockCapacity,
0
};
m_ItemBlocks.push_back(newBlock);
// Setup singly-linked list of all free items in this block.
for (uint32_t i = 0; i < newBlockCapacity - 1; ++i)
newBlock.pItems[i].NextFreeIndex = i + 1;
newBlock.pItems[newBlockCapacity - 1].NextFreeIndex = UINT32_MAX;
return m_ItemBlocks.back();
}
#endif // _VMA_POOL_ALLOCATOR_FUNCTIONS
#endif // _VMA_POOL_ALLOCATOR
#ifndef _VMA_RAW_LIST
template<typename T>
struct VmaListItem
{
VmaListItem* pPrev;
VmaListItem* pNext;
T Value;
};
// Doubly linked list.
template<typename T>
class VmaRawList
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaRawList)
public:
typedef VmaListItem<T> ItemType;
VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks);
// Intentionally not calling Clear, because that would require unnecessary
// computation to return all items to m_ItemAllocator as free.
~VmaRawList() = default;
size_t GetCount() const { return m_Count; }
bool IsEmpty() const { return m_Count == 0; }
ItemType* Front() { return m_pFront; }
ItemType* Back() { return m_pBack; }
const ItemType* Front() const { return m_pFront; }
const ItemType* Back() const { return m_pBack; }
ItemType* PushFront();
ItemType* PushBack();
ItemType* PushFront(const T& value);
ItemType* PushBack(const T& value);
void PopFront();
void PopBack();
// Item can be null - it means PushBack.
ItemType* InsertBefore(ItemType* pItem);
// Item can be null - it means PushFront.
ItemType* InsertAfter(ItemType* pItem);
ItemType* InsertBefore(ItemType* pItem, const T& value);
ItemType* InsertAfter(ItemType* pItem, const T& value);
void Clear();
void Remove(ItemType* pItem);
private:
const VkAllocationCallbacks* const m_pAllocationCallbacks;
VmaPoolAllocator<ItemType> m_ItemAllocator;
ItemType* m_pFront;
ItemType* m_pBack;
size_t m_Count;
};
#ifndef _VMA_RAW_LIST_FUNCTIONS
template<typename T>
VmaRawList<T>::VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks)
: m_pAllocationCallbacks(pAllocationCallbacks),
m_ItemAllocator(pAllocationCallbacks, 128),
m_pFront(VMA_NULL),
m_pBack(VMA_NULL),
m_Count(0) {}
template<typename T>
VmaListItem<T>* VmaRawList<T>::PushFront()
{
ItemType* const pNewItem = m_ItemAllocator.Alloc();
pNewItem->pPrev = VMA_NULL;
if (IsEmpty())
{
pNewItem->pNext = VMA_NULL;
m_pFront = pNewItem;
m_pBack = pNewItem;
m_Count = 1;
}
else
{
pNewItem->pNext = m_pFront;
m_pFront->pPrev = pNewItem;
m_pFront = pNewItem;
++m_Count;
}
return pNewItem;
}
template<typename T>
VmaListItem<T>* VmaRawList<T>::PushBack()
{
ItemType* const pNewItem = m_ItemAllocator.Alloc();
pNewItem->pNext = VMA_NULL;
if(IsEmpty())
{
pNewItem->pPrev = VMA_NULL;
m_pFront = pNewItem;
m_pBack = pNewItem;
m_Count = 1;
}
else
{
pNewItem->pPrev = m_pBack;
m_pBack->pNext = pNewItem;
m_pBack = pNewItem;
++m_Count;
}
return pNewItem;
}
template<typename T>
VmaListItem<T>* VmaRawList<T>::PushFront(const T& value)
{
ItemType* const pNewItem = PushFront();
pNewItem->Value = value;
return pNewItem;
}
template<typename T>
VmaListItem<T>* VmaRawList<T>::PushBack(const T& value)
{
ItemType* const pNewItem = PushBack();
pNewItem->Value = value;
return pNewItem;
}
template<typename T>
void VmaRawList<T>::PopFront()
{
VMA_HEAVY_ASSERT(m_Count > 0);
ItemType* const pFrontItem = m_pFront;
ItemType* const pNextItem = pFrontItem->pNext;
if (pNextItem != VMA_NULL)
{
pNextItem->pPrev = VMA_NULL;
}
m_pFront = pNextItem;
m_ItemAllocator.Free(pFrontItem);
--m_Count;
}
template<typename T>
void VmaRawList<T>::PopBack()
{
VMA_HEAVY_ASSERT(m_Count > 0);
ItemType* const pBackItem = m_pBack;
ItemType* const pPrevItem = pBackItem->pPrev;
if(pPrevItem != VMA_NULL)
{
pPrevItem->pNext = VMA_NULL;
}
m_pBack = pPrevItem;
m_ItemAllocator.Free(pBackItem);
--m_Count;
}
template<typename T>
void VmaRawList<T>::Clear()
{
if (!IsEmpty())
{
ItemType* pItem = m_pBack;
while (pItem != VMA_NULL)
{
ItemType* const pPrevItem = pItem->pPrev;
m_ItemAllocator.Free(pItem);
pItem = pPrevItem;
}
m_pFront = VMA_NULL;
m_pBack = VMA_NULL;
m_Count = 0;
}
}
template<typename T>
void VmaRawList<T>::Remove(ItemType* pItem)
{
VMA_HEAVY_ASSERT(pItem != VMA_NULL);
VMA_HEAVY_ASSERT(m_Count > 0);
if(pItem->pPrev != VMA_NULL)
{
pItem->pPrev->pNext = pItem->pNext;
}
else
{
VMA_HEAVY_ASSERT(m_pFront == pItem);
m_pFront = pItem->pNext;
}
if(pItem->pNext != VMA_NULL)
{
pItem->pNext->pPrev = pItem->pPrev;
}
else
{
VMA_HEAVY_ASSERT(m_pBack == pItem);
m_pBack = pItem->pPrev;
}
m_ItemAllocator.Free(pItem);
--m_Count;
}
template<typename T>
VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem)
{
if(pItem != VMA_NULL)
{
ItemType* const prevItem = pItem->pPrev;
ItemType* const newItem = m_ItemAllocator.Alloc();
newItem->pPrev = prevItem;
newItem->pNext = pItem;
pItem->pPrev = newItem;
if(prevItem != VMA_NULL)
{
prevItem->pNext = newItem;
}
else
{
VMA_HEAVY_ASSERT(m_pFront == pItem);
m_pFront = newItem;
}
++m_Count;
return newItem;
}
else
return PushBack();
}
template<typename T>
VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem)
{
if(pItem != VMA_NULL)
{
ItemType* const nextItem = pItem->pNext;
ItemType* const newItem = m_ItemAllocator.Alloc();
newItem->pNext = nextItem;
newItem->pPrev = pItem;
pItem->pNext = newItem;
if(nextItem != VMA_NULL)
{
nextItem->pPrev = newItem;
}
else
{
VMA_HEAVY_ASSERT(m_pBack == pItem);
m_pBack = newItem;
}
++m_Count;
return newItem;
}
else
return PushFront();
}
template<typename T>
VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem, const T& value)
{
ItemType* const newItem = InsertBefore(pItem);
newItem->Value = value;
return newItem;
}
template<typename T>
VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem, const T& value)
{
ItemType* const newItem = InsertAfter(pItem);
newItem->Value = value;
return newItem;
}
#endif // _VMA_RAW_LIST_FUNCTIONS
#endif // _VMA_RAW_LIST
#ifndef _VMA_LIST
template<typename T, typename AllocatorT>
class VmaList
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaList)
public:
class reverse_iterator;
class const_iterator;
class const_reverse_iterator;
class iterator
{
friend class const_iterator;
friend class VmaList<T, AllocatorT>;
public:
iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
iterator(const reverse_iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }
bool operator==(const iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
bool operator!=(const iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }
iterator operator++(int) { iterator result = *this; ++*this; return result; }
iterator operator--(int) { iterator result = *this; --*this; return result; }
iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pNext; return *this; }
iterator& operator--();
private:
VmaRawList<T>* m_pList;
VmaListItem<T>* m_pItem;
iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
};
class reverse_iterator
{
friend class const_reverse_iterator;
friend class VmaList<T, AllocatorT>;
public:
reverse_iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
reverse_iterator(const iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }
bool operator==(const reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
bool operator!=(const reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }
reverse_iterator operator++(int) { reverse_iterator result = *this; ++* this; return result; }
reverse_iterator operator--(int) { reverse_iterator result = *this; --* this; return result; }
reverse_iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pPrev; return *this; }
reverse_iterator& operator--();
private:
VmaRawList<T>* m_pList;
VmaListItem<T>* m_pItem;
reverse_iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
};
class const_iterator
{
friend class VmaList<T, AllocatorT>;
public:
const_iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
const_iterator(const iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
const_iterator(const reverse_iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
iterator drop_const() { return { const_cast<VmaRawList<T>*>(m_pList), const_cast<VmaListItem<T>*>(m_pItem) }; }
const T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
const T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }
bool operator==(const const_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
bool operator!=(const const_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }
const_iterator operator++(int) { const_iterator result = *this; ++* this; return result; }
const_iterator operator--(int) { const_iterator result = *this; --* this; return result; }
const_iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pNext; return *this; }
const_iterator& operator--();
private:
const VmaRawList<T>* m_pList;
const VmaListItem<T>* m_pItem;
const_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
};
class const_reverse_iterator
{
friend class VmaList<T, AllocatorT>;
public:
const_reverse_iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
const_reverse_iterator(const reverse_iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
const_reverse_iterator(const iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
reverse_iterator drop_const() { return { const_cast<VmaRawList<T>*>(m_pList), const_cast<VmaListItem<T>*>(m_pItem) }; }
const T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
const T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }
bool operator==(const const_reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
bool operator!=(const const_reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }
const_reverse_iterator operator++(int) { const_reverse_iterator result = *this; ++* this; return result; }
const_reverse_iterator operator--(int) { const_reverse_iterator result = *this; --* this; return result; }
const_reverse_iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pPrev; return *this; }
const_reverse_iterator& operator--();
private:
const VmaRawList<T>* m_pList;
const VmaListItem<T>* m_pItem;
const_reverse_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
};
VmaList(const AllocatorT& allocator) : m_RawList(allocator.m_pCallbacks) {}
bool empty() const { return m_RawList.IsEmpty(); }
size_t size() const { return m_RawList.GetCount(); }
iterator begin() { return iterator(&m_RawList, m_RawList.Front()); }
iterator end() { return iterator(&m_RawList, VMA_NULL); }
const_iterator cbegin() const { return const_iterator(&m_RawList, m_RawList.Front()); }
const_iterator cend() const { return const_iterator(&m_RawList, VMA_NULL); }
const_iterator begin() const { return cbegin(); }
const_iterator end() const { return cend(); }
reverse_iterator rbegin() { return reverse_iterator(&m_RawList, m_RawList.Back()); }
reverse_iterator rend() { return reverse_iterator(&m_RawList, VMA_NULL); }
const_reverse_iterator crbegin() const { return const_reverse_iterator(&m_RawList, m_RawList.Back()); }
const_reverse_iterator crend() const { return const_reverse_iterator(&m_RawList, VMA_NULL); }
const_reverse_iterator rbegin() const { return crbegin(); }
const_reverse_iterator rend() const { return crend(); }
void push_back(const T& value) { m_RawList.PushBack(value); }
iterator insert(iterator it, const T& value) { return iterator(&m_RawList, m_RawList.InsertBefore(it.m_pItem, value)); }
void clear() { m_RawList.Clear(); }
void erase(iterator it) { m_RawList.Remove(it.m_pItem); }
private:
VmaRawList<T> m_RawList;
};
#ifndef _VMA_LIST_FUNCTIONS
template<typename T, typename AllocatorT>
typename VmaList<T, AllocatorT>::iterator& VmaList<T, AllocatorT>::iterator::operator--()
{
if (m_pItem != VMA_NULL)
{
m_pItem = m_pItem->pPrev;
}
else
{
VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
m_pItem = m_pList->Back();
}
return *this;
}
template<typename T, typename AllocatorT>
typename VmaList<T, AllocatorT>::reverse_iterator& VmaList<T, AllocatorT>::reverse_iterator::operator--()
{
if (m_pItem != VMA_NULL)
{
m_pItem = m_pItem->pNext;
}
else
{
VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
m_pItem = m_pList->Front();
}
return *this;
}
template<typename T, typename AllocatorT>
typename VmaList<T, AllocatorT>::const_iterator& VmaList<T, AllocatorT>::const_iterator::operator--()
{
if (m_pItem != VMA_NULL)
{
m_pItem = m_pItem->pPrev;
}
else
{
VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
m_pItem = m_pList->Back();
}
return *this;
}
template<typename T, typename AllocatorT>
typename VmaList<T, AllocatorT>::const_reverse_iterator& VmaList<T, AllocatorT>::const_reverse_iterator::operator--()
{
if (m_pItem != VMA_NULL)
{
m_pItem = m_pItem->pNext;
}
else
{
VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
m_pItem = m_pList->Front();
}
return *this;
}
#endif // _VMA_LIST_FUNCTIONS
#endif // _VMA_LIST
#ifndef _VMA_INTRUSIVE_LINKED_LIST
/*
Expected interface of ItemTypeTraits:
struct MyItemTypeTraits
{
typedef MyItem ItemType;
static ItemType* GetPrev(const ItemType* item) { return item->myPrevPtr; }
static ItemType* GetNext(const ItemType* item) { return item->myNextPtr; }
static ItemType*& AccessPrev(ItemType* item) { return item->myPrevPtr; }
static ItemType*& AccessNext(ItemType* item) { return item->myNextPtr; }
};
*/
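/*
A minimal usage sketch (illustrative only): MyItem matches the hypothetical
traits above - the item type embeds its own prev/next pointers.

struct MyItem
{
    MyItem* myPrevPtr = VMA_NULL;
    MyItem* myNextPtr = VMA_NULL;
};
VmaIntrusiveLinkedList<MyItemTypeTraits> list;
MyItem a, b;
list.PushBack(&a);
list.InsertAfter(&a, &b); // List is now: a, b.
list.RemoveAll(); // The destructor asserts that the list is empty.
*/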
template<typename ItemTypeTraits>
class VmaIntrusiveLinkedList
{
public:
typedef typename ItemTypeTraits::ItemType ItemType;
static ItemType* GetPrev(const ItemType* item) { return ItemTypeTraits::GetPrev(item); }
static ItemType* GetNext(const ItemType* item) { return ItemTypeTraits::GetNext(item); }
// Movable, not copyable.
VmaIntrusiveLinkedList() = default;
VmaIntrusiveLinkedList(VmaIntrusiveLinkedList && src);
VmaIntrusiveLinkedList(const VmaIntrusiveLinkedList&) = delete;
VmaIntrusiveLinkedList& operator=(VmaIntrusiveLinkedList&& src);
VmaIntrusiveLinkedList& operator=(const VmaIntrusiveLinkedList&) = delete;
~VmaIntrusiveLinkedList() { VMA_HEAVY_ASSERT(IsEmpty()); }
size_t GetCount() const { return m_Count; }
bool IsEmpty() const { return m_Count == 0; }
ItemType* Front() { return m_Front; }
ItemType* Back() { return m_Back; }
const ItemType* Front() const { return m_Front; }
const ItemType* Back() const { return m_Back; }
void PushBack(ItemType* item);
void PushFront(ItemType* item);
ItemType* PopBack();
ItemType* PopFront();
// existingItem can be null, which means PushBack.
void InsertBefore(ItemType* existingItem, ItemType* newItem);
// existingItem can be null, which means PushFront.
void InsertAfter(ItemType* existingItem, ItemType* newItem);
void Remove(ItemType* item);
void RemoveAll();
private:
ItemType* m_Front = VMA_NULL;
ItemType* m_Back = VMA_NULL;
size_t m_Count = 0;
};
#ifndef _VMA_INTRUSIVE_LINKED_LIST_FUNCTIONS
template<typename ItemTypeTraits>
VmaIntrusiveLinkedList<ItemTypeTraits>::VmaIntrusiveLinkedList(VmaIntrusiveLinkedList&& src)
: m_Front(src.m_Front), m_Back(src.m_Back), m_Count(src.m_Count)
{
src.m_Front = src.m_Back = VMA_NULL;
src.m_Count = 0;
}
template<typename ItemTypeTraits>
VmaIntrusiveLinkedList<ItemTypeTraits>& VmaIntrusiveLinkedList<ItemTypeTraits>::operator=(VmaIntrusiveLinkedList&& src)
{
if (&src != this)
{
VMA_HEAVY_ASSERT(IsEmpty());
m_Front = src.m_Front;
m_Back = src.m_Back;
m_Count = src.m_Count;
src.m_Front = src.m_Back = VMA_NULL;
src.m_Count = 0;
}
return *this;
}
template<typename ItemTypeTraits>
void VmaIntrusiveLinkedList<ItemTypeTraits>::PushBack(ItemType* item)
{
VMA_HEAVY_ASSERT(ItemTypeTraits::GetPrev(item) == VMA_NULL && ItemTypeTraits::GetNext(item) == VMA_NULL);
if (IsEmpty())
{
m_Front = item;
m_Back = item;
m_Count = 1;
}
else
{
ItemTypeTraits::AccessPrev(item) = m_Back;
ItemTypeTraits::AccessNext(m_Back) = item;
m_Back = item;
++m_Count;
}
}
template<typename ItemTypeTraits>
void VmaIntrusiveLinkedList<ItemTypeTraits>::PushFront(ItemType* item)
{
VMA_HEAVY_ASSERT(ItemTypeTraits::GetPrev(item) == VMA_NULL && ItemTypeTraits::GetNext(item) == VMA_NULL);
if (IsEmpty())
{
m_Front = item;
m_Back = item;
m_Count = 1;
}
else
{
ItemTypeTraits::AccessNext(item) = m_Front;
ItemTypeTraits::AccessPrev(m_Front) = item;
m_Front = item;
++m_Count;
}
}
template<typename ItemTypeTraits>
typename VmaIntrusiveLinkedList<ItemTypeTraits>::ItemType* VmaIntrusiveLinkedList<ItemTypeTraits>::PopBack()
{
VMA_HEAVY_ASSERT(m_Count > 0);
ItemType* const backItem = m_Back;
ItemType* const prevItem = ItemTypeTraits::GetPrev(backItem);
if (prevItem != VMA_NULL)
{
ItemTypeTraits::AccessNext(prevItem) = VMA_NULL;
}
m_Back = prevItem;
--m_Count;
ItemTypeTraits::AccessPrev(backItem) = VMA_NULL;
ItemTypeTraits::AccessNext(backItem) = VMA_NULL;
return backItem;
}
template<typename ItemTypeTraits>
typename VmaIntrusiveLinkedList<ItemTypeTraits>::ItemType* VmaIntrusiveLinkedList<ItemTypeTraits>::PopFront()
{
VMA_HEAVY_ASSERT(m_Count > 0);
ItemType* const frontItem = m_Front;
ItemType* const nextItem = ItemTypeTraits::GetNext(frontItem);
if (nextItem != VMA_NULL)
{
ItemTypeTraits::AccessPrev(nextItem) = VMA_NULL;
}
m_Front = nextItem;
--m_Count;
ItemTypeTraits::AccessPrev(frontItem) = VMA_NULL;
ItemTypeTraits::AccessNext(frontItem) = VMA_NULL;
return frontItem;
}
template<typename ItemTypeTraits>
void VmaIntrusiveLinkedList<ItemTypeTraits>::InsertBefore(ItemType* existingItem, ItemType* newItem)
{
VMA_HEAVY_ASSERT(newItem != VMA_NULL && ItemTypeTraits::GetPrev(newItem) == VMA_NULL && ItemTypeTraits::GetNext(newItem) == VMA_NULL);
if (existingItem != VMA_NULL)
{
ItemType* const prevItem = ItemTypeTraits::GetPrev(existingItem);
ItemTypeTraits::AccessPrev(newItem) = prevItem;
ItemTypeTraits::AccessNext(newItem) = existingItem;
ItemTypeTraits::AccessPrev(existingItem) = newItem;
if (prevItem != VMA_NULL)
{
ItemTypeTraits::AccessNext(prevItem) = newItem;
}
else
{
VMA_HEAVY_ASSERT(m_Front == existingItem);
m_Front = newItem;
}
++m_Count;
}
else
PushBack(newItem);
}
template<typename ItemTypeTraits>
void VmaIntrusiveLinkedList<ItemTypeTraits>::InsertAfter(ItemType* existingItem, ItemType* newItem)
{
VMA_HEAVY_ASSERT(newItem != VMA_NULL && ItemTypeTraits::GetPrev(newItem) == VMA_NULL && ItemTypeTraits::GetNext(newItem) == VMA_NULL);
if (existingItem != VMA_NULL)
{
ItemType* const nextItem = ItemTypeTraits::GetNext(existingItem);
ItemTypeTraits::AccessNext(newItem) = nextItem;
ItemTypeTraits::AccessPrev(newItem) = existingItem;
ItemTypeTraits::AccessNext(existingItem) = newItem;
if (nextItem != VMA_NULL)
{
ItemTypeTraits::AccessPrev(nextItem) = newItem;
}
else
{
VMA_HEAVY_ASSERT(m_Back == existingItem);
m_Back = newItem;
}
++m_Count;
}
else
PushFront(newItem);
}
template<typename ItemTypeTraits>
void VmaIntrusiveLinkedList<ItemTypeTraits>::Remove(ItemType* item)
{
VMA_HEAVY_ASSERT(item != VMA_NULL && m_Count > 0);
if (ItemTypeTraits::GetPrev(item) != VMA_NULL)
{
ItemTypeTraits::AccessNext(ItemTypeTraits::AccessPrev(item)) = ItemTypeTraits::GetNext(item);
}
else
{
VMA_HEAVY_ASSERT(m_Front == item);
m_Front = ItemTypeTraits::GetNext(item);
}
if (ItemTypeTraits::GetNext(item) != VMA_NULL)
{
ItemTypeTraits::AccessPrev(ItemTypeTraits::AccessNext(item)) = ItemTypeTraits::GetPrev(item);
}
else
{
VMA_HEAVY_ASSERT(m_Back == item);
m_Back = ItemTypeTraits::GetPrev(item);
}
ItemTypeTraits::AccessPrev(item) = VMA_NULL;
ItemTypeTraits::AccessNext(item) = VMA_NULL;
--m_Count;
}
template<typename ItemTypeTraits>
void VmaIntrusiveLinkedList<ItemTypeTraits>::RemoveAll()
{
if (!IsEmpty())
{
ItemType* item = m_Back;
while (item != VMA_NULL)
{
ItemType* const prevItem = ItemTypeTraits::GetPrev(item);
ItemTypeTraits::AccessPrev(item) = VMA_NULL;
ItemTypeTraits::AccessNext(item) = VMA_NULL;
item = prevItem;
}
m_Front = VMA_NULL;
m_Back = VMA_NULL;
m_Count = 0;
}
}
#endif // _VMA_INTRUSIVE_LINKED_LIST_FUNCTIONS
#endif // _VMA_INTRUSIVE_LINKED_LIST
// Unused in this version.
#if 0
#ifndef _VMA_PAIR
template<typename T1, typename T2>
struct VmaPair
{
T1 first;
T2 second;
VmaPair() : first(), second() {}
VmaPair(const T1& firstSrc, const T2& secondSrc) : first(firstSrc), second(secondSrc) {}
};
template<typename FirstT, typename SecondT>
struct VmaPairFirstLess
{
bool operator()(const VmaPair<FirstT, SecondT>& lhs, const VmaPair<FirstT, SecondT>& rhs) const
{
return lhs.first < rhs.first;
}
bool operator()(const VmaPair<FirstT, SecondT>& lhs, const FirstT& rhsFirst) const
{
return lhs.first < rhsFirst;
}
};
#endif // _VMA_PAIR
#ifndef _VMA_MAP
/* Class compatible with a subset of the interface of std::unordered_map.
KeyT, ValueT must be POD because they will be stored in VmaVector.
*/
template<typename KeyT, typename ValueT>
class VmaMap
{
public:
typedef VmaPair<KeyT, ValueT> PairType;
typedef PairType* iterator;
VmaMap(const VmaStlAllocator<PairType>& allocator) : m_Vector(allocator) {}
iterator begin() { return m_Vector.begin(); }
iterator end() { return m_Vector.end(); }
size_t size() { return m_Vector.size(); }
void insert(const PairType& pair);
iterator find(const KeyT& key);
void erase(iterator it);
private:
VmaVector< PairType, VmaStlAllocator<PairType>> m_Vector;
};
#ifndef _VMA_MAP_FUNCTIONS
template<typename KeyT, typename ValueT>
void VmaMap<KeyT, ValueT>::insert(const PairType& pair)
{
const size_t indexToInsert = VmaBinaryFindFirstNotLess(
m_Vector.data(),
m_Vector.data() + m_Vector.size(),
pair,
VmaPairFirstLess<KeyT, ValueT>()) - m_Vector.data();
VmaVectorInsert(m_Vector, indexToInsert, pair);
}
template<typename KeyT, typename ValueT>
VmaPair<KeyT, ValueT>* VmaMap<KeyT, ValueT>::find(const KeyT& key)
{
PairType* it = VmaBinaryFindFirstNotLess(
m_Vector.data(),
m_Vector.data() + m_Vector.size(),
key,
VmaPairFirstLess<KeyT, ValueT>());
if ((it != m_Vector.end()) && (it->first == key))
{
return it;
}
else
{
return m_Vector.end();
}
}
template<typename KeyT, typename ValueT>
void VmaMap<KeyT, ValueT>::erase(iterator it)
{
VmaVectorRemove(m_Vector, it - m_Vector.begin());
}
#endif // _VMA_MAP_FUNCTIONS
#endif // _VMA_MAP
#endif // #if 0
#if !defined(_VMA_STRING_BUILDER) && VMA_STATS_STRING_ENABLED
class VmaStringBuilder
{
public:
VmaStringBuilder(const VkAllocationCallbacks* allocationCallbacks) : m_Data(VmaStlAllocator<char>(allocationCallbacks)) {}
~VmaStringBuilder() = default;
size_t GetLength() const { return m_Data.size(); }
const char* GetData() const { return m_Data.data(); }
void AddNewLine() { Add('\n'); }
void Add(char ch) { m_Data.push_back(ch); }
void Add(const char* pStr);
void AddNumber(uint32_t num);
void AddNumber(uint64_t num);
void AddPointer(const void* ptr);
private:
VmaVector<char, VmaStlAllocator<char>> m_Data;
};
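/*
Illustrative sketch (allocCallbacks stands for any valid, possibly null,
VkAllocationCallbacks pointer):

VmaStringBuilder sb(allocCallbacks);
sb.Add("Count: ");
sb.AddNumber(42u);
sb.AddNewLine();
// Note: GetData() is not null-terminated; pair it with GetLength().
*/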
#ifndef _VMA_STRING_BUILDER_FUNCTIONS
void VmaStringBuilder::Add(const char* pStr)
{
const size_t strLen = strlen(pStr);
if (strLen > 0)
{
const size_t oldCount = m_Data.size();
m_Data.resize(oldCount + strLen);
memcpy(m_Data.data() + oldCount, pStr, strLen);
}
}
void VmaStringBuilder::AddNumber(uint32_t num)
{
char buf[11];
buf[10] = '\0';
char* p = &buf[10];
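// Write decimal digits backward from the end of buf; p ends at the most significant digit.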
do
{
*--p = '0' + (char)(num % 10);
num /= 10;
} while (num);
Add(p);
}
void VmaStringBuilder::AddNumber(uint64_t num)
{
char buf[21];
buf[20] = '\0';
char* p = &buf[20];
do
{
*--p = '0' + (char)(num % 10);
num /= 10;
} while (num);
Add(p);
}
void VmaStringBuilder::AddPointer(const void* ptr)
{
char buf[21];
VmaPtrToStr(buf, sizeof(buf), ptr);
Add(buf);
}
#endif //_VMA_STRING_BUILDER_FUNCTIONS
#endif // _VMA_STRING_BUILDER
#if !defined(_VMA_JSON_WRITER) && VMA_STATS_STRING_ENABLED
/*
Conveniently builds a correct JSON document, writing it to the
VmaStringBuilder passed to the constructor.
*/
class VmaJsonWriter
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaJsonWriter)
public:
// sb - string builder to write the document to. Must remain alive for the whole lifetime of this object.
VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb);
~VmaJsonWriter();
// Begins object by writing "{".
// Inside an object, you must call pairs of WriteString and a value, e.g.:
// j.BeginObject(true); j.WriteString("A"); j.WriteNumber(1); j.WriteString("B"); j.WriteNumber(2); j.EndObject();
// Will write: { "A": 1, "B": 2 }
void BeginObject(bool singleLine = false);
// Ends object by writing "}".
void EndObject();
// Begins array by writing "[".
// Inside an array, you can write a sequence of any values.
void BeginArray(bool singleLine = false);
// Ends array by writing "]".
void EndArray();
// Writes a string value inside "".
// pStr can contain any ANSI characters, including '"', new line etc. - they will be properly escaped.
void WriteString(const char* pStr);
// Begins writing a string value.
// Call BeginString, ContinueString, ContinueString, ..., EndString instead of
// WriteString to conveniently build the string content incrementally, made of
// parts including numbers.
void BeginString(const char* pStr = VMA_NULL);
// Posts next part of an open string.
void ContinueString(const char* pStr);
// Posts next part of an open string. The number is converted to decimal characters.
void ContinueString(uint32_t n);
void ContinueString(uint64_t n);
// Posts next part of an open string. Pointer value is converted to characters
// using "%p" formatting - shown as hexadecimal number, e.g.: 000000081276Ad00
void ContinueString_Pointer(const void* ptr);
// Ends writing a string value by writing '"'.
void EndString(const char* pStr = VMA_NULL);
// Writes a number value.
void WriteNumber(uint32_t n);
void WriteNumber(uint64_t n);
// Writes a boolean value - false or true.
void WriteBool(bool b);
// Writes a null value.
void WriteNull();
private:
enum COLLECTION_TYPE
{
COLLECTION_TYPE_OBJECT,
COLLECTION_TYPE_ARRAY,
};
struct StackItem
{
COLLECTION_TYPE type;
uint32_t valueCount;
bool singleLineMode;
};
static const char* const INDENT;
VmaStringBuilder& m_SB;
VmaVector< StackItem, VmaStlAllocator<StackItem> > m_Stack;
bool m_InsideString;
void BeginValue(bool isString);
void WriteIndent(bool oneLess = false);
};
const char* const VmaJsonWriter::INDENT = " ";
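/*
Illustrative sketch of building a document (sb is a VmaStringBuilder created
elsewhere; allocCallbacks stands for any valid VkAllocationCallbacks pointer):

VmaJsonWriter json(allocCallbacks, sb);
json.BeginObject();
json.WriteString("Name");    // Key - inside an object, every odd write must be a string.
json.WriteString("Block 1"); // Value.
json.WriteString("Sizes");
json.BeginArray(true);       // Single-line array.
json.WriteNumber(256u);
json.WriteNumber(1024u);
json.EndArray();
json.EndObject();
// Result (modulo line breaks and indentation): { "Name": "Block 1", "Sizes": [256, 1024] }
*/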
#ifndef _VMA_JSON_WRITER_FUNCTIONS
VmaJsonWriter::VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb)
: m_SB(sb),
m_Stack(VmaStlAllocator<StackItem>(pAllocationCallbacks)),
m_InsideString(false) {}
VmaJsonWriter::~VmaJsonWriter()
{
VMA_ASSERT(!m_InsideString);
VMA_ASSERT(m_Stack.empty());
}
void VmaJsonWriter::BeginObject(bool singleLine)
{
VMA_ASSERT(!m_InsideString);
BeginValue(false);
m_SB.Add('{');
StackItem item;
item.type = COLLECTION_TYPE_OBJECT;
item.valueCount = 0;
item.singleLineMode = singleLine;
m_Stack.push_back(item);
}
void VmaJsonWriter::EndObject()
{
VMA_ASSERT(!m_InsideString);
WriteIndent(true);
m_SB.Add('}');
VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_OBJECT);
m_Stack.pop_back();
}
void VmaJsonWriter::BeginArray(bool singleLine)
{
VMA_ASSERT(!m_InsideString);
BeginValue(false);
m_SB.Add('[');
StackItem item;
item.type = COLLECTION_TYPE_ARRAY;
item.valueCount = 0;
item.singleLineMode = singleLine;
m_Stack.push_back(item);
}
void VmaJsonWriter::EndArray()
{
VMA_ASSERT(!m_InsideString);
WriteIndent(true);
m_SB.Add(']');
VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_ARRAY);
m_Stack.pop_back();
}
void VmaJsonWriter::WriteString(const char* pStr)
{
BeginString(pStr);
EndString();
}
void VmaJsonWriter::BeginString(const char* pStr)
{
VMA_ASSERT(!m_InsideString);
BeginValue(true);
m_SB.Add('"');
m_InsideString = true;
if (pStr != VMA_NULL && pStr[0] != '\0')
{
ContinueString(pStr);
}
}
void VmaJsonWriter::ContinueString(const char* pStr)
{
VMA_ASSERT(m_InsideString);
const size_t strLen = strlen(pStr);
for (size_t i = 0; i < strLen; ++i)
{
char ch = pStr[i];
if (ch == '\\')
{
m_SB.Add("\\\\");
}
else if (ch == '"')
{
m_SB.Add("\\\"");
}
else if (ch >= 32)
{
m_SB.Add(ch);
}
else switch (ch)
{
case '\b':
m_SB.Add("\\b");
break;
case '\f':
m_SB.Add("\\f");
break;
case '\n':
m_SB.Add("\\n");
break;
case '\r':
m_SB.Add("\\r");
break;
case '\t':
m_SB.Add("\\t");
break;
default:
VMA_ASSERT(0 && "Character not currently supported.");
}
}
}
void VmaJsonWriter::ContinueString(uint32_t n)
{
VMA_ASSERT(m_InsideString);
m_SB.AddNumber(n);
}
void VmaJsonWriter::ContinueString(uint64_t n)
{
VMA_ASSERT(m_InsideString);
m_SB.AddNumber(n);
}
void VmaJsonWriter::ContinueString_Pointer(const void* ptr)
{
VMA_ASSERT(m_InsideString);
m_SB.AddPointer(ptr);
}
void VmaJsonWriter::EndString(const char* pStr)
{
VMA_ASSERT(m_InsideString);
if (pStr != VMA_NULL && pStr[0] != '\0')
{
ContinueString(pStr);
}
m_SB.Add('"');
m_InsideString = false;
}
void VmaJsonWriter::WriteNumber(uint32_t n)
{
VMA_ASSERT(!m_InsideString);
BeginValue(false);
m_SB.AddNumber(n);
}
void VmaJsonWriter::WriteNumber(uint64_t n)
{
VMA_ASSERT(!m_InsideString);
BeginValue(false);
m_SB.AddNumber(n);
}
void VmaJsonWriter::WriteBool(bool b)
{
VMA_ASSERT(!m_InsideString);
BeginValue(false);
m_SB.Add(b ? "true" : "false");
}
void VmaJsonWriter::WriteNull()
{
VMA_ASSERT(!m_InsideString);
BeginValue(false);
m_SB.Add("null");
}
void VmaJsonWriter::BeginValue(bool isString)
{
if (!m_Stack.empty())
{
StackItem& currItem = m_Stack.back();
if (currItem.type == COLLECTION_TYPE_OBJECT &&
currItem.valueCount % 2 == 0)
{
VMA_ASSERT(isString);
}
if (currItem.type == COLLECTION_TYPE_OBJECT &&
currItem.valueCount % 2 != 0)
{
m_SB.Add(": ");
}
else if (currItem.valueCount > 0)
{
m_SB.Add(", ");
WriteIndent();
}
else
{
WriteIndent();
}
++currItem.valueCount;
}
}
void VmaJsonWriter::WriteIndent(bool oneLess)
{
if (!m_Stack.empty() && !m_Stack.back().singleLineMode)
{
m_SB.AddNewLine();
size_t count = m_Stack.size();
if (count > 0 && oneLess)
{
--count;
}
for (size_t i = 0; i < count; ++i)
{
m_SB.Add(INDENT);
}
}
}
#endif // _VMA_JSON_WRITER_FUNCTIONS
static void VmaPrintDetailedStatistics(VmaJsonWriter& json, const VmaDetailedStatistics& stat)
{
json.BeginObject();
json.WriteString("BlockCount");
json.WriteNumber(stat.statistics.blockCount);
json.WriteString("BlockBytes");
json.WriteNumber(stat.statistics.blockBytes);
json.WriteString("AllocationCount");
json.WriteNumber(stat.statistics.allocationCount);
json.WriteString("AllocationBytes");
json.WriteNumber(stat.statistics.allocationBytes);
json.WriteString("UnusedRangeCount");
json.WriteNumber(stat.unusedRangeCount);
if (stat.statistics.allocationCount > 1)
{
json.WriteString("AllocationSizeMin");
json.WriteNumber(stat.allocationSizeMin);
json.WriteString("AllocationSizeMax");
json.WriteNumber(stat.allocationSizeMax);
}
if (stat.unusedRangeCount > 1)
{
json.WriteString("UnusedRangeSizeMin");
json.WriteNumber(stat.unusedRangeSizeMin);
json.WriteString("UnusedRangeSizeMax");
json.WriteNumber(stat.unusedRangeSizeMax);
}
json.EndObject();
}
#endif // _VMA_JSON_WRITER
#ifndef _VMA_MAPPING_HYSTERESIS
class VmaMappingHysteresis
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaMappingHysteresis)
public:
VmaMappingHysteresis() = default;
uint32_t GetExtraMapping() const { return m_ExtraMapping; }
// Call when Map was called.
// Returns true if switched to extra +1 mapping reference count.
bool PostMap()
{
#if VMA_MAPPING_HYSTERESIS_ENABLED
if(m_ExtraMapping == 0)
{
++m_MajorCounter;
if(m_MajorCounter >= COUNTER_MIN_EXTRA_MAPPING)
{
m_ExtraMapping = 1;
m_MajorCounter = 0;
m_MinorCounter = 0;
return true;
}
}
else // m_ExtraMapping == 1
PostMinorCounter();
#endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
return false;
}
// Call when Unmap was called.
void PostUnmap()
{
#if VMA_MAPPING_HYSTERESIS_ENABLED
if(m_ExtraMapping == 0)
++m_MajorCounter;
else // m_ExtraMapping == 1
PostMinorCounter();
#endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
}
// Call when allocation was made from the memory block.
void PostAlloc()
{
#if VMA_MAPPING_HYSTERESIS_ENABLED
if(m_ExtraMapping == 1)
++m_MajorCounter;
else // m_ExtraMapping == 0
PostMinorCounter();
#endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
}
// Call when allocation was freed from the memory block.
// Returns true if switched to extra -1 mapping reference count.
bool PostFree()
{
#if VMA_MAPPING_HYSTERESIS_ENABLED
if(m_ExtraMapping == 1)
{
++m_MajorCounter;
if(m_MajorCounter >= COUNTER_MIN_EXTRA_MAPPING &&
m_MajorCounter > m_MinorCounter + 1)
{
m_ExtraMapping = 0;
m_MajorCounter = 0;
m_MinorCounter = 0;
return true;
}
}
else // m_ExtraMapping == 0
PostMinorCounter();
#endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
return false;
}
private:
static const int32_t COUNTER_MIN_EXTRA_MAPPING = 7;
uint32_t m_MinorCounter = 0;
uint32_t m_MajorCounter = 0;
uint32_t m_ExtraMapping = 0; // 0 or 1.
void PostMinorCounter()
{
if(m_MinorCounter < m_MajorCounter)
{
++m_MinorCounter;
}
else if(m_MajorCounter > 0)
{
--m_MajorCounter;
--m_MinorCounter;
}
}
};
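/*
Behavior sketch, derived from the logic above: after enough Map() calls
(COUNTER_MIN_EXTRA_MAPPING = 7 on the major counter) the block acquires an
extra +1 mapping reference, so frequently mapped memory stays mapped; a later
streak of frees can release it again. In this isolated sketch:

VmaMappingHysteresis h;
bool acquiredExtra = false;
for(int i = 0; i < 7; ++i)
    acquiredExtra = h.PostMap(); // Returns true on the 7th consecutive call.
// ... later, when freeing allocations from the block:
// if(h.PostFree()) the caller should release the extra mapping reference.
*/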
#endif // _VMA_MAPPING_HYSTERESIS
#ifndef _VMA_DEVICE_MEMORY_BLOCK
/*
Represents a single block of device memory (`VkDeviceMemory`) with all the
data about its regions (aka suballocations, #VmaAllocation), assigned and free.
Thread-safety:
- Access to m_pMetadata must be externally synchronized.
- Map, Unmap, Bind* are synchronized internally.
*/
class VmaDeviceMemoryBlock
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaDeviceMemoryBlock)
public:
VmaBlockMetadata* m_pMetadata;
VmaDeviceMemoryBlock(VmaAllocator hAllocator);
~VmaDeviceMemoryBlock();
// Always call after construction.
void Init(
VmaAllocator hAllocator,
VmaPool hParentPool,
uint32_t newMemoryTypeIndex,
VkDeviceMemory newMemory,
VkDeviceSize newSize,
uint32_t id,
uint32_t algorithm,
VkDeviceSize bufferImageGranularity);
// Always call before destruction.
void Destroy(VmaAllocator allocator);
VmaPool GetParentPool() const { return m_hParentPool; }
VkDeviceMemory GetDeviceMemory() const { return m_hMemory; }
uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
uint32_t GetId() const { return m_Id; }
void* GetMappedData() const { return m_pMappedData; }
uint32_t GetMapRefCount() const { return m_MapCount; }
// Call when allocation/free was made from m_pMetadata.
// Used for m_MappingHysteresis.
void PostAlloc(VmaAllocator hAllocator);
void PostFree(VmaAllocator hAllocator);
// Validates all data structures inside this object. If not valid, returns false.
bool Validate() const;
VkResult CheckCorruption(VmaAllocator hAllocator);
// ppData can be null.
VkResult Map(VmaAllocator hAllocator, uint32_t count, void** ppData);
void Unmap(VmaAllocator hAllocator, uint32_t count);
VkResult WriteMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
VkResult ValidateMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
VkResult BindBufferMemory(
const VmaAllocator hAllocator,
const VmaAllocation hAllocation,
VkDeviceSize allocationLocalOffset,
VkBuffer hBuffer,
const void* pNext);
VkResult BindImageMemory(
const VmaAllocator hAllocator,
const VmaAllocation hAllocation,
VkDeviceSize allocationLocalOffset,
VkImage hImage,
const void* pNext);
private:
VmaPool m_hParentPool; // VK_NULL_HANDLE if it doesn't belong to a custom pool.
uint32_t m_MemoryTypeIndex;
uint32_t m_Id;
VkDeviceMemory m_hMemory;
/*
Protects access to m_hMemory so it is not used by multiple threads simultaneously, e.g. vkMapMemory, vkBindBufferMemory.
Also protects m_MapCount, m_pMappedData.
Allocations, deallocations, any change in m_pMetadata is protected by parent's VmaBlockVector::m_Mutex.
*/
VMA_MUTEX m_MapAndBindMutex;
VmaMappingHysteresis m_MappingHysteresis;
uint32_t m_MapCount;
void* m_pMappedData;
};
#endif // _VMA_DEVICE_MEMORY_BLOCK
#ifndef _VMA_ALLOCATION_T
struct VmaAllocation_T
{
friend struct VmaDedicatedAllocationListItemTraits;
enum FLAGS
{
FLAG_PERSISTENT_MAP = 0x01,
FLAG_MAPPING_ALLOWED = 0x02,
};
public:
enum ALLOCATION_TYPE
{
ALLOCATION_TYPE_NONE,
ALLOCATION_TYPE_BLOCK,
ALLOCATION_TYPE_DEDICATED,
};
// This struct is allocated using VmaPoolAllocator.
VmaAllocation_T(bool mappingAllowed);
~VmaAllocation_T();
void InitBlockAllocation(
VmaDeviceMemoryBlock* block,
VmaAllocHandle allocHandle,
VkDeviceSize alignment,
VkDeviceSize size,
uint32_t memoryTypeIndex,
VmaSuballocationType suballocationType,
bool mapped);
// pMappedData not null means allocation is created with MAPPED flag.
void InitDedicatedAllocation(
VmaPool hParentPool,
uint32_t memoryTypeIndex,
VkDeviceMemory hMemory,
VmaSuballocationType suballocationType,
void* pMappedData,
VkDeviceSize size);
ALLOCATION_TYPE GetType() const { return (ALLOCATION_TYPE)m_Type; }
VkDeviceSize GetAlignment() const { return m_Alignment; }
VkDeviceSize GetSize() const { return m_Size; }
void* GetUserData() const { return m_pUserData; }
const char* GetName() const { return m_pName; }
VmaSuballocationType GetSuballocationType() const { return (VmaSuballocationType)m_SuballocationType; }
VmaDeviceMemoryBlock* GetBlock() const { VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK); return m_BlockAllocation.m_Block; }
uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
bool IsPersistentMap() const { return (m_Flags & FLAG_PERSISTENT_MAP) != 0; }
bool IsMappingAllowed() const { return (m_Flags & FLAG_MAPPING_ALLOWED) != 0; }
void SetUserData(VmaAllocator hAllocator, void* pUserData) { m_pUserData = pUserData; }
void SetName(VmaAllocator hAllocator, const char* pName);
void FreeName(VmaAllocator hAllocator);
uint8_t SwapBlockAllocation(VmaAllocator hAllocator, VmaAllocation allocation);
VmaAllocHandle GetAllocHandle() const;
VkDeviceSize GetOffset() const;
VmaPool GetParentPool() const;
VkDeviceMemory GetMemory() const;
void* GetMappedData() const;
void BlockAllocMap();
void BlockAllocUnmap();
VkResult DedicatedAllocMap(VmaAllocator hAllocator, void** ppData);
void DedicatedAllocUnmap(VmaAllocator hAllocator);
#if VMA_STATS_STRING_ENABLED
uint32_t GetBufferImageUsage() const { return m_BufferImageUsage; }
void InitBufferImageUsage(uint32_t bufferImageUsage);
void PrintParameters(class VmaJsonWriter& json) const;
#endif
private:
// Allocation out of VmaDeviceMemoryBlock.
struct BlockAllocation
{
VmaDeviceMemoryBlock* m_Block;
VmaAllocHandle m_AllocHandle;
};
// Allocation for an object that has its own private VkDeviceMemory.
struct DedicatedAllocation
{
VmaPool m_hParentPool; // VK_NULL_HANDLE if it doesn't belong to a custom pool.
VkDeviceMemory m_hMemory;
void* m_pMappedData; // Not null means memory is mapped.
VmaAllocation_T* m_Prev;
VmaAllocation_T* m_Next;
};
union
{
// Allocation out of VmaDeviceMemoryBlock.
BlockAllocation m_BlockAllocation;
// Allocation for an object that has its own private VkDeviceMemory.
DedicatedAllocation m_DedicatedAllocation;
};
VkDeviceSize m_Alignment;
VkDeviceSize m_Size;
void* m_pUserData;
char* m_pName;
uint32_t m_MemoryTypeIndex;
uint8_t m_Type; // ALLOCATION_TYPE
uint8_t m_SuballocationType; // VmaSuballocationType
// Reference counter for vmaMapMemory()/vmaUnmapMemory().
uint8_t m_MapCount;
uint8_t m_Flags; // enum FLAGS
#if VMA_STATS_STRING_ENABLED
uint32_t m_BufferImageUsage; // 0 if unknown.
#endif
};
#endif // _VMA_ALLOCATION_T
#ifndef _VMA_DEDICATED_ALLOCATION_LIST_ITEM_TRAITS
struct VmaDedicatedAllocationListItemTraits
{
typedef VmaAllocation_T ItemType;
static ItemType* GetPrev(const ItemType* item)
{
VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
return item->m_DedicatedAllocation.m_Prev;
}
static ItemType* GetNext(const ItemType* item)
{
VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
return item->m_DedicatedAllocation.m_Next;
}
static ItemType*& AccessPrev(ItemType* item)
{
VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
return item->m_DedicatedAllocation.m_Prev;
}
static ItemType*& AccessNext(ItemType* item)
{
VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
return item->m_DedicatedAllocation.m_Next;
}
};
#endif // _VMA_DEDICATED_ALLOCATION_LIST_ITEM_TRAITS
#ifndef _VMA_DEDICATED_ALLOCATION_LIST
/*
Stores linked list of VmaAllocation_T objects.
Thread-safe, synchronized internally.
*/
class VmaDedicatedAllocationList
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaDedicatedAllocationList)
public:
VmaDedicatedAllocationList() {}
~VmaDedicatedAllocationList();
void Init(bool useMutex) { m_UseMutex = useMutex; }
bool Validate();
void AddDetailedStatistics(VmaDetailedStatistics& inoutStats);
void AddStatistics(VmaStatistics& inoutStats);
#if VMA_STATS_STRING_ENABLED
// Writes JSON array with the list of allocations.
void BuildStatsString(VmaJsonWriter& json);
#endif
bool IsEmpty();
void Register(VmaAllocation alloc);
void Unregister(VmaAllocation alloc);
private:
typedef VmaIntrusiveLinkedList<VmaDedicatedAllocationListItemTraits> DedicatedAllocationLinkedList;
bool m_UseMutex = true;
VMA_RW_MUTEX m_Mutex;
DedicatedAllocationLinkedList m_AllocationList;
};
#ifndef _VMA_DEDICATED_ALLOCATION_LIST_FUNCTIONS
VmaDedicatedAllocationList::~VmaDedicatedAllocationList()
{
VMA_HEAVY_ASSERT(Validate());
if (!m_AllocationList.IsEmpty())
{
VMA_ASSERT(false && "Unfreed dedicated allocations found!");
}
}
bool VmaDedicatedAllocationList::Validate()
{
VmaMutexLockRead lock(m_Mutex, m_UseMutex);
const size_t declaredCount = m_AllocationList.GetCount();
size_t actualCount = 0;
for (VmaAllocation alloc = m_AllocationList.Front();
alloc != VMA_NULL; alloc = m_AllocationList.GetNext(alloc))
{
++actualCount;
}
VMA_VALIDATE(actualCount == declaredCount);
return true;
}
void VmaDedicatedAllocationList::AddDetailedStatistics(VmaDetailedStatistics& inoutStats)
{
VmaMutexLockRead lock(m_Mutex, m_UseMutex);
for(auto* item = m_AllocationList.Front(); item != nullptr; item = DedicatedAllocationLinkedList::GetNext(item))
{
const VkDeviceSize size = item->GetSize();
inoutStats.statistics.blockCount++;
inoutStats.statistics.blockBytes += size;
VmaAddDetailedStatisticsAllocation(inoutStats, size);
}
}
void VmaDedicatedAllocationList::AddStatistics(VmaStatistics& inoutStats)
{
VmaMutexLockRead lock(m_Mutex, m_UseMutex);
const uint32_t allocCount = (uint32_t)m_AllocationList.GetCount();
inoutStats.blockCount += allocCount;
inoutStats.allocationCount += allocCount;
for(auto* item = m_AllocationList.Front(); item != nullptr; item = DedicatedAllocationLinkedList::GetNext(item))
{
const VkDeviceSize size = item->GetSize();
inoutStats.blockBytes += size;
inoutStats.allocationBytes += size;
}
}
#if VMA_STATS_STRING_ENABLED
void VmaDedicatedAllocationList::BuildStatsString(VmaJsonWriter& json)
{
VmaMutexLockRead lock(m_Mutex, m_UseMutex);
json.BeginArray();
for (VmaAllocation alloc = m_AllocationList.Front();
alloc != VMA_NULL; alloc = m_AllocationList.GetNext(alloc))
{
json.BeginObject(true);
alloc->PrintParameters(json);
json.EndObject();
}
json.EndArray();
}
#endif // VMA_STATS_STRING_ENABLED
bool VmaDedicatedAllocationList::IsEmpty()
{
VmaMutexLockRead lock(m_Mutex, m_UseMutex);
return m_AllocationList.IsEmpty();
}
void VmaDedicatedAllocationList::Register(VmaAllocation alloc)
{
VmaMutexLockWrite lock(m_Mutex, m_UseMutex);
m_AllocationList.PushBack(alloc);
}
void VmaDedicatedAllocationList::Unregister(VmaAllocation alloc)
{
VmaMutexLockWrite lock(m_Mutex, m_UseMutex);
m_AllocationList.Remove(alloc);
}
#endif // _VMA_DEDICATED_ALLOCATION_LIST_FUNCTIONS
#endif // _VMA_DEDICATED_ALLOCATION_LIST
#ifndef _VMA_SUBALLOCATION
/*
Represents a region of VmaDeviceMemoryBlock that is either assigned (returned
as an allocated memory block) or free.
*/
struct VmaSuballocation
{
VkDeviceSize offset;
VkDeviceSize size;
void* userData;
VmaSuballocationType type;
};
// Comparator for offsets.
struct VmaSuballocationOffsetLess
{
bool operator()(const VmaSuballocation& lhs, const VmaSuballocation& rhs) const
{
return lhs.offset < rhs.offset;
}
};
struct VmaSuballocationOffsetGreater
{
bool operator()(const VmaSuballocation& lhs, const VmaSuballocation& rhs) const
{
return lhs.offset > rhs.offset;
}
};
struct VmaSuballocationItemSizeLess
{
bool operator()(const VmaSuballocationList::iterator lhs,
const VmaSuballocationList::iterator rhs) const
{
return lhs->size < rhs->size;
}
bool operator()(const VmaSuballocationList::iterator lhs,
VkDeviceSize rhsSize) const
{
return lhs->size < rhsSize;
}
};
#endif // _VMA_SUBALLOCATION
#ifndef _VMA_ALLOCATION_REQUEST
/*
Parameters of planned allocation inside a VmaDeviceMemoryBlock.
item points to a FREE suballocation.
*/
struct VmaAllocationRequest
{
VmaAllocHandle allocHandle;
VkDeviceSize size;
VmaSuballocationList::iterator item;
void* customData;
uint64_t algorithmData;
VmaAllocationRequestType type;
};
#endif // _VMA_ALLOCATION_REQUEST
#ifndef _VMA_BLOCK_METADATA
/*
Data structure used for bookkeeping of allocations and unused ranges of memory
in a single VkDeviceMemory block.
*/
class VmaBlockMetadata
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockMetadata)
public:
// pAllocationCallbacks, if not null, must be owned externally - alive and unchanged for the whole lifetime of this object.
VmaBlockMetadata(const VkAllocationCallbacks* pAllocationCallbacks,
VkDeviceSize bufferImageGranularity, bool isVirtual);
virtual ~VmaBlockMetadata() = default;
virtual void Init(VkDeviceSize size) { m_Size = size; }
bool IsVirtual() const { return m_IsVirtual; }
VkDeviceSize GetSize() const { return m_Size; }
// Validates all data structures inside this object. If not valid, returns false.
virtual bool Validate() const = 0;
virtual size_t GetAllocationCount() const = 0;
virtual size_t GetFreeRegionsCount() const = 0;
virtual VkDeviceSize GetSumFreeSize() const = 0;
// Returns true if this block is empty - contains only a single free suballocation.
virtual bool IsEmpty() const = 0;
virtual void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) = 0;
virtual VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const = 0;
virtual void* GetAllocationUserData(VmaAllocHandle allocHandle) const = 0;
virtual VmaAllocHandle GetAllocationListBegin() const = 0;
virtual VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const = 0;
virtual VkDeviceSize GetNextFreeRegionSize(VmaAllocHandle alloc) const = 0;
// Shouldn't modify blockCount.
virtual void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const = 0;
virtual void AddStatistics(VmaStatistics& inoutStats) const = 0;
#if VMA_STATS_STRING_ENABLED
virtual void PrintDetailedMap(class VmaJsonWriter& json) const = 0;
#endif
// Tries to find a place for suballocation with given parameters inside this block.
// If succeeded, fills pAllocationRequest and returns true.
// If failed, returns false.
virtual bool CreateAllocationRequest(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
bool upperAddress,
VmaSuballocationType allocType,
// Always one of VMA_ALLOCATION_CREATE_STRATEGY_* or VMA_ALLOCATION_INTERNAL_STRATEGY_* flags.
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest) = 0;
virtual VkResult CheckCorruption(const void* pBlockData) = 0;
// Makes actual allocation based on request. Request must already be checked and valid.
virtual void Alloc(
const VmaAllocationRequest& request,
VmaSuballocationType type,
void* userData) = 0;
// Frees suballocation assigned to given memory region.
virtual void Free(VmaAllocHandle allocHandle) = 0;
// Frees all allocations.
// Careful! Don't call it if there are VmaAllocation objects owned by userData of cleared allocations!
virtual void Clear() = 0;
virtual void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) = 0;
virtual void DebugLogAllAllocations() const = 0;
protected:
const VkAllocationCallbacks* GetAllocationCallbacks() const { return m_pAllocationCallbacks; }
VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
VkDeviceSize GetDebugMargin() const { return VkDeviceSize(IsVirtual() ? 0 : VMA_DEBUG_MARGIN); }
void DebugLogAllocation(VkDeviceSize offset, VkDeviceSize size, void* userData) const;
#if VMA_STATS_STRING_ENABLED
void PrintDetailedMap_Begin(class VmaJsonWriter& json,
VkDeviceSize unusedBytes,
size_t allocationCount,
size_t unusedRangeCount) const;
void PrintDetailedMap_Allocation(class VmaJsonWriter& json,
VkDeviceSize offset, VkDeviceSize size, void* userData) const;
void PrintDetailedMap_UnusedRange(class VmaJsonWriter& json,
VkDeviceSize offset,
VkDeviceSize size) const;
void PrintDetailedMap_End(class VmaJsonWriter& json) const;
#endif
private:
VkDeviceSize m_Size;
const VkAllocationCallbacks* m_pAllocationCallbacks;
const VkDeviceSize m_BufferImageGranularity;
const bool m_IsVirtual;
};
#ifndef _VMA_BLOCK_METADATA_FUNCTIONS
VmaBlockMetadata::VmaBlockMetadata(const VkAllocationCallbacks* pAllocationCallbacks,
VkDeviceSize bufferImageGranularity, bool isVirtual)
: m_Size(0),
m_pAllocationCallbacks(pAllocationCallbacks),
m_BufferImageGranularity(bufferImageGranularity),
m_IsVirtual(isVirtual) {}
void VmaBlockMetadata::DebugLogAllocation(VkDeviceSize offset, VkDeviceSize size, void* userData) const
{
if (IsVirtual())
{
VMA_DEBUG_LOG_FORMAT("UNFREED VIRTUAL ALLOCATION; Offset: %llu; Size: %llu; UserData: %p", offset, size, userData);
}
else
{
VMA_ASSERT(userData != VMA_NULL);
VmaAllocation allocation = reinterpret_cast<VmaAllocation>(userData);
userData = allocation->GetUserData();
const char* name = allocation->GetName();
#if VMA_STATS_STRING_ENABLED
VMA_DEBUG_LOG_FORMAT("UNFREED ALLOCATION; Offset: %llu; Size: %llu; UserData: %p; Name: %s; Type: %s; Usage: %u",
offset, size, userData, name ? name : "vma_empty",
VMA_SUBALLOCATION_TYPE_NAMES[allocation->GetSuballocationType()],
allocation->GetBufferImageUsage());
#else
VMA_DEBUG_LOG_FORMAT("UNFREED ALLOCATION; Offset: %llu; Size: %llu; UserData: %p; Name: %s; Type: %u",
offset, size, userData, name ? name : "vma_empty",
(uint32_t)allocation->GetSuballocationType());
#endif // VMA_STATS_STRING_ENABLED
}
}
#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata::PrintDetailedMap_Begin(class VmaJsonWriter& json,
VkDeviceSize unusedBytes, size_t allocationCount, size_t unusedRangeCount) const
{
json.WriteString("TotalBytes");
json.WriteNumber(GetSize());
json.WriteString("UnusedBytes");
json.WriteNumber(unusedBytes);
json.WriteString("Allocations");
json.WriteNumber((uint64_t)allocationCount);
json.WriteString("UnusedRanges");
json.WriteNumber((uint64_t)unusedRangeCount);
json.WriteString("Suballocations");
json.BeginArray();
}
void VmaBlockMetadata::PrintDetailedMap_Allocation(class VmaJsonWriter& json,
VkDeviceSize offset, VkDeviceSize size, void* userData) const
{
json.BeginObject(true);
json.WriteString("Offset");
json.WriteNumber(offset);
if (IsVirtual())
{
json.WriteString("Size");
json.WriteNumber(size);
if (userData)
{
json.WriteString("CustomData");
json.BeginString();
json.ContinueString_Pointer(userData);
json.EndString();
}
}
else
{
((VmaAllocation)userData)->PrintParameters(json);
}
json.EndObject();
}
void VmaBlockMetadata::PrintDetailedMap_UnusedRange(class VmaJsonWriter& json,
VkDeviceSize offset, VkDeviceSize size) const
{
json.BeginObject(true);
json.WriteString("Offset");
json.WriteNumber(offset);
json.WriteString("Type");
json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[VMA_SUBALLOCATION_TYPE_FREE]);
json.WriteString("Size");
json.WriteNumber(size);
json.EndObject();
}
void VmaBlockMetadata::PrintDetailedMap_End(class VmaJsonWriter& json) const
{
json.EndArray();
}
#endif // VMA_STATS_STRING_ENABLED
#endif // _VMA_BLOCK_METADATA_FUNCTIONS
#endif // _VMA_BLOCK_METADATA
#ifndef _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY
// Before deleting an object of this class, remember to call Destroy().
class VmaBlockBufferImageGranularity final
{
public:
struct ValidationContext
{
const VkAllocationCallbacks* allocCallbacks;
uint16_t* pageAllocs;
};
VmaBlockBufferImageGranularity(VkDeviceSize bufferImageGranularity);
~VmaBlockBufferImageGranularity();
bool IsEnabled() const { return m_BufferImageGranularity > MAX_LOW_BUFFER_IMAGE_GRANULARITY; }
void Init(const VkAllocationCallbacks* pAllocationCallbacks, VkDeviceSize size);
// Before destroying the object, you must call this to free its memory.
void Destroy(const VkAllocationCallbacks* pAllocationCallbacks);
void RoundupAllocRequest(VmaSuballocationType allocType,
VkDeviceSize& inOutAllocSize,
VkDeviceSize& inOutAllocAlignment) const;
bool CheckConflictAndAlignUp(VkDeviceSize& inOutAllocOffset,
VkDeviceSize allocSize,
VkDeviceSize blockOffset,
VkDeviceSize blockSize,
VmaSuballocationType allocType) const;
void AllocPages(uint8_t allocType, VkDeviceSize offset, VkDeviceSize size);
void FreePages(VkDeviceSize offset, VkDeviceSize size);
void Clear();
ValidationContext StartValidation(const VkAllocationCallbacks* pAllocationCallbacks,
bool isVirtual) const;
bool Validate(ValidationContext& ctx, VkDeviceSize offset, VkDeviceSize size) const;
bool FinishValidation(ValidationContext& ctx) const;
private:
static const uint16_t MAX_LOW_BUFFER_IMAGE_GRANULARITY = 256;
struct RegionInfo
{
uint8_t allocType;
uint16_t allocCount;
};
VkDeviceSize m_BufferImageGranularity;
uint32_t m_RegionCount;
RegionInfo* m_RegionInfo;
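// Note: the page math below assumes m_BufferImageGranularity is a power of two,
// which Vulkan guarantees for VkPhysicalDeviceLimits::bufferImageGranularity.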
uint32_t GetStartPage(VkDeviceSize offset) const { return OffsetToPageIndex(offset & ~(m_BufferImageGranularity - 1)); }
uint32_t GetEndPage(VkDeviceSize offset, VkDeviceSize size) const { return OffsetToPageIndex((offset + size - 1) & ~(m_BufferImageGranularity - 1)); }
uint32_t OffsetToPageIndex(VkDeviceSize offset) const;
void AllocPage(RegionInfo& page, uint8_t allocType);
};
#ifndef _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY_FUNCTIONS
VmaBlockBufferImageGranularity::VmaBlockBufferImageGranularity(VkDeviceSize bufferImageGranularity)
: m_BufferImageGranularity(bufferImageGranularity),
m_RegionCount(0),
m_RegionInfo(VMA_NULL) {}
VmaBlockBufferImageGranularity::~VmaBlockBufferImageGranularity()
{
VMA_ASSERT(m_RegionInfo == VMA_NULL && "Free not called before destroying object!");
}
void VmaBlockBufferImageGranularity::Init(const VkAllocationCallbacks* pAllocationCallbacks, VkDeviceSize size)
{
if (IsEnabled())
{
m_RegionCount = static_cast<uint32_t>(VmaDivideRoundingUp(size, m_BufferImageGranularity));
m_RegionInfo = vma_new_array(pAllocationCallbacks, RegionInfo, m_RegionCount);
memset(m_RegionInfo, 0, m_RegionCount * sizeof(RegionInfo));
}
}
void VmaBlockBufferImageGranularity::Destroy(const VkAllocationCallbacks* pAllocationCallbacks)
{
if (m_RegionInfo)
{
vma_delete_array(pAllocationCallbacks, m_RegionInfo, m_RegionCount);
m_RegionInfo = VMA_NULL;
}
}
void VmaBlockBufferImageGranularity::RoundupAllocRequest(VmaSuballocationType allocType,
VkDeviceSize& inOutAllocSize,
VkDeviceSize& inOutAllocAlignment) const
{
if (m_BufferImageGranularity > 1 &&
m_BufferImageGranularity <= MAX_LOW_BUFFER_IMAGE_GRANULARITY)
{
if (allocType == VMA_SUBALLOCATION_TYPE_UNKNOWN ||
allocType == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
allocType == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL)
{
inOutAllocAlignment = VMA_MAX(inOutAllocAlignment, m_BufferImageGranularity);
inOutAllocSize = VmaAlignUp(inOutAllocSize, m_BufferImageGranularity);
}
}
}
bool VmaBlockBufferImageGranularity::CheckConflictAndAlignUp(VkDeviceSize& inOutAllocOffset,
VkDeviceSize allocSize,
VkDeviceSize blockOffset,
VkDeviceSize blockSize,
VmaSuballocationType allocType) const
{
if (IsEnabled())
{
uint32_t startPage = GetStartPage(inOutAllocOffset);
if (m_RegionInfo[startPage].allocCount > 0 &&
VmaIsBufferImageGranularityConflict(static_cast<VmaSuballocationType>(m_RegionInfo[startPage].allocType), allocType))
{
inOutAllocOffset = VmaAlignUp(inOutAllocOffset, m_BufferImageGranularity);
if (blockSize < allocSize + inOutAllocOffset - blockOffset)
return true;
++startPage;
}
uint32_t endPage = GetEndPage(inOutAllocOffset, allocSize);
if (endPage != startPage &&
m_RegionInfo[endPage].allocCount > 0 &&
VmaIsBufferImageGranularityConflict(static_cast<VmaSuballocationType>(m_RegionInfo[endPage].allocType), allocType))
{
return true;
}
}
return false;
}
void VmaBlockBufferImageGranularity::AllocPages(uint8_t allocType, VkDeviceSize offset, VkDeviceSize size)
{
if (IsEnabled())
{
uint32_t startPage = GetStartPage(offset);
AllocPage(m_RegionInfo[startPage], allocType);
uint32_t endPage = GetEndPage(offset, size);
if (startPage != endPage)
AllocPage(m_RegionInfo[endPage], allocType);
}
}
void VmaBlockBufferImageGranularity::FreePages(VkDeviceSize offset, VkDeviceSize size)
{
if (IsEnabled())
{
uint32_t startPage = GetStartPage(offset);
--m_RegionInfo[startPage].allocCount;
if (m_RegionInfo[startPage].allocCount == 0)
m_RegionInfo[startPage].allocType = VMA_SUBALLOCATION_TYPE_FREE;
uint32_t endPage = GetEndPage(offset, size);
if (startPage != endPage)
{
--m_RegionInfo[endPage].allocCount;
if (m_RegionInfo[endPage].allocCount == 0)
m_RegionInfo[endPage].allocType = VMA_SUBALLOCATION_TYPE_FREE;
}
}
}
void VmaBlockBufferImageGranularity::Clear()
{
if (m_RegionInfo)
memset(m_RegionInfo, 0, m_RegionCount * sizeof(RegionInfo));
}
VmaBlockBufferImageGranularity::ValidationContext VmaBlockBufferImageGranularity::StartValidation(
const VkAllocationCallbacks* pAllocationCallbacks, bool isVirtual) const
{
ValidationContext ctx{ pAllocationCallbacks, VMA_NULL };
if (!isVirtual && IsEnabled())
{
ctx.pageAllocs = vma_new_array(pAllocationCallbacks, uint16_t, m_RegionCount);
memset(ctx.pageAllocs, 0, m_RegionCount * sizeof(uint16_t));
}
return ctx;
}
bool VmaBlockBufferImageGranularity::Validate(ValidationContext& ctx,
VkDeviceSize offset, VkDeviceSize size) const
{
if (IsEnabled())
{
uint32_t start = GetStartPage(offset);
++ctx.pageAllocs[start];
VMA_VALIDATE(m_RegionInfo[start].allocCount > 0);
uint32_t end = GetEndPage(offset, size);
if (start != end)
{
++ctx.pageAllocs[end];
VMA_VALIDATE(m_RegionInfo[end].allocCount > 0);
}
}
return true;
}
bool VmaBlockBufferImageGranularity::FinishValidation(ValidationContext& ctx) const
{
// Check proper page structure
if (IsEnabled())
{
VMA_ASSERT(ctx.pageAllocs != VMA_NULL && "Validation context not initialized!");
for (uint32_t page = 0; page < m_RegionCount; ++page)
{
VMA_VALIDATE(ctx.pageAllocs[page] == m_RegionInfo[page].allocCount);
}
vma_delete_array(ctx.allocCallbacks, ctx.pageAllocs, m_RegionCount);
ctx.pageAllocs = VMA_NULL;
}
return true;
}
uint32_t VmaBlockBufferImageGranularity::OffsetToPageIndex(VkDeviceSize offset) const
{
return static_cast<uint32_t>(offset >> VMA_BITSCAN_MSB(m_BufferImageGranularity));
}
void VmaBlockBufferImageGranularity::AllocPage(RegionInfo& page, uint8_t allocType)
{
// When current alloc type is free then it can be overridden by new type
if (page.allocCount == 0 || (page.allocCount > 0 && page.allocType == VMA_SUBALLOCATION_TYPE_FREE))
page.allocType = allocType;
++page.allocCount;
}
#endif // _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY_FUNCTIONS
#endif // _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY
#if 0
#ifndef _VMA_BLOCK_METADATA_GENERIC
class VmaBlockMetadata_Generic : public VmaBlockMetadata
{
friend class VmaDefragmentationAlgorithm_Generic;
friend class VmaDefragmentationAlgorithm_Fast;
VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockMetadata_Generic)
public:
VmaBlockMetadata_Generic(const VkAllocationCallbacks* pAllocationCallbacks,
VkDeviceSize bufferImageGranularity, bool isVirtual);
virtual ~VmaBlockMetadata_Generic() = default;
size_t GetAllocationCount() const override { return m_Suballocations.size() - m_FreeCount; }
VkDeviceSize GetSumFreeSize() const override { return m_SumFreeSize; }
bool IsEmpty() const override { return (m_Suballocations.size() == 1) && (m_FreeCount == 1); }
void Free(VmaAllocHandle allocHandle) override { FreeSuballocation(FindAtOffset((VkDeviceSize)allocHandle - 1)); }
VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const override { return (VkDeviceSize)allocHandle - 1; }
void Init(VkDeviceSize size) override;
bool Validate() const override;
void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const override;
void AddStatistics(VmaStatistics& inoutStats) const override;
#if VMA_STATS_STRING_ENABLED
void PrintDetailedMap(class VmaJsonWriter& json, uint32_t mapRefCount) const override;
#endif
bool CreateAllocationRequest(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
bool upperAddress,
VmaSuballocationType allocType,
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest) override;
VkResult CheckCorruption(const void* pBlockData) override;
void Alloc(
const VmaAllocationRequest& request,
VmaSuballocationType type,
void* userData) override;
void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) override;
void* GetAllocationUserData(VmaAllocHandle allocHandle) const override;
VmaAllocHandle GetAllocationListBegin() const override;
VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const override;
void Clear() override;
void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) override;
void DebugLogAllAllocations() const override;
private:
uint32_t m_FreeCount;
VkDeviceSize m_SumFreeSize;
VmaSuballocationList m_Suballocations;
// Suballocations that are free. Sorted by size, ascending.
VmaVector<VmaSuballocationList::iterator, VmaStlAllocator<VmaSuballocationList::iterator>> m_FreeSuballocationsBySize;
VkDeviceSize AlignAllocationSize(VkDeviceSize size) const { return IsVirtual() ? size : VmaAlignUp(size, (VkDeviceSize)16); }
VmaSuballocationList::iterator FindAtOffset(VkDeviceSize offset) const;
bool ValidateFreeSuballocationList() const;
// Checks if requested suballocation with given parameters can be placed in given pFreeSuballocItem.
// If yes, fills pOffset and returns true. If no, returns false.
bool CheckAllocation(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
VmaSuballocationType allocType,
VmaSuballocationList::const_iterator suballocItem,
VmaAllocHandle* pAllocHandle) const;
// Given free suballocation, it merges it with following one, which must also be free.
void MergeFreeWithNext(VmaSuballocationList::iterator item);
// Releases given suballocation, making it free.
// Merges it with adjacent free suballocations if applicable.
// Returns iterator to new free suballocation at this place.
VmaSuballocationList::iterator FreeSuballocation(VmaSuballocationList::iterator suballocItem);
// Given free suballocation, it inserts it into sorted list of
// m_FreeSuballocationsBySize if it is suitable.
void RegisterFreeSuballocation(VmaSuballocationList::iterator item);
// Given free suballocation, it removes it from sorted list of
// m_FreeSuballocationsBySize if it is suitable.
void UnregisterFreeSuballocation(VmaSuballocationList::iterator item);
};
#ifndef _VMA_BLOCK_METADATA_GENERIC_FUNCTIONS
VmaBlockMetadata_Generic::VmaBlockMetadata_Generic(const VkAllocationCallbacks* pAllocationCallbacks,
VkDeviceSize bufferImageGranularity, bool isVirtual)
: VmaBlockMetadata(pAllocationCallbacks, bufferImageGranularity, isVirtual),
m_FreeCount(0),
m_SumFreeSize(0),
m_Suballocations(VmaStlAllocator<VmaSuballocation>(pAllocationCallbacks)),
m_FreeSuballocationsBySize(VmaStlAllocator<VmaSuballocationList::iterator>(pAllocationCallbacks)) {}
void VmaBlockMetadata_Generic::Init(VkDeviceSize size)
{
VmaBlockMetadata::Init(size);
m_FreeCount = 1;
m_SumFreeSize = size;
VmaSuballocation suballoc = {};
suballoc.offset = 0;
suballoc.size = size;
suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
m_Suballocations.push_back(suballoc);
m_FreeSuballocationsBySize.push_back(m_Suballocations.begin());
}
bool VmaBlockMetadata_Generic::Validate() const
{
VMA_VALIDATE(!m_Suballocations.empty());
// Expected offset of new suballocation as calculated from previous ones.
VkDeviceSize calculatedOffset = 0;
// Expected number of free suballocations as calculated from traversing their list.
uint32_t calculatedFreeCount = 0;
// Expected sum size of free suballocations as calculated from traversing their list.
VkDeviceSize calculatedSumFreeSize = 0;
// Expected number of free suballocations that should be registered in
// m_FreeSuballocationsBySize calculated from traversing their list.
size_t freeSuballocationsToRegister = 0;
// True if previous visited suballocation was free.
bool prevFree = false;
const VkDeviceSize debugMargin = GetDebugMargin();
for (const auto& subAlloc : m_Suballocations)
{
// Actual offset of this suballocation doesn't match expected one.
VMA_VALIDATE(subAlloc.offset == calculatedOffset);
const bool currFree = (subAlloc.type == VMA_SUBALLOCATION_TYPE_FREE);
// Two adjacent free suballocations are invalid. They should be merged.
VMA_VALIDATE(!prevFree || !currFree);
VmaAllocation alloc = (VmaAllocation)subAlloc.userData;
if (!IsVirtual())
{
VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
}
if (currFree)
{
calculatedSumFreeSize += subAlloc.size;
++calculatedFreeCount;
++freeSuballocationsToRegister;
// Margin required between allocations - every free space must be at least that large.
VMA_VALIDATE(subAlloc.size >= debugMargin);
}
else
{
if (!IsVirtual())
{
VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == subAlloc.offset + 1);
VMA_VALIDATE(alloc->GetSize() == subAlloc.size);
}
// Margin required between allocations - previous allocation must be free.
VMA_VALIDATE(debugMargin == 0 || prevFree);
}
calculatedOffset += subAlloc.size;
prevFree = currFree;
}
// Number of free suballocations registered in m_FreeSuballocationsBySize doesn't
// match expected one.
VMA_VALIDATE(m_FreeSuballocationsBySize.size() == freeSuballocationsToRegister);
VkDeviceSize lastSize = 0;
for (size_t i = 0; i < m_FreeSuballocationsBySize.size(); ++i)
{
VmaSuballocationList::iterator suballocItem = m_FreeSuballocationsBySize[i];
// Only free suballocations can be registered in m_FreeSuballocationsBySize.
VMA_VALIDATE(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE);
// They must be sorted by size ascending.
VMA_VALIDATE(suballocItem->size >= lastSize);
lastSize = suballocItem->size;
}
// Check if totals match calculated values.
VMA_VALIDATE(ValidateFreeSuballocationList());
VMA_VALIDATE(calculatedOffset == GetSize());
VMA_VALIDATE(calculatedSumFreeSize == m_SumFreeSize);
VMA_VALIDATE(calculatedFreeCount == m_FreeCount);
return true;
}
void VmaBlockMetadata_Generic::AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const
{
const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
inoutStats.statistics.blockCount++;
inoutStats.statistics.blockBytes += GetSize();
for (const auto& suballoc : m_Suballocations)
{
if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);
else
VmaAddDetailedStatisticsUnusedRange(inoutStats, suballoc.size);
}
}
void VmaBlockMetadata_Generic::AddStatistics(VmaStatistics& inoutStats) const
{
inoutStats.blockCount++;
inoutStats.allocationCount += (uint32_t)m_Suballocations.size() - m_FreeCount;
inoutStats.blockBytes += GetSize();
inoutStats.allocationBytes += GetSize() - m_SumFreeSize;
}
#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata_Generic::PrintDetailedMap(class VmaJsonWriter& json, uint32_t mapRefCount) const
{
PrintDetailedMap_Begin(json,
m_SumFreeSize, // unusedBytes
m_Suballocations.size() - (size_t)m_FreeCount, // allocationCount
m_FreeCount, // unusedRangeCount
mapRefCount);
for (const auto& suballoc : m_Suballocations)
{
if (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE)
{
PrintDetailedMap_UnusedRange(json, suballoc.offset, suballoc.size);
}
else
{
PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);
}
}
PrintDetailedMap_End(json);
}
#endif // VMA_STATS_STRING_ENABLED
bool VmaBlockMetadata_Generic::CreateAllocationRequest(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
bool upperAddress,
VmaSuballocationType allocType,
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest)
{
VMA_ASSERT(allocSize > 0);
VMA_ASSERT(!upperAddress);
VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
VMA_ASSERT(pAllocationRequest != VMA_NULL);
VMA_HEAVY_ASSERT(Validate());
allocSize = AlignAllocationSize(allocSize);
pAllocationRequest->type = VmaAllocationRequestType::Normal;
pAllocationRequest->size = allocSize;
const VkDeviceSize debugMargin = GetDebugMargin();
// There is not enough total free space in this block to fulfill the request: Early return.
if (m_SumFreeSize < allocSize + debugMargin)
{
return false;
}
// New algorithm, efficiently searching freeSuballocationsBySize.
const size_t freeSuballocCount = m_FreeSuballocationsBySize.size();
if (freeSuballocCount > 0)
{
if (strategy == 0 ||
strategy == VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT)
{
// Find first free suballocation with size not less than allocSize + debugMargin.
VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
m_FreeSuballocationsBySize.data(),
m_FreeSuballocationsBySize.data() + freeSuballocCount,
allocSize + debugMargin,
VmaSuballocationItemSizeLess());
size_t index = it - m_FreeSuballocationsBySize.data();
for (; index < freeSuballocCount; ++index)
{
if (CheckAllocation(
allocSize,
allocAlignment,
allocType,
m_FreeSuballocationsBySize[index],
&pAllocationRequest->allocHandle))
{
pAllocationRequest->item = m_FreeSuballocationsBySize[index];
return true;
}
}
}
else if (strategy == VMA_ALLOCATION_INTERNAL_STRATEGY_MIN_OFFSET)
{
for (VmaSuballocationList::iterator it = m_Suballocations.begin();
it != m_Suballocations.end();
++it)
{
if (it->type == VMA_SUBALLOCATION_TYPE_FREE && CheckAllocation(
allocSize,
allocAlignment,
allocType,
it,
&pAllocationRequest->allocHandle))
{
pAllocationRequest->item = it;
return true;
}
}
}
else
{
VMA_ASSERT(strategy & (VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT | VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT));
// Search starting from biggest suballocations.
for (size_t index = freeSuballocCount; index--; )
{
if (CheckAllocation(
allocSize,
allocAlignment,
allocType,
m_FreeSuballocationsBySize[index],
&pAllocationRequest->allocHandle))
{
pAllocationRequest->item = m_FreeSuballocationsBySize[index];
return true;
}
}
}
}
return false;
}
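// Strategy recap, summarizing the branches above: strategy 0 or
// VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT performs a best-fit search over
// the size-sorted free list; the internal MIN_OFFSET strategy walks the
// suballocation list from the lowest offset (first fit); the remaining
// strategies scan the free list from its biggest entries downward, which tends
// to succeed fastest.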
VkResult VmaBlockMetadata_Generic::CheckCorruption(const void* pBlockData)
{
for (auto& suballoc : m_Suballocations)
{
if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
{
if (!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
{
VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
return VK_ERROR_UNKNOWN_COPY;
}
}
}
return VK_SUCCESS;
}
void VmaBlockMetadata_Generic::Alloc(
const VmaAllocationRequest& request,
VmaSuballocationType type,
void* userData)
{
VMA_ASSERT(request.type == VmaAllocationRequestType::Normal);
VMA_ASSERT(request.item != m_Suballocations.end());
VmaSuballocation& suballoc = *request.item;
// Given suballocation is a free block.
VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
// Given offset is inside this suballocation.
VMA_ASSERT((VkDeviceSize)request.allocHandle - 1 >= suballoc.offset);
const VkDeviceSize paddingBegin = (VkDeviceSize)request.allocHandle - suballoc.offset - 1;
VMA_ASSERT(suballoc.size >= paddingBegin + request.size);
const VkDeviceSize paddingEnd = suballoc.size - paddingBegin - request.size;
// Unregister this free suballocation from m_FreeSuballocationsBySize and update
// it to become used.
UnregisterFreeSuballocation(request.item);
suballoc.offset = (VkDeviceSize)request.allocHandle - 1;
suballoc.size = request.size;
suballoc.type = type;
suballoc.userData = userData;
// If there are any free bytes remaining at the end, insert new free suballocation after current one.
if (paddingEnd)
{
VmaSuballocation paddingSuballoc = {};
paddingSuballoc.offset = suballoc.offset + suballoc.size;
paddingSuballoc.size = paddingEnd;
paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
VmaSuballocationList::iterator next = request.item;
++next;
const VmaSuballocationList::iterator paddingEndItem =
m_Suballocations.insert(next, paddingSuballoc);
RegisterFreeSuballocation(paddingEndItem);
}
// If there are any free bytes remaining at the beginning, insert new free suballocation before current one.
if (paddingBegin)
{
VmaSuballocation paddingSuballoc = {};
paddingSuballoc.offset = suballoc.offset - paddingBegin;
paddingSuballoc.size = paddingBegin;
paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
const VmaSuballocationList::iterator paddingBeginItem =
m_Suballocations.insert(request.item, paddingSuballoc);
RegisterFreeSuballocation(paddingBeginItem);
}
// Update totals.
m_FreeCount = m_FreeCount - 1;
if (paddingBegin > 0)
{
++m_FreeCount;
}
if (paddingEnd > 0)
{
++m_FreeCount;
}
m_SumFreeSize -= request.size;
}
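// Handle encoding note: throughout this metadata class, a VmaAllocHandle is the
// allocation's byte offset plus 1, so that the value 0 stays reserved for
// VK_NULL_HANDLE. A minimal sketch of the round-trip used above and below:
//   VmaAllocHandle h = (VmaAllocHandle)(offset + 1); // encode: offset -> handle
//   VkDeviceSize off = (VkDeviceSize)h - 1;          // decode: handle -> offset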
void VmaBlockMetadata_Generic::GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo)
{
outInfo.offset = (VkDeviceSize)allocHandle - 1;
const VmaSuballocation& suballoc = *FindAtOffset(outInfo.offset);
outInfo.size = suballoc.size;
outInfo.pUserData = suballoc.userData;
}
void* VmaBlockMetadata_Generic::GetAllocationUserData(VmaAllocHandle allocHandle) const
{
return FindAtOffset((VkDeviceSize)allocHandle - 1)->userData;
}
VmaAllocHandle VmaBlockMetadata_Generic::GetAllocationListBegin() const
{
if (IsEmpty())
return VK_NULL_HANDLE;
for (const auto& suballoc : m_Suballocations)
{
if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
return (VmaAllocHandle)(suballoc.offset + 1);
}
VMA_ASSERT(false && "Should contain at least 1 allocation!");
return VK_NULL_HANDLE;
}
VmaAllocHandle VmaBlockMetadata_Generic::GetNextAllocation(VmaAllocHandle prevAlloc) const
{
VmaSuballocationList::const_iterator prev = FindAtOffset((VkDeviceSize)prevAlloc - 1);
for (VmaSuballocationList::const_iterator it = ++prev; it != m_Suballocations.end(); ++it)
{
if (it->type != VMA_SUBALLOCATION_TYPE_FREE)
return (VmaAllocHandle)(it->offset + 1);
}
return VK_NULL_HANDLE;
}
void VmaBlockMetadata_Generic::Clear()
{
const VkDeviceSize size = GetSize();
VMA_ASSERT(IsVirtual());
m_FreeCount = 1;
m_SumFreeSize = size;
m_Suballocations.clear();
m_FreeSuballocationsBySize.clear();
VmaSuballocation suballoc = {};
suballoc.offset = 0;
suballoc.size = size;
suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
m_Suballocations.push_back(suballoc);
m_FreeSuballocationsBySize.push_back(m_Suballocations.begin());
}
void VmaBlockMetadata_Generic::SetAllocationUserData(VmaAllocHandle allocHandle, void* userData)
{
VmaSuballocation& suballoc = *FindAtOffset((VkDeviceSize)allocHandle - 1);
suballoc.userData = userData;
}
void VmaBlockMetadata_Generic::DebugLogAllAllocations() const
{
for (const auto& suballoc : m_Suballocations)
{
if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
DebugLogAllocation(suballoc.offset, suballoc.size, suballoc.userData);
}
}
VmaSuballocationList::iterator VmaBlockMetadata_Generic::FindAtOffset(VkDeviceSize offset) const
{
VMA_HEAVY_ASSERT(!m_Suballocations.empty());
const VkDeviceSize last = m_Suballocations.rbegin()->offset;
if (last == offset)
return m_Suballocations.rbegin().drop_const();
const VkDeviceSize first = m_Suballocations.begin()->offset;
if (first == offset)
return m_Suballocations.begin().drop_const();
const size_t suballocCount = m_Suballocations.size();
const VkDeviceSize step = (last - first + m_Suballocations.begin()->size) / suballocCount;
auto findSuballocation = [&](auto begin, auto end) -> VmaSuballocationList::iterator
{
for (auto suballocItem = begin;
suballocItem != end;
++suballocItem)
{
if (suballocItem->offset == offset)
return suballocItem.drop_const();
}
VMA_ASSERT(false && "Not found!");
return m_Suballocations.end().drop_const();
};
// If requested offset is closer to the end of range, search from the end
if (offset - first > suballocCount * step / 2)
{
return findSuballocation(m_Suballocations.rbegin(), m_Suballocations.rend());
}
return findSuballocation(m_Suballocations.begin(), m_Suballocations.end());
}
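// Worked example of the search heuristic above: `step` approximates the average
// suballocation stride. With first = 0, last = 900, a first-item size of 100,
// and suballocCount = 10, step = (900 - 0 + 100) / 10 = 100, so any offset
// greater than suballocCount * step / 2 = 500 counts as closer to the tail and
// the linear scan starts from rbegin() instead of begin().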
bool VmaBlockMetadata_Generic::ValidateFreeSuballocationList() const
{
VkDeviceSize lastSize = 0;
for (size_t i = 0, count = m_FreeSuballocationsBySize.size(); i < count; ++i)
{
const VmaSuballocationList::iterator it = m_FreeSuballocationsBySize[i];
VMA_VALIDATE(it->type == VMA_SUBALLOCATION_TYPE_FREE);
VMA_VALIDATE(it->size >= lastSize);
lastSize = it->size;
}
return true;
}
bool VmaBlockMetadata_Generic::CheckAllocation(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
VmaSuballocationType allocType,
VmaSuballocationList::const_iterator suballocItem,
VmaAllocHandle* pAllocHandle) const
{
VMA_ASSERT(allocSize > 0);
VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
VMA_ASSERT(suballocItem != m_Suballocations.cend());
VMA_ASSERT(pAllocHandle != VMA_NULL);
const VkDeviceSize debugMargin = GetDebugMargin();
const VkDeviceSize bufferImageGranularity = GetBufferImageGranularity();
const VmaSuballocation& suballoc = *suballocItem;
VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
// Size of this suballocation is too small for this request: Early return.
if (suballoc.size < allocSize)
{
return false;
}
// Start from offset equal to beginning of this suballocation,
// applying debugMargin from the end of the previous alloc when there is one.
VkDeviceSize offset = suballoc.offset + (suballocItem == m_Suballocations.cbegin() ? 0 : debugMargin);
// Apply alignment.
offset = VmaAlignUp(offset, allocAlignment);
// Check previous suballocations for BufferImageGranularity conflicts.
// Make bigger alignment if necessary.
if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment)
{
bool bufferImageGranularityConflict = false;
VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
while (prevSuballocItem != m_Suballocations.cbegin())
{
--prevSuballocItem;
const VmaSuballocation& prevSuballoc = *prevSuballocItem;
if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, offset, bufferImageGranularity))
{
if (VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
{
bufferImageGranularityConflict = true;
break;
}
}
else
// Already on previous page.
break;
}
if (bufferImageGranularityConflict)
{
offset = VmaAlignUp(offset, bufferImageGranularity);
}
}
// Calculate padding at the beginning based on current offset.
const VkDeviceSize paddingBegin = offset - suballoc.offset;
// Fail if requested size plus margin after is bigger than size of this suballocation.
if (paddingBegin + allocSize + debugMargin > suballoc.size)
{
return false;
}
// Check next suballocations for BufferImageGranularity conflicts.
// If conflict exists, allocation cannot be made here.
if (allocSize % bufferImageGranularity || offset % bufferImageGranularity)
{
VmaSuballocationList::const_iterator nextSuballocItem = suballocItem;
++nextSuballocItem;
while (nextSuballocItem != m_Suballocations.cend())
{
const VmaSuballocation& nextSuballoc = *nextSuballocItem;
if (VmaBlocksOnSamePage(offset, allocSize, nextSuballoc.offset, bufferImageGranularity))
{
if (VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
{
return false;
}
}
else
{
// Already on next page.
break;
}
++nextSuballocItem;
}
}
*pAllocHandle = (VmaAllocHandle)(offset + 1);
// All tests passed: Success. pAllocHandle is already filled.
return true;
}
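// Granularity example (illustrative numbers): with bufferImageGranularity =
// 4096, a linear buffer occupying [0, 4000) and an OPTIMAL-tiling image
// requested at offset 4032 fall on the same 4096-byte page, so
// VmaBlocksOnSamePage() reports a potential conflict and CheckAllocation()
// aligns the image up to 4096 - or fails if the enlarged paddingBegin no
// longer fits in the free suballocation.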
void VmaBlockMetadata_Generic::MergeFreeWithNext(VmaSuballocationList::iterator item)
{
VMA_ASSERT(item != m_Suballocations.end());
VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
VmaSuballocationList::iterator nextItem = item;
++nextItem;
VMA_ASSERT(nextItem != m_Suballocations.end());
VMA_ASSERT(nextItem->type == VMA_SUBALLOCATION_TYPE_FREE);
item->size += nextItem->size;
--m_FreeCount;
m_Suballocations.erase(nextItem);
}
VmaSuballocationList::iterator VmaBlockMetadata_Generic::FreeSuballocation(VmaSuballocationList::iterator suballocItem)
{
// Change this suballocation to be marked as free.
VmaSuballocation& suballoc = *suballocItem;
suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
suballoc.userData = VMA_NULL;
// Update totals.
++m_FreeCount;
m_SumFreeSize += suballoc.size;
// Merge with previous and/or next suballocation if it's also free.
bool mergeWithNext = false;
bool mergeWithPrev = false;
VmaSuballocationList::iterator nextItem = suballocItem;
++nextItem;
if ((nextItem != m_Suballocations.end()) && (nextItem->type == VMA_SUBALLOCATION_TYPE_FREE))
{
mergeWithNext = true;
}
VmaSuballocationList::iterator prevItem = suballocItem;
if (suballocItem != m_Suballocations.begin())
{
--prevItem;
if (prevItem->type == VMA_SUBALLOCATION_TYPE_FREE)
{
mergeWithPrev = true;
}
}
if (mergeWithNext)
{
UnregisterFreeSuballocation(nextItem);
MergeFreeWithNext(suballocItem);
}
if (mergeWithPrev)
{
UnregisterFreeSuballocation(prevItem);
MergeFreeWithNext(prevItem);
RegisterFreeSuballocation(prevItem);
return prevItem;
}
else
{
RegisterFreeSuballocation(suballocItem);
return suballocItem;
}
}
void VmaBlockMetadata_Generic::RegisterFreeSuballocation(VmaSuballocationList::iterator item)
{
VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
VMA_ASSERT(item->size > 0);
// You may want to enable this validation at the beginning or at the end of
// this function, depending on what you want to check.
VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
if (m_FreeSuballocationsBySize.empty())
{
m_FreeSuballocationsBySize.push_back(item);
}
else
{
VmaVectorInsertSorted<VmaSuballocationItemSizeLess>(m_FreeSuballocationsBySize, item);
}
//VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
}
void VmaBlockMetadata_Generic::UnregisterFreeSuballocation(VmaSuballocationList::iterator item)
{
VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
VMA_ASSERT(item->size > 0);
// You may want to enable this validation at the beginning or at the end of
// this function, depending on what you want to check.
VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
m_FreeSuballocationsBySize.data(),
m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
item,
VmaSuballocationItemSizeLess());
for (size_t index = it - m_FreeSuballocationsBySize.data();
index < m_FreeSuballocationsBySize.size();
++index)
{
if (m_FreeSuballocationsBySize[index] == item)
{
VmaVectorRemove(m_FreeSuballocationsBySize, index);
return;
}
VMA_ASSERT((m_FreeSuballocationsBySize[index]->size == item->size) && "Not found.");
}
VMA_ASSERT(0 && "Not found.");
//VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
}
#endif // _VMA_BLOCK_METADATA_GENERIC_FUNCTIONS
#endif // _VMA_BLOCK_METADATA_GENERIC
#endif // #if 0
#ifndef _VMA_BLOCK_METADATA_LINEAR
/*
Allocations and their references in internal data structure look like this:
if(m_2ndVectorMode == SECOND_VECTOR_EMPTY):
0 +-------+
| |
| |
| |
+-------+
| Alloc | 1st[m_1stNullItemsBeginCount]
+-------+
| Alloc | 1st[m_1stNullItemsBeginCount + 1]
+-------+
| ... |
+-------+
| Alloc | 1st[1st.size() - 1]
+-------+
| |
| |
| |
GetSize() +-------+
if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER):
0 +-------+
| Alloc | 2nd[0]
+-------+
| Alloc | 2nd[1]
+-------+
| ... |
+-------+
| Alloc | 2nd[2nd.size() - 1]
+-------+
| |
| |
| |
+-------+
| Alloc | 1st[m_1stNullItemsBeginCount]
+-------+
| Alloc | 1st[m_1stNullItemsBeginCount + 1]
+-------+
| ... |
+-------+
| Alloc | 1st[1st.size() - 1]
+-------+
| |
GetSize() +-------+
if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK):
0 +-------+
| |
| |
| |
+-------+
| Alloc | 1st[m_1stNullItemsBeginCount]
+-------+
| Alloc | 1st[m_1stNullItemsBeginCount + 1]
+-------+
| ... |
+-------+
| Alloc | 1st[1st.size() - 1]
+-------+
| |
| |
| |
+-------+
| Alloc | 2nd[2nd.size() - 1]
+-------+
| ... |
+-------+
| Alloc | 2nd[1]
+-------+
| Alloc | 2nd[0]
GetSize() +-------+
*/
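/*
Illustrative usage sketch (not part of the implementation; assumes a valid
VmaAllocator `allocator` and a suitable `memTypeIndex`): this metadata class is
selected through the public linear-algorithm flags, e.g. for a custom pool:

VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
poolCreateInfo.flags = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT;
VmaPool pool = VK_NULL_HANDLE;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);

The same algorithm backs virtual blocks created with
VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT.
*/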
class VmaBlockMetadata_Linear : public VmaBlockMetadata
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockMetadata_Linear)
public:
VmaBlockMetadata_Linear(const VkAllocationCallbacks* pAllocationCallbacks,
VkDeviceSize bufferImageGranularity, bool isVirtual);
virtual ~VmaBlockMetadata_Linear() = default;
VkDeviceSize GetSumFreeSize() const override { return m_SumFreeSize; }
bool IsEmpty() const override { return GetAllocationCount() == 0; }
VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const override { return (VkDeviceSize)allocHandle - 1; }
void Init(VkDeviceSize size) override;
bool Validate() const override;
size_t GetAllocationCount() const override;
size_t GetFreeRegionsCount() const override;
void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const override;
void AddStatistics(VmaStatistics& inoutStats) const override;
#if VMA_STATS_STRING_ENABLED
void PrintDetailedMap(class VmaJsonWriter& json) const override;
#endif
bool CreateAllocationRequest(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
bool upperAddress,
VmaSuballocationType allocType,
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest) override;
VkResult CheckCorruption(const void* pBlockData) override;
void Alloc(
const VmaAllocationRequest& request,
VmaSuballocationType type,
void* userData) override;
void Free(VmaAllocHandle allocHandle) override;
void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) override;
void* GetAllocationUserData(VmaAllocHandle allocHandle) const override;
VmaAllocHandle GetAllocationListBegin() const override;
VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const override;
VkDeviceSize GetNextFreeRegionSize(VmaAllocHandle alloc) const override;
void Clear() override;
void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) override;
void DebugLogAllAllocations() const override;
private:
/*
There are two suballocation vectors, used in ping-pong way.
The one with index m_1stVectorIndex is called 1st.
The one with index (m_1stVectorIndex ^ 1) is called 2nd.
2nd can be non-empty only when 1st is not empty.
When 2nd is not empty, m_2ndVectorMode indicates its mode of operation.
*/
typedef VmaVector<VmaSuballocation, VmaStlAllocator<VmaSuballocation>> SuballocationVectorType;
enum SECOND_VECTOR_MODE
{
SECOND_VECTOR_EMPTY,
/*
Suballocations in 2nd vector are created later than the ones in 1st, but they
all have smaller offsets.
*/
SECOND_VECTOR_RING_BUFFER,
/*
Suballocations in 2nd vector are upper side of double stack.
They all have offsets higher than those in 1st vector.
Top of this stack means smaller offsets, but higher indices in this vector.
*/
SECOND_VECTOR_DOUBLE_STACK,
};
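// Mode transitions, for reference (see Alloc()): EMPTY becomes RING_BUFFER on
// the first EndOf2nd (wrap-around) allocation and DOUBLE_STACK on the first
// UpperAddress allocation; the two uses are mutually exclusive while 2nd is
// non-empty, and CleanupAfterFree() resets the mode to EMPTY once 2nd drains.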
VkDeviceSize m_SumFreeSize;
SuballocationVectorType m_Suballocations0, m_Suballocations1;
uint32_t m_1stVectorIndex;
SECOND_VECTOR_MODE m_2ndVectorMode;
// Number of items in 1st vector with userData = null at the beginning.
size_t m_1stNullItemsBeginCount;
// Number of other items in 1st vector with userData = null somewhere in the middle.
size_t m_1stNullItemsMiddleCount;
// Number of items in 2nd vector with userData = null.
size_t m_2ndNullItemsCount;
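// Bookkeeping note: freeing from the middle of a vector only marks the item
// free and bumps one of the counters above (lazy deletion); the actual erase
// and compaction are deferred to CleanupAfterFree(), keeping Free() to a
// binary search plus amortized cleanup.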
SuballocationVectorType& AccessSuballocations1st() { return m_1stVectorIndex ? m_Suballocations1 : m_Suballocations0; }
SuballocationVectorType& AccessSuballocations2nd() { return m_1stVectorIndex ? m_Suballocations0 : m_Suballocations1; }
const SuballocationVectorType& AccessSuballocations1st() const { return m_1stVectorIndex ? m_Suballocations1 : m_Suballocations0; }
const SuballocationVectorType& AccessSuballocations2nd() const { return m_1stVectorIndex ? m_Suballocations0 : m_Suballocations1; }
VmaSuballocation& FindSuballocation(VkDeviceSize offset) const;
bool ShouldCompact1st() const;
void CleanupAfterFree();
bool CreateAllocationRequest_LowerAddress(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
VmaSuballocationType allocType,
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest);
bool CreateAllocationRequest_UpperAddress(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
VmaSuballocationType allocType,
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest);
};
#ifndef _VMA_BLOCK_METADATA_LINEAR_FUNCTIONS
VmaBlockMetadata_Linear::VmaBlockMetadata_Linear(const VkAllocationCallbacks* pAllocationCallbacks,
VkDeviceSize bufferImageGranularity, bool isVirtual)
: VmaBlockMetadata(pAllocationCallbacks, bufferImageGranularity, isVirtual),
m_SumFreeSize(0),
m_Suballocations0(VmaStlAllocator<VmaSuballocation>(pAllocationCallbacks)),
m_Suballocations1(VmaStlAllocator<VmaSuballocation>(pAllocationCallbacks)),
m_1stVectorIndex(0),
m_2ndVectorMode(SECOND_VECTOR_EMPTY),
m_1stNullItemsBeginCount(0),
m_1stNullItemsMiddleCount(0),
m_2ndNullItemsCount(0) {}
void VmaBlockMetadata_Linear::Init(VkDeviceSize size)
{
VmaBlockMetadata::Init(size);
m_SumFreeSize = size;
}
bool VmaBlockMetadata_Linear::Validate() const
{
const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
VMA_VALIDATE(suballocations2nd.empty() == (m_2ndVectorMode == SECOND_VECTOR_EMPTY));
VMA_VALIDATE(!suballocations1st.empty() ||
suballocations2nd.empty() ||
m_2ndVectorMode != SECOND_VECTOR_RING_BUFFER);
if (!suballocations1st.empty())
{
// Null item at the beginning should be accounted into m_1stNullItemsBeginCount.
VMA_VALIDATE(suballocations1st[m_1stNullItemsBeginCount].type != VMA_SUBALLOCATION_TYPE_FREE);
// Null item at the end should be just pop_back().
VMA_VALIDATE(suballocations1st.back().type != VMA_SUBALLOCATION_TYPE_FREE);
}
if (!suballocations2nd.empty())
{
// Null item at the end should be just pop_back().
VMA_VALIDATE(suballocations2nd.back().type != VMA_SUBALLOCATION_TYPE_FREE);
}
VMA_VALIDATE(m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount <= suballocations1st.size());
VMA_VALIDATE(m_2ndNullItemsCount <= suballocations2nd.size());
VkDeviceSize sumUsedSize = 0;
const size_t suballoc1stCount = suballocations1st.size();
const VkDeviceSize debugMargin = GetDebugMargin();
VkDeviceSize offset = 0;
if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
{
const size_t suballoc2ndCount = suballocations2nd.size();
size_t nullItem2ndCount = 0;
for (size_t i = 0; i < suballoc2ndCount; ++i)
{
const VmaSuballocation& suballoc = suballocations2nd[i];
const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
VmaAllocation const alloc = (VmaAllocation)suballoc.userData;
if (!IsVirtual())
{
VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
}
VMA_VALIDATE(suballoc.offset >= offset);
if (!currFree)
{
if (!IsVirtual())
{
VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == suballoc.offset + 1);
VMA_VALIDATE(alloc->GetSize() == suballoc.size);
}
sumUsedSize += suballoc.size;
}
else
{
++nullItem2ndCount;
}
offset = suballoc.offset + suballoc.size + debugMargin;
}
VMA_VALIDATE(nullItem2ndCount == m_2ndNullItemsCount);
}
for (size_t i = 0; i < m_1stNullItemsBeginCount; ++i)
{
const VmaSuballocation& suballoc = suballocations1st[i];
VMA_VALIDATE(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE &&
suballoc.userData == VMA_NULL);
}
size_t nullItem1stCount = m_1stNullItemsBeginCount;
for (size_t i = m_1stNullItemsBeginCount; i < suballoc1stCount; ++i)
{
const VmaSuballocation& suballoc = suballocations1st[i];
const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
VmaAllocation const alloc = (VmaAllocation)suballoc.userData;
if (!IsVirtual())
{
VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
}
VMA_VALIDATE(suballoc.offset >= offset);
VMA_VALIDATE(i >= m_1stNullItemsBeginCount || currFree);
if (!currFree)
{
if (!IsVirtual())
{
VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == suballoc.offset + 1);
VMA_VALIDATE(alloc->GetSize() == suballoc.size);
}
sumUsedSize += suballoc.size;
}
else
{
++nullItem1stCount;
}
offset = suballoc.offset + suballoc.size + debugMargin;
}
VMA_VALIDATE(nullItem1stCount == m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount);
if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
{
const size_t suballoc2ndCount = suballocations2nd.size();
size_t nullItem2ndCount = 0;
for (size_t i = suballoc2ndCount; i--; )
{
const VmaSuballocation& suballoc = suballocations2nd[i];
const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
VmaAllocation const alloc = (VmaAllocation)suballoc.userData;
if (!IsVirtual())
{
VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
}
VMA_VALIDATE(suballoc.offset >= offset);
if (!currFree)
{
if (!IsVirtual())
{
VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == suballoc.offset + 1);
VMA_VALIDATE(alloc->GetSize() == suballoc.size);
}
sumUsedSize += suballoc.size;
}
else
{
++nullItem2ndCount;
}
offset = suballoc.offset + suballoc.size + debugMargin;
}
VMA_VALIDATE(nullItem2ndCount == m_2ndNullItemsCount);
}
VMA_VALIDATE(offset <= GetSize());
VMA_VALIDATE(m_SumFreeSize == GetSize() - sumUsedSize);
return true;
}
size_t VmaBlockMetadata_Linear::GetAllocationCount() const
{
return AccessSuballocations1st().size() - m_1stNullItemsBeginCount - m_1stNullItemsMiddleCount +
AccessSuballocations2nd().size() - m_2ndNullItemsCount;
}
size_t VmaBlockMetadata_Linear::GetFreeRegionsCount() const
{
// Function only used for defragmentation, which is disabled for this algorithm
VMA_ASSERT(0);
return SIZE_MAX;
}
void VmaBlockMetadata_Linear::AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const
{
const VkDeviceSize size = GetSize();
const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
const size_t suballoc1stCount = suballocations1st.size();
const size_t suballoc2ndCount = suballocations2nd.size();
inoutStats.statistics.blockCount++;
inoutStats.statistics.blockBytes += size;
VkDeviceSize lastOffset = 0;
if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
{
const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
size_t nextAlloc2ndIndex = 0;
while (lastOffset < freeSpace2ndTo1stEnd)
{
// Find next non-null allocation or move nextAllocIndex to the end.
while (nextAlloc2ndIndex < suballoc2ndCount &&
suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
{
++nextAlloc2ndIndex;
}
// Found non-null allocation.
if (nextAlloc2ndIndex < suballoc2ndCount)
{
const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
// 1. Process free space before this allocation.
if (lastOffset < suballoc.offset)
{
// There is free space from lastOffset to suballoc.offset.
const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
}
// 2. Process this allocation.
// There is allocation with suballoc.offset, suballoc.size.
VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);
// 3. Prepare for next iteration.
lastOffset = suballoc.offset + suballoc.size;
++nextAlloc2ndIndex;
}
// We are at the end.
else
{
// There is free space from lastOffset to freeSpace2ndTo1stEnd.
if (lastOffset < freeSpace2ndTo1stEnd)
{
const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
}
// End of loop.
lastOffset = freeSpace2ndTo1stEnd;
}
}
}
size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
const VkDeviceSize freeSpace1stTo2ndEnd =
m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
while (lastOffset < freeSpace1stTo2ndEnd)
{
// Find next non-null allocation or move nextAllocIndex to the end.
while (nextAlloc1stIndex < suballoc1stCount &&
suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
{
++nextAlloc1stIndex;
}
// Found non-null allocation.
if (nextAlloc1stIndex < suballoc1stCount)
{
const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];
// 1. Process free space before this allocation.
if (lastOffset < suballoc.offset)
{
// There is free space from lastOffset to suballoc.offset.
const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
}
// 2. Process this allocation.
// There is allocation with suballoc.offset, suballoc.size.
VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);
// 3. Prepare for next iteration.
lastOffset = suballoc.offset + suballoc.size;
++nextAlloc1stIndex;
}
// We are at the end.
else
{
// There is free space from lastOffset to freeSpace1stTo2ndEnd.
if (lastOffset < freeSpace1stTo2ndEnd)
{
const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
}
// End of loop.
lastOffset = freeSpace1stTo2ndEnd;
}
}
if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
{
size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
while (lastOffset < size)
{
// Find next non-null allocation or move nextAllocIndex to the end.
while (nextAlloc2ndIndex != SIZE_MAX &&
suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
{
--nextAlloc2ndIndex;
}
// Found non-null allocation.
if (nextAlloc2ndIndex != SIZE_MAX)
{
const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
// 1. Process free space before this allocation.
if (lastOffset < suballoc.offset)
{
// There is free space from lastOffset to suballoc.offset.
const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
}
// 2. Process this allocation.
// There is allocation with suballoc.offset, suballoc.size.
VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);
// 3. Prepare for next iteration.
lastOffset = suballoc.offset + suballoc.size;
--nextAlloc2ndIndex;
}
// We are at the end.
else
{
// There is free space from lastOffset to size.
if (lastOffset < size)
{
const VkDeviceSize unusedRangeSize = size - lastOffset;
VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
}
// End of loop.
lastOffset = size;
}
}
}
}
void VmaBlockMetadata_Linear::AddStatistics(VmaStatistics& inoutStats) const
{
const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
const VkDeviceSize size = GetSize();
const size_t suballoc1stCount = suballocations1st.size();
const size_t suballoc2ndCount = suballocations2nd.size();
inoutStats.blockCount++;
inoutStats.blockBytes += size;
inoutStats.allocationBytes += size - m_SumFreeSize;
VkDeviceSize lastOffset = 0;
if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
{
const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
size_t nextAlloc2ndIndex = 0;
while (lastOffset < freeSpace2ndTo1stEnd)
{
// Find next non-null allocation or move nextAlloc2ndIndex to the end.
while (nextAlloc2ndIndex < suballoc2ndCount &&
suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
{
++nextAlloc2ndIndex;
}
// Found non-null allocation.
if (nextAlloc2ndIndex < suballoc2ndCount)
{
const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
// Process this allocation.
// There is allocation with suballoc.offset, suballoc.size.
++inoutStats.allocationCount;
// Prepare for next iteration.
lastOffset = suballoc.offset + suballoc.size;
++nextAlloc2ndIndex;
}
// We are at the end.
else
{
// End of loop.
lastOffset = freeSpace2ndTo1stEnd;
}
}
}
size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
const VkDeviceSize freeSpace1stTo2ndEnd =
m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
while (lastOffset < freeSpace1stTo2ndEnd)
{
// Find next non-null allocation or move nextAllocIndex to the end.
while (nextAlloc1stIndex < suballoc1stCount &&
suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
{
++nextAlloc1stIndex;
}
// Found non-null allocation.
if (nextAlloc1stIndex < suballoc1stCount)
{
const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];
// Process this allocation.
// There is allocation with suballoc.offset, suballoc.size.
++inoutStats.allocationCount;
// Prepare for next iteration.
lastOffset = suballoc.offset + suballoc.size;
++nextAlloc1stIndex;
}
// We are at the end.
else
{
// End of loop.
lastOffset = freeSpace1stTo2ndEnd;
}
}
if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
{
size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
while (lastOffset < size)
{
// Find next non-null allocation or move nextAlloc2ndIndex to the end.
while (nextAlloc2ndIndex != SIZE_MAX &&
suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
{
--nextAlloc2ndIndex;
}
// Found non-null allocation.
if (nextAlloc2ndIndex != SIZE_MAX)
{
const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
// Process this allocation.
// There is allocation with suballoc.offset, suballoc.size.
++inoutStats.allocationCount;
// Prepare for next iteration.
lastOffset = suballoc.offset + suballoc.size;
--nextAlloc2ndIndex;
}
// We are at the end.
else
{
// End of loop.
lastOffset = size;
}
}
}
}
#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata_Linear::PrintDetailedMap(class VmaJsonWriter& json) const
{
const VkDeviceSize size = GetSize();
const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
const size_t suballoc1stCount = suballocations1st.size();
const size_t suballoc2ndCount = suballocations2nd.size();
// FIRST PASS
size_t unusedRangeCount = 0;
VkDeviceSize usedBytes = 0;
VkDeviceSize lastOffset = 0;
size_t alloc2ndCount = 0;
if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
{
const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
size_t nextAlloc2ndIndex = 0;
while (lastOffset < freeSpace2ndTo1stEnd)
{
// Find next non-null allocation or move nextAlloc2ndIndex to the end.
while (nextAlloc2ndIndex < suballoc2ndCount &&
suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
{
++nextAlloc2ndIndex;
}
// Found non-null allocation.
if (nextAlloc2ndIndex < suballoc2ndCount)
{
const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
// 1. Process free space before this allocation.
if (lastOffset < suballoc.offset)
{
// There is free space from lastOffset to suballoc.offset.
++unusedRangeCount;
}
// 2. Process this allocation.
// There is allocation with suballoc.offset, suballoc.size.
++alloc2ndCount;
usedBytes += suballoc.size;
// 3. Prepare for next iteration.
lastOffset = suballoc.offset + suballoc.size;
++nextAlloc2ndIndex;
}
// We are at the end.
else
{
if (lastOffset < freeSpace2ndTo1stEnd)
{
// There is free space from lastOffset to freeSpace2ndTo1stEnd.
++unusedRangeCount;
}
// End of loop.
lastOffset = freeSpace2ndTo1stEnd;
}
}
}
size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
size_t alloc1stCount = 0;
const VkDeviceSize freeSpace1stTo2ndEnd =
m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
while (lastOffset < freeSpace1stTo2ndEnd)
{
// Find next non-null allocation or move nextAllocIndex to the end.
while (nextAlloc1stIndex < suballoc1stCount &&
suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
{
++nextAlloc1stIndex;
}
// Found non-null allocation.
if (nextAlloc1stIndex < suballoc1stCount)
{
const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];
// 1. Process free space before this allocation.
if (lastOffset < suballoc.offset)
{
// There is free space from lastOffset to suballoc.offset.
++unusedRangeCount;
}
// 2. Process this allocation.
// There is allocation with suballoc.offset, suballoc.size.
++alloc1stCount;
usedBytes += suballoc.size;
// 3. Prepare for next iteration.
lastOffset = suballoc.offset + suballoc.size;
++nextAlloc1stIndex;
}
// We are at the end.
else
{
if (lastOffset < freeSpace1stTo2ndEnd)
{
// There is free space from lastOffset to freeSpace1stTo2ndEnd.
++unusedRangeCount;
}
// End of loop.
lastOffset = freeSpace1stTo2ndEnd;
}
}
if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
{
size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
while (lastOffset < size)
{
// Find next non-null allocation or move nextAlloc2ndIndex to the end.
while (nextAlloc2ndIndex != SIZE_MAX &&
suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
{
--nextAlloc2ndIndex;
}
// Found non-null allocation.
if (nextAlloc2ndIndex != SIZE_MAX)
{
const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
// 1. Process free space before this allocation.
if (lastOffset < suballoc.offset)
{
// There is free space from lastOffset to suballoc.offset.
++unusedRangeCount;
}
// 2. Process this allocation.
// There is allocation with suballoc.offset, suballoc.size.
++alloc2ndCount;
usedBytes += suballoc.size;
// 3. Prepare for next iteration.
lastOffset = suballoc.offset + suballoc.size;
--nextAlloc2ndIndex;
}
// We are at the end.
else
{
if (lastOffset < size)
{
// There is free space from lastOffset to size.
++unusedRangeCount;
}
// End of loop.
lastOffset = size;
}
}
}
const VkDeviceSize unusedBytes = size - usedBytes;
PrintDetailedMap_Begin(json, unusedBytes, alloc1stCount + alloc2ndCount, unusedRangeCount);
// SECOND PASS
lastOffset = 0;
if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
{
const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
size_t nextAlloc2ndIndex = 0;
while (lastOffset < freeSpace2ndTo1stEnd)
{
// Find next non-null allocation or move nextAlloc2ndIndex to the end.
while (nextAlloc2ndIndex < suballoc2ndCount &&
suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
{
++nextAlloc2ndIndex;
}
// Found non-null allocation.
if (nextAlloc2ndIndex < suballoc2ndCount)
{
const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
// 1. Process free space before this allocation.
if (lastOffset < suballoc.offset)
{
// There is free space from lastOffset to suballoc.offset.
const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
}
// 2. Process this allocation.
// There is allocation with suballoc.offset, suballoc.size.
PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);
// 3. Prepare for next iteration.
lastOffset = suballoc.offset + suballoc.size;
++nextAlloc2ndIndex;
}
// We are at the end.
else
{
if (lastOffset < freeSpace2ndTo1stEnd)
{
// There is free space from lastOffset to freeSpace2ndTo1stEnd.
const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
}
// End of loop.
lastOffset = freeSpace2ndTo1stEnd;
}
}
}
nextAlloc1stIndex = m_1stNullItemsBeginCount;
while (lastOffset < freeSpace1stTo2ndEnd)
{
// Find next non-null allocation or move nextAllocIndex to the end.
while (nextAlloc1stIndex < suballoc1stCount &&
suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
{
++nextAlloc1stIndex;
}
// Found non-null allocation.
if (nextAlloc1stIndex < suballoc1stCount)
{
const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];
// 1. Process free space before this allocation.
if (lastOffset < suballoc.offset)
{
// There is free space from lastOffset to suballoc.offset.
const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
}
// 2. Process this allocation.
// There is allocation with suballoc.offset, suballoc.size.
PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);
// 3. Prepare for next iteration.
lastOffset = suballoc.offset + suballoc.size;
++nextAlloc1stIndex;
}
// We are at the end.
else
{
if (lastOffset < freeSpace1stTo2ndEnd)
{
// There is free space from lastOffset to freeSpace1stTo2ndEnd.
const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
}
// End of loop.
lastOffset = freeSpace1stTo2ndEnd;
}
}
if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
{
size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
while (lastOffset < size)
{
// Find next non-null allocation or move nextAlloc2ndIndex to the end.
while (nextAlloc2ndIndex != SIZE_MAX &&
suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
{
--nextAlloc2ndIndex;
}
// Found non-null allocation.
if (nextAlloc2ndIndex != SIZE_MAX)
{
const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
// 1. Process free space before this allocation.
if (lastOffset < suballoc.offset)
{
// There is free space from lastOffset to suballoc.offset.
const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
}
// 2. Process this allocation.
// There is allocation with suballoc.offset, suballoc.size.
PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);
// 3. Prepare for next iteration.
lastOffset = suballoc.offset + suballoc.size;
--nextAlloc2ndIndex;
}
// We are at the end.
else
{
if (lastOffset < size)
{
// There is free space from lastOffset to size.
const VkDeviceSize unusedRangeSize = size - lastOffset;
PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
}
// End of loop.
lastOffset = size;
}
}
}
PrintDetailedMap_End(json);
}
#endif // VMA_STATS_STRING_ENABLED
bool VmaBlockMetadata_Linear::CreateAllocationRequest(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
bool upperAddress,
VmaSuballocationType allocType,
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest)
{
VMA_ASSERT(allocSize > 0);
VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
VMA_ASSERT(pAllocationRequest != VMA_NULL);
VMA_HEAVY_ASSERT(Validate());
pAllocationRequest->size = allocSize;
return upperAddress ?
CreateAllocationRequest_UpperAddress(
allocSize, allocAlignment, allocType, strategy, pAllocationRequest) :
CreateAllocationRequest_LowerAddress(
allocSize, allocAlignment, allocType, strategy, pAllocationRequest);
}
VkResult VmaBlockMetadata_Linear::CheckCorruption(const void* pBlockData)
{
VMA_ASSERT(!IsVirtual());
SuballocationVectorType& suballocations1st = AccessSuballocations1st();
for (size_t i = m_1stNullItemsBeginCount, count = suballocations1st.size(); i < count; ++i)
{
const VmaSuballocation& suballoc = suballocations1st[i];
if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
{
if (!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
{
VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
return VK_ERROR_UNKNOWN_COPY;
}
}
}
SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
for (size_t i = 0, count = suballocations2nd.size(); i < count; ++i)
{
const VmaSuballocation& suballoc = suballocations2nd[i];
if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
{
if (!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
{
VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
return VK_ERROR_UNKNOWN_COPY;
}
}
}
return VK_SUCCESS;
}
void VmaBlockMetadata_Linear::Alloc(
const VmaAllocationRequest& request,
VmaSuballocationType type,
void* userData)
{
const VkDeviceSize offset = (VkDeviceSize)request.allocHandle - 1;
const VmaSuballocation newSuballoc = { offset, request.size, userData, type };
switch (request.type)
{
case VmaAllocationRequestType::UpperAddress:
{
VMA_ASSERT(m_2ndVectorMode != SECOND_VECTOR_RING_BUFFER &&
"CRITICAL ERROR: Trying to use linear allocator as double stack while it was already used as ring buffer.");
SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
suballocations2nd.push_back(newSuballoc);
m_2ndVectorMode = SECOND_VECTOR_DOUBLE_STACK;
}
break;
case VmaAllocationRequestType::EndOf1st:
{
SuballocationVectorType& suballocations1st = AccessSuballocations1st();
VMA_ASSERT(suballocations1st.empty() ||
offset >= suballocations1st.back().offset + suballocations1st.back().size);
// Check if it fits before the end of the block.
VMA_ASSERT(offset + request.size <= GetSize());
suballocations1st.push_back(newSuballoc);
}
break;
case VmaAllocationRequestType::EndOf2nd:
{
SuballocationVectorType& suballocations1st = AccessSuballocations1st();
// New allocation at the end of 2-part ring buffer, so before first allocation from 1st vector.
VMA_ASSERT(!suballocations1st.empty() &&
offset + request.size <= suballocations1st[m_1stNullItemsBeginCount].offset);
SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
switch (m_2ndVectorMode)
{
case SECOND_VECTOR_EMPTY:
// First allocation from second part ring buffer.
VMA_ASSERT(suballocations2nd.empty());
m_2ndVectorMode = SECOND_VECTOR_RING_BUFFER;
break;
case SECOND_VECTOR_RING_BUFFER:
// 2-part ring buffer is already started.
VMA_ASSERT(!suballocations2nd.empty());
break;
case SECOND_VECTOR_DOUBLE_STACK:
VMA_ASSERT(0 && "CRITICAL ERROR: Trying to use linear allocator as ring buffer while it was already used as double stack.");
break;
default:
VMA_ASSERT(0);
}
suballocations2nd.push_back(newSuballoc);
}
break;
default:
VMA_ASSERT(0 && "CRITICAL INTERNAL ERROR.");
}
m_SumFreeSize -= newSuballoc.size;
}
void VmaBlockMetadata_Linear::Free(VmaAllocHandle allocHandle)
{
SuballocationVectorType& suballocations1st = AccessSuballocations1st();
SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
VkDeviceSize offset = (VkDeviceSize)allocHandle - 1;
if (!suballocations1st.empty())
{
// First allocation: Mark it as next empty at the beginning.
VmaSuballocation& firstSuballoc = suballocations1st[m_1stNullItemsBeginCount];
if (firstSuballoc.offset == offset)
{
firstSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
firstSuballoc.userData = VMA_NULL;
m_SumFreeSize += firstSuballoc.size;
++m_1stNullItemsBeginCount;
CleanupAfterFree();
return;
}
}
// Last allocation in 2-part ring buffer or top of upper stack (same logic).
if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ||
m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
{
VmaSuballocation& lastSuballoc = suballocations2nd.back();
if (lastSuballoc.offset == offset)
{
m_SumFreeSize += lastSuballoc.size;
suballocations2nd.pop_back();
CleanupAfterFree();
return;
}
}
// Last allocation in 1st vector.
else if (m_2ndVectorMode == SECOND_VECTOR_EMPTY)
{
VmaSuballocation& lastSuballoc = suballocations1st.back();
if (lastSuballoc.offset == offset)
{
m_SumFreeSize += lastSuballoc.size;
suballocations1st.pop_back();
CleanupAfterFree();
return;
}
}
VmaSuballocation refSuballoc;
refSuballoc.offset = offset;
// Rest of members stays uninitialized intentionally for better performance.
// Item from the middle of 1st vector.
{
const SuballocationVectorType::iterator it = VmaBinaryFindSorted(
suballocations1st.begin() + m_1stNullItemsBeginCount,
suballocations1st.end(),
refSuballoc,
VmaSuballocationOffsetLess());
if (it != suballocations1st.end())
{
it->type = VMA_SUBALLOCATION_TYPE_FREE;
it->userData = VMA_NULL;
++m_1stNullItemsMiddleCount;
m_SumFreeSize += it->size;
CleanupAfterFree();
return;
}
}
if (m_2ndVectorMode != SECOND_VECTOR_EMPTY)
{
// Item from the middle of 2nd vector.
const SuballocationVectorType::iterator it = m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ?
VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetLess()) :
VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetGreater());
if (it != suballocations2nd.end())
{
it->type = VMA_SUBALLOCATION_TYPE_FREE;
it->userData = VMA_NULL;
++m_2ndNullItemsCount;
m_SumFreeSize += it->size;
CleanupAfterFree();
return;
}
}
VMA_ASSERT(0 && "Allocation to free not found in linear allocator!");
}
void VmaBlockMetadata_Linear::GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo)
{
outInfo.offset = (VkDeviceSize)allocHandle - 1;
VmaSuballocation& suballoc = FindSuballocation(outInfo.offset);
outInfo.size = suballoc.size;
outInfo.pUserData = suballoc.userData;
}
void* VmaBlockMetadata_Linear::GetAllocationUserData(VmaAllocHandle allocHandle) const
{
return FindSuballocation((VkDeviceSize)allocHandle - 1).userData;
}
VmaAllocHandle VmaBlockMetadata_Linear::GetAllocationListBegin() const
{
// Function only used for defragmentation, which is disabled for this algorithm
VMA_ASSERT(0);
return VK_NULL_HANDLE;
}
VmaAllocHandle VmaBlockMetadata_Linear::GetNextAllocation(VmaAllocHandle prevAlloc) const
{
// Function only used for defragmentation, which is disabled for this algorithm
VMA_ASSERT(0);
return VK_NULL_HANDLE;
}
VkDeviceSize VmaBlockMetadata_Linear::GetNextFreeRegionSize(VmaAllocHandle alloc) const
{
// Function only used for defragmentation, which is disabled for this algorithm
VMA_ASSERT(0);
return 0;
}
void VmaBlockMetadata_Linear::Clear()
{
m_SumFreeSize = GetSize();
m_Suballocations0.clear();
m_Suballocations1.clear();
// Leaving m_1stVectorIndex unchanged - it doesn't matter.
m_2ndVectorMode = SECOND_VECTOR_EMPTY;
m_1stNullItemsBeginCount = 0;
m_1stNullItemsMiddleCount = 0;
m_2ndNullItemsCount = 0;
}
void VmaBlockMetadata_Linear::SetAllocationUserData(VmaAllocHandle allocHandle, void* userData)
{
VmaSuballocation& suballoc = FindSuballocation((VkDeviceSize)allocHandle - 1);
suballoc.userData = userData;
}
void VmaBlockMetadata_Linear::DebugLogAllAllocations() const
{
const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
for (auto it = suballocations1st.begin() + m_1stNullItemsBeginCount; it != suballocations1st.end(); ++it)
if (it->type != VMA_SUBALLOCATION_TYPE_FREE)
DebugLogAllocation(it->offset, it->size, it->userData);
const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
for (auto it = suballocations2nd.begin(); it != suballocations2nd.end(); ++it)
if (it->type != VMA_SUBALLOCATION_TYPE_FREE)
DebugLogAllocation(it->offset, it->size, it->userData);
}
VmaSuballocation& VmaBlockMetadata_Linear::FindSuballocation(VkDeviceSize offset) const
{
const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
VmaSuballocation refSuballoc;
refSuballoc.offset = offset;
// Rest of members stays uninitialized intentionally for better performance.
// Item from the 1st vector.
{
SuballocationVectorType::const_iterator it = VmaBinaryFindSorted(
suballocations1st.begin() + m_1stNullItemsBeginCount,
suballocations1st.end(),
refSuballoc,
VmaSuballocationOffsetLess());
if (it != suballocations1st.end())
{
return const_cast<VmaSuballocation&>(*it);
}
}
if (m_2ndVectorMode != SECOND_VECTOR_EMPTY)
{
// Item from the 2nd vector.
SuballocationVectorType::const_iterator it = m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ?
VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetLess()) :
VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetGreater());
if (it != suballocations2nd.end())
{
return const_cast<VmaSuballocation&>(*it);
}
}
VMA_ASSERT(0 && "Allocation not found in linear allocator!");
return const_cast<VmaSuballocation&>(suballocations1st.back()); // Should never occur.
}
bool VmaBlockMetadata_Linear::ShouldCompact1st() const
{
const size_t nullItemCount = m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount;
const size_t suballocCount = AccessSuballocations1st().size();
return suballocCount > 32 && nullItemCount * 2 >= (suballocCount - nullItemCount) * 3;
}
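// Worked example of the threshold above: with suballocCount = 40 and
// nullItemCount = 24, the check is 24 * 2 >= (40 - 24) * 3, i.e. 48 >= 48, so
// compaction triggers once null items reach 1.5x the non-null items (and only
// when the vector holds more than 32 elements).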
void VmaBlockMetadata_Linear::CleanupAfterFree()
{
SuballocationVectorType& suballocations1st = AccessSuballocations1st();
SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
if (IsEmpty())
{
suballocations1st.clear();
suballocations2nd.clear();
m_1stNullItemsBeginCount = 0;
m_1stNullItemsMiddleCount = 0;
m_2ndNullItemsCount = 0;
m_2ndVectorMode = SECOND_VECTOR_EMPTY;
}
else
{
const size_t suballoc1stCount = suballocations1st.size();
const size_t nullItem1stCount = m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount;
VMA_ASSERT(nullItem1stCount <= suballoc1stCount);
// Find more null items at the beginning of 1st vector.
while (m_1stNullItemsBeginCount < suballoc1stCount &&
suballocations1st[m_1stNullItemsBeginCount].type == VMA_SUBALLOCATION_TYPE_FREE)
{
++m_1stNullItemsBeginCount;
--m_1stNullItemsMiddleCount;
}
// Find more null items at the end of 1st vector.
while (m_1stNullItemsMiddleCount > 0 &&
suballocations1st.back().type == VMA_SUBALLOCATION_TYPE_FREE)
{
--m_1stNullItemsMiddleCount;
suballocations1st.pop_back();
}
// Find more null items at the end of 2nd vector.
while (m_2ndNullItemsCount > 0 &&
suballocations2nd.back().type == VMA_SUBALLOCATION_TYPE_FREE)
{
--m_2ndNullItemsCount;
suballocations2nd.pop_back();
}
// Find more null items at the beginning of 2nd vector.
while (m_2ndNullItemsCount > 0 &&
suballocations2nd[0].type == VMA_SUBALLOCATION_TYPE_FREE)
{
--m_2ndNullItemsCount;
VmaVectorRemove(suballocations2nd, 0);
}
if (ShouldCompact1st())
{
const size_t nonNullItemCount = suballoc1stCount - nullItem1stCount;
size_t srcIndex = m_1stNullItemsBeginCount;
for (size_t dstIndex = 0; dstIndex < nonNullItemCount; ++dstIndex)
{
while (suballocations1st[srcIndex].type == VMA_SUBALLOCATION_TYPE_FREE)
{
++srcIndex;
}
if (dstIndex != srcIndex)
{
suballocations1st[dstIndex] = suballocations1st[srcIndex];
}
++srcIndex;
}
suballocations1st.resize(nonNullItemCount);
m_1stNullItemsBeginCount = 0;
m_1stNullItemsMiddleCount = 0;
}
// 2nd vector became empty.
if (suballocations2nd.empty())
{
m_2ndVectorMode = SECOND_VECTOR_EMPTY;
}
// 1st vector became empty.
if (suballocations1st.size() - m_1stNullItemsBeginCount == 0)
{
suballocations1st.clear();
m_1stNullItemsBeginCount = 0;
if (!suballocations2nd.empty() && m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
{
// Swap 1st with 2nd. Now 2nd is empty.
m_2ndVectorMode = SECOND_VECTOR_EMPTY;
m_1stNullItemsMiddleCount = m_2ndNullItemsCount;
while (m_1stNullItemsBeginCount < suballocations2nd.size() &&
suballocations2nd[m_1stNullItemsBeginCount].type == VMA_SUBALLOCATION_TYPE_FREE)
{
++m_1stNullItemsBeginCount;
--m_1stNullItemsMiddleCount;
}
m_2ndNullItemsCount = 0;
m_1stVectorIndex ^= 1;
}
}
}
VMA_HEAVY_ASSERT(Validate());
}
bool VmaBlockMetadata_Linear::CreateAllocationRequest_LowerAddress(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
VmaSuballocationType allocType,
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest)
{
const VkDeviceSize blockSize = GetSize();
const VkDeviceSize debugMargin = GetDebugMargin();
const VkDeviceSize bufferImageGranularity = GetBufferImageGranularity();
SuballocationVectorType& suballocations1st = AccessSuballocations1st();
SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
if (m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
{
// Try to allocate at the end of 1st vector.
VkDeviceSize resultBaseOffset = 0;
if (!suballocations1st.empty())
{
const VmaSuballocation& lastSuballoc = suballocations1st.back();
resultBaseOffset = lastSuballoc.offset + lastSuballoc.size + debugMargin;
}
// Start from offset equal to beginning of free space.
VkDeviceSize resultOffset = resultBaseOffset;
// Apply alignment.
resultOffset = VmaAlignUp(resultOffset, allocAlignment);
// Check previous suballocations for BufferImageGranularity conflicts.
// Make bigger alignment if necessary.
if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations1st.empty())
{
bool bufferImageGranularityConflict = false;
for (size_t prevSuballocIndex = suballocations1st.size(); prevSuballocIndex--; )
{
const VmaSuballocation& prevSuballoc = suballocations1st[prevSuballocIndex];
if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
{
if (VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
{
bufferImageGranularityConflict = true;
break;
}
}
else
// Already on previous page.
break;
}
if (bufferImageGranularityConflict)
{
resultOffset = VmaAlignUp(resultOffset, bufferImageGranularity);
}
}
const VkDeviceSize freeSpaceEnd = m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ?
suballocations2nd.back().offset : blockSize;
// There is enough free space at the end after alignment.
if (resultOffset + allocSize + debugMargin <= freeSpaceEnd)
{
// Check next suballocations for BufferImageGranularity conflicts.
// If conflict exists, allocation cannot be made here.
if ((allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity) && m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
{
for (size_t nextSuballocIndex = suballocations2nd.size(); nextSuballocIndex--; )
{
const VmaSuballocation& nextSuballoc = suballocations2nd[nextSuballocIndex];
if (VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
{
if (VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
{
return false;
}
}
else
{
// Already on previous page.
break;
}
}
}
// All tests passed: Success.
pAllocationRequest->allocHandle = (VmaAllocHandle)(resultOffset + 1);
// pAllocationRequest->item, customData unused.
pAllocationRequest->type = VmaAllocationRequestType::EndOf1st;
return true;
}
}
// Wrap-around to end of 2nd vector. Try to allocate there, watching for the
// beginning of 1st vector as the end of free space.
if (m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
{
VMA_ASSERT(!suballocations1st.empty());
VkDeviceSize resultBaseOffset = 0;
if (!suballocations2nd.empty())
{
const VmaSuballocation& lastSuballoc = suballocations2nd.back();
resultBaseOffset = lastSuballoc.offset + lastSuballoc.size + debugMargin;
}
// Start from offset equal to beginning of free space.
VkDeviceSize resultOffset = resultBaseOffset;
// Apply alignment.
resultOffset = VmaAlignUp(resultOffset, allocAlignment);
// Check previous suballocations for BufferImageGranularity conflicts.
// Make bigger alignment if necessary.
if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations2nd.empty())
{
bool bufferImageGranularityConflict = false;
for (size_t prevSuballocIndex = suballocations2nd.size(); prevSuballocIndex--; )
{
const VmaSuballocation& prevSuballoc = suballocations2nd[prevSuballocIndex];
if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
{
if (VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
{
bufferImageGranularityConflict = true;
break;
}
}
else
// Already on previous page.
break;
}
if (bufferImageGranularityConflict)
{
resultOffset = VmaAlignUp(resultOffset, bufferImageGranularity);
}
}
size_t index1st = m_1stNullItemsBeginCount;
// There is enough free space after alignment: up to blockSize if the 1st vector
// is effectively empty, otherwise up to the first live suballocation in 1st.
if ((index1st == suballocations1st.size() && resultOffset + allocSize + debugMargin <= blockSize) ||
(index1st < suballocations1st.size() && resultOffset + allocSize + debugMargin <= suballocations1st[index1st].offset))
{
// Check next suballocations for BufferImageGranularity conflicts.
// If conflict exists, allocation cannot be made here.
if (allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity)
{
for (size_t nextSuballocIndex = index1st;
nextSuballocIndex < suballocations1st.size();
nextSuballocIndex++)
{
const VmaSuballocation& nextSuballoc = suballocations1st[nextSuballocIndex];
if (VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
{
if (VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
{
return false;
}
}
else
{
// Already on next page.
break;
}
}
}
// All tests passed: Success.
pAllocationRequest->allocHandle = (VmaAllocHandle)(resultOffset + 1);
pAllocationRequest->type = VmaAllocationRequestType::EndOf2nd;
// pAllocationRequest->item, customData unused.
return true;
}
}
return false;
}
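/*
Illustrative layout of the block handled above, in ring-buffer mode (offsets assumed):

| 2nd: C D |  free  | 1st: A B |  free  |
0                                   blockSize

New allocations first try the space after 1st (EndOf1st). When that fails, they
wrap around and grow 2nd toward the beginning of 1st (EndOf2nd).
*/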
bool VmaBlockMetadata_Linear::CreateAllocationRequest_UpperAddress(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
VmaSuballocationType allocType,
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest)
{
const VkDeviceSize blockSize = GetSize();
const VkDeviceSize bufferImageGranularity = GetBufferImageGranularity();
SuballocationVectorType& suballocations1st = AccessSuballocations1st();
SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
{
VMA_ASSERT(0 && "Trying to use pool with linear algorithm as double stack, while it is already being used as ring buffer.");
return false;
}
// Try to allocate before 2nd.back(), or end of block if 2nd.empty().
if (allocSize > blockSize)
{
return false;
}
VkDeviceSize resultBaseOffset = blockSize - allocSize;
if (!suballocations2nd.empty())
{
const VmaSuballocation& lastSuballoc = suballocations2nd.back();
resultBaseOffset = lastSuballoc.offset - allocSize;
if (allocSize > lastSuballoc.offset)
{
return false;
}
}
// Start from offset equal to end of free space.
VkDeviceSize resultOffset = resultBaseOffset;
const VkDeviceSize debugMargin = GetDebugMargin();
// Apply debugMargin at the end.
if (debugMargin > 0)
{
if (resultOffset < debugMargin)
{
return false;
}
resultOffset -= debugMargin;
}
// Apply alignment.
resultOffset = VmaAlignDown(resultOffset, allocAlignment);
// Check next suballocations from 2nd for BufferImageGranularity conflicts.
// Make bigger alignment if necessary.
if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations2nd.empty())
{
bool bufferImageGranularityConflict = false;
for (size_t nextSuballocIndex = suballocations2nd.size(); nextSuballocIndex--; )
{
const VmaSuballocation& nextSuballoc = suballocations2nd[nextSuballocIndex];
if (VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
{
if (VmaIsBufferImageGranularityConflict(nextSuballoc.type, allocType))
{
bufferImageGranularityConflict = true;
break;
}
}
else
// Already on next page.
break;
}
if (bufferImageGranularityConflict)
{
resultOffset = VmaAlignDown(resultOffset, bufferImageGranularity);
}
}
// There is enough free space.
const VkDeviceSize endOf1st = !suballocations1st.empty() ?
suballocations1st.back().offset + suballocations1st.back().size :
0;
if (endOf1st + debugMargin <= resultOffset)
{
// Check previous suballocations for BufferImageGranularity conflicts.
// If conflict exists, allocation cannot be made here.
if (bufferImageGranularity > 1)
{
for (size_t prevSuballocIndex = suballocations1st.size(); prevSuballocIndex--; )
{
const VmaSuballocation& prevSuballoc = suballocations1st[prevSuballocIndex];
if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
{
if (VmaIsBufferImageGranularityConflict(allocType, prevSuballoc.type))
{
return false;
}
}
else
{
// Already on previous page.
break;
}
}
}
// All tests passed: Success.
pAllocationRequest->allocHandle = (VmaAllocHandle)(resultOffset + 1);
// pAllocationRequest->item unused.
pAllocationRequest->type = VmaAllocationRequestType::UpperAddress;
return true;
}
return false;
}
#endif // _VMA_BLOCK_METADATA_LINEAR_FUNCTIONS
#endif // _VMA_BLOCK_METADATA_LINEAR
#if 0
#ifndef _VMA_BLOCK_METADATA_BUDDY
/*
- GetSize() is the original size of allocated memory block.
- m_UsableSize is this size aligned down to a power of two.
All allocations and calculations happen relative to m_UsableSize.
- GetUnusableSize() is the difference between them.
It is reported as separate, unused range, not available for allocations.
Node at level 0 has size = m_UsableSize.
Each next level contains nodes with size 2 times smaller than current level.
m_LevelCount is the maximum number of levels to use in the current object.
*/
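// A worked example (numbers illustrative): for a 100 MB block, m_UsableSize is
// 64 MB (aligned down to a power of two) and GetUnusableSize() is 36 MB, reported
// as a permanently unused range. Level 0 then holds one 64 MB node, level 1 two
// 32 MB nodes, level 2 four 16 MB nodes, and so on.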
class VmaBlockMetadata_Buddy : public VmaBlockMetadata
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockMetadata_Buddy)
public:
VmaBlockMetadata_Buddy(const VkAllocationCallbacks* pAllocationCallbacks,
VkDeviceSize bufferImageGranularity, bool isVirtual);
virtual ~VmaBlockMetadata_Buddy();
size_t GetAllocationCount() const override { return m_AllocationCount; }
VkDeviceSize GetSumFreeSize() const override { return m_SumFreeSize + GetUnusableSize(); }
bool IsEmpty() const override { return m_Root->type == Node::TYPE_FREE; }
VkResult CheckCorruption(const void* pBlockData) override { return VK_ERROR_FEATURE_NOT_PRESENT; }
VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const override { return (VkDeviceSize)allocHandle - 1; }
void DebugLogAllAllocations() const override { DebugLogAllAllocationNode(m_Root, 0); }
void Init(VkDeviceSize size) override;
bool Validate() const override;
void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const override;
void AddStatistics(VmaStatistics& inoutStats) const override;
#if VMA_STATS_STRING_ENABLED
void PrintDetailedMap(class VmaJsonWriter& json, uint32_t mapRefCount) const override;
#endif
bool CreateAllocationRequest(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
bool upperAddress,
VmaSuballocationType allocType,
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest) override;
void Alloc(
const VmaAllocationRequest& request,
VmaSuballocationType type,
void* userData) override;
void Free(VmaAllocHandle allocHandle) override;
void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) override;
void* GetAllocationUserData(VmaAllocHandle allocHandle) const override;
VmaAllocHandle GetAllocationListBegin() const override;
VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const override;
void Clear() override;
void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) override;
private:
static const size_t MAX_LEVELS = 48;
struct ValidationContext
{
size_t calculatedAllocationCount = 0;
size_t calculatedFreeCount = 0;
VkDeviceSize calculatedSumFreeSize = 0;
};
struct Node
{
VkDeviceSize offset;
enum TYPE
{
TYPE_FREE,
TYPE_ALLOCATION,
TYPE_SPLIT,
TYPE_COUNT
} type;
Node* parent;
Node* buddy;
union
{
struct
{
Node* prev;
Node* next;
} free;
struct
{
void* userData;
} allocation;
struct
{
Node* leftChild;
} split;
};
};
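// Which union member of Node is active is determined by 'type': 'free' for
// TYPE_FREE, 'allocation' for TYPE_ALLOCATION, 'split' for TYPE_SPLIT.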
// Size of the memory block aligned down to a power of two.
VkDeviceSize m_UsableSize;
uint32_t m_LevelCount;
VmaPoolAllocator<Node> m_NodeAllocator;
Node* m_Root;
struct
{
Node* front;
Node* back;
} m_FreeList[MAX_LEVELS];
// Number of nodes in the tree with type == TYPE_ALLOCATION.
size_t m_AllocationCount;
// Number of nodes in the tree with type == TYPE_FREE.
size_t m_FreeCount;
// Doesn't include space wasted due to internal fragmentation - allocation sizes are just aligned up to node sizes.
// Doesn't include unusable size.
VkDeviceSize m_SumFreeSize;
VkDeviceSize GetUnusableSize() const { return GetSize() - m_UsableSize; }
VkDeviceSize LevelToNodeSize(uint32_t level) const { return m_UsableSize >> level; }
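// Rounds a requested size up to a whole node size. Illustrative: a 40-byte request
// in a non-virtual block is first aligned up to 48 (a multiple of 16) and then to
// the next power of two, 64, which is the node size actually consumed.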
VkDeviceSize AlignAllocationSize(VkDeviceSize size) const
{
if (!IsVirtual())
{
size = VmaAlignUp(size, (VkDeviceSize)16);
}
return VmaNextPow2(size);
}
Node* FindAllocationNode(VkDeviceSize offset, uint32_t& outLevel) const;
void DeleteNodeChildren(Node* node);
bool ValidateNode(ValidationContext& ctx, const Node* parent, const Node* curr, uint32_t level, VkDeviceSize levelNodeSize) const;
uint32_t AllocSizeToLevel(VkDeviceSize allocSize) const;
void AddNodeToDetailedStatistics(VmaDetailedStatistics& inoutStats, const Node* node, VkDeviceSize levelNodeSize) const;
// Adds node to the front of FreeList at given level.
// node->type must be FREE.
// node->free.prev, next can be undefined.
void AddToFreeListFront(uint32_t level, Node* node);
// Removes node from FreeList at given level.
// node->type must be FREE.
// node->free.prev, next stay untouched.
void RemoveFromFreeList(uint32_t level, Node* node);
void DebugLogAllAllocationNode(Node* node, uint32_t level) const;
#if VMA_STATS_STRING_ENABLED
void PrintDetailedMapNode(class VmaJsonWriter& json, const Node* node, VkDeviceSize levelNodeSize) const;
#endif
};
#ifndef _VMA_BLOCK_METADATA_BUDDY_FUNCTIONS
VmaBlockMetadata_Buddy::VmaBlockMetadata_Buddy(const VkAllocationCallbacks* pAllocationCallbacks,
VkDeviceSize bufferImageGranularity, bool isVirtual)
: VmaBlockMetadata(pAllocationCallbacks, bufferImageGranularity, isVirtual),
m_NodeAllocator(pAllocationCallbacks, 32), // firstBlockCapacity
m_Root(VMA_NULL),
m_AllocationCount(0),
m_FreeCount(1),
m_SumFreeSize(0)
{
memset(m_FreeList, 0, sizeof(m_FreeList));
}
VmaBlockMetadata_Buddy::~VmaBlockMetadata_Buddy()
{
DeleteNodeChildren(m_Root);
m_NodeAllocator.Free(m_Root);
}
void VmaBlockMetadata_Buddy::Init(VkDeviceSize size)
{
VmaBlockMetadata::Init(size);
m_UsableSize = VmaPrevPow2(size);
m_SumFreeSize = m_UsableSize;
// Calculate m_LevelCount.
const VkDeviceSize minNodeSize = IsVirtual() ? 1 : 16;
m_LevelCount = 1;
while (m_LevelCount < MAX_LEVELS &&
LevelToNodeSize(m_LevelCount) >= minNodeSize)
{
++m_LevelCount;
}
Node* rootNode = m_NodeAllocator.Alloc();
rootNode->offset = 0;
rootNode->type = Node::TYPE_FREE;
rootNode->parent = VMA_NULL;
rootNode->buddy = VMA_NULL;
m_Root = rootNode;
AddToFreeListFront(0, rootNode);
}
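// Illustrative result of Init(): for size = 64 MB in a non-virtual block
// (minNodeSize = 16), m_LevelCount becomes 23, i.e. levels 0..22 with node sizes
// from 64 MB down to exactly 16 bytes.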
bool VmaBlockMetadata_Buddy::Validate() const
{
// Validate tree.
ValidationContext ctx;
if (!ValidateNode(ctx, VMA_NULL, m_Root, 0, LevelToNodeSize(0)))
{
VMA_VALIDATE(false && "ValidateNode failed.");
}
VMA_VALIDATE(m_AllocationCount == ctx.calculatedAllocationCount);
VMA_VALIDATE(m_SumFreeSize == ctx.calculatedSumFreeSize);
// Validate free node lists.
for (uint32_t level = 0; level < m_LevelCount; ++level)
{
VMA_VALIDATE(m_FreeList[level].front == VMA_NULL ||
m_FreeList[level].front->free.prev == VMA_NULL);
for (Node* node = m_FreeList[level].front;
node != VMA_NULL;
node = node->free.next)
{
VMA_VALIDATE(node->type == Node::TYPE_FREE);
if (node->free.next == VMA_NULL)
{
VMA_VALIDATE(m_FreeList[level].back == node);
}
else
{
VMA_VALIDATE(node->free.next->free.prev == node);
}
}
}
// Validate that free lists at higher levels are empty.
for (uint32_t level = m_LevelCount; level < MAX_LEVELS; ++level)
{
VMA_VALIDATE(m_FreeList[level].front == VMA_NULL && m_FreeList[level].back == VMA_NULL);
}
return true;
}
void VmaBlockMetadata_Buddy::AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const
{
inoutStats.statistics.blockCount++;
inoutStats.statistics.blockBytes += GetSize();
AddNodeToDetailedStatistics(inoutStats, m_Root, LevelToNodeSize(0));
const VkDeviceSize unusableSize = GetUnusableSize();
if (unusableSize > 0)
VmaAddDetailedStatisticsUnusedRange(inoutStats, unusableSize);
}
void VmaBlockMetadata_Buddy::AddStatistics(VmaStatistics& inoutStats) const
{
inoutStats.blockCount++;
inoutStats.allocationCount += (uint32_t)m_AllocationCount;
inoutStats.blockBytes += GetSize();
inoutStats.allocationBytes += GetSize() - m_SumFreeSize;
}
#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata_Buddy::PrintDetailedMap(class VmaJsonWriter& json, uint32_t mapRefCount) const
{
VmaDetailedStatistics stats;
VmaClearDetailedStatistics(stats);
AddDetailedStatistics(stats);
PrintDetailedMap_Begin(
json,
stats.statistics.blockBytes - stats.statistics.allocationBytes,
stats.statistics.allocationCount,
stats.unusedRangeCount,
mapRefCount);
PrintDetailedMapNode(json, m_Root, LevelToNodeSize(0));
const VkDeviceSize unusableSize = GetUnusableSize();
if (unusableSize > 0)
{
PrintDetailedMap_UnusedRange(json,
m_UsableSize, // offset
unusableSize); // size
}
PrintDetailedMap_End(json);
}
#endif // VMA_STATS_STRING_ENABLED
bool VmaBlockMetadata_Buddy::CreateAllocationRequest(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
bool upperAddress,
VmaSuballocationType allocType,
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest)
{
VMA_ASSERT(!upperAddress && "VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT can be used only with linear algorithm.");
allocSize = AlignAllocationSize(allocSize);
// Simple way to respect bufferImageGranularity. May be optimized some day.
// Whenever the allocation might be an OPTIMAL image, round both alignment and size up to the granularity:
if (allocType == VMA_SUBALLOCATION_TYPE_UNKNOWN ||
allocType == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
allocType == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL)
{
allocAlignment = VMA_MAX(allocAlignment, GetBufferImageGranularity());
allocSize = VmaAlignUp(allocSize, GetBufferImageGranularity());
}
if (allocSize > m_UsableSize)
{
return false;
}
const uint32_t targetLevel = AllocSizeToLevel(allocSize);
for (uint32_t level = targetLevel + 1; level--; )
{
for (Node* freeNode = m_FreeList[level].front;
freeNode != VMA_NULL;
freeNode = freeNode->free.next)
{
if (freeNode->offset % allocAlignment == 0)
{
pAllocationRequest->type = VmaAllocationRequestType::Normal;
pAllocationRequest->allocHandle = (VmaAllocHandle)(freeNode->offset + 1);
pAllocationRequest->size = allocSize;
pAllocationRequest->customData = (void*)(uintptr_t)level;
return true;
}
}
}
return false;
}
void VmaBlockMetadata_Buddy::Alloc(
const VmaAllocationRequest& request,
VmaSuballocationType type,
void* userData)
{
VMA_ASSERT(request.type == VmaAllocationRequestType::Normal);
const uint32_t targetLevel = AllocSizeToLevel(request.size);
uint32_t currLevel = (uint32_t)(uintptr_t)request.customData;
Node* currNode = m_FreeList[currLevel].front;
VMA_ASSERT(currNode != VMA_NULL && currNode->type == Node::TYPE_FREE);
const VkDeviceSize offset = (VkDeviceSize)request.allocHandle - 1;
while (currNode->offset != offset)
{
currNode = currNode->free.next;
VMA_ASSERT(currNode != VMA_NULL && currNode->type == Node::TYPE_FREE);
}
// Go down, splitting free nodes.
while (currLevel < targetLevel)
{
// currNode is already first free node at currLevel.
// Remove it from list of free nodes at this currLevel.
RemoveFromFreeList(currLevel, currNode);
const uint32_t childrenLevel = currLevel + 1;
// Create two free sub-nodes.
Node* leftChild = m_NodeAllocator.Alloc();
Node* rightChild = m_NodeAllocator.Alloc();
leftChild->offset = currNode->offset;
leftChild->type = Node::TYPE_FREE;
leftChild->parent = currNode;
leftChild->buddy = rightChild;
rightChild->offset = currNode->offset + LevelToNodeSize(childrenLevel);
rightChild->type = Node::TYPE_FREE;
rightChild->parent = currNode;
rightChild->buddy = leftChild;
// Convert current currNode to split type.
currNode->type = Node::TYPE_SPLIT;
currNode->split.leftChild = leftChild;
// Add child nodes to free list. Order is important!
AddToFreeListFront(childrenLevel, rightChild);
AddToFreeListFront(childrenLevel, leftChild);
++m_FreeCount;
++currLevel;
currNode = m_FreeList[currLevel].front;
/*
We can be sure that currNode, as left child of node previously split,
also fulfills the alignment requirement.
*/
}
// Remove from free list.
VMA_ASSERT(currLevel == targetLevel &&
currNode != VMA_NULL &&
currNode->type == Node::TYPE_FREE);
RemoveFromFreeList(currLevel, currNode);
// Convert to allocation node.
currNode->type = Node::TYPE_ALLOCATION;
currNode->allocation.userData = userData;
++m_AllocationCount;
--m_FreeCount;
m_SumFreeSize -= request.size;
}
void VmaBlockMetadata_Buddy::GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo)
{
uint32_t level = 0;
outInfo.offset = (VkDeviceSize)allocHandle - 1;
const Node* const node = FindAllocationNode(outInfo.offset, level);
outInfo.size = LevelToNodeSize(level);
outInfo.pUserData = node->allocation.userData;
}
void* VmaBlockMetadata_Buddy::GetAllocationUserData(VmaAllocHandle allocHandle) const
{
uint32_t level = 0;
const Node* const node = FindAllocationNode((VkDeviceSize)allocHandle - 1, level);
return node->allocation.userData;
}
VmaAllocHandle VmaBlockMetadata_Buddy::GetAllocationListBegin() const
{
// Function only used for defragmentation, which is disabled for this algorithm
return VK_NULL_HANDLE;
}
VmaAllocHandle VmaBlockMetadata_Buddy::GetNextAllocation(VmaAllocHandle prevAlloc) const
{
// Function only used for defragmentation, which is disabled for this algorithm
return VK_NULL_HANDLE;
}
void VmaBlockMetadata_Buddy::DeleteNodeChildren(Node* node)
{
if (node->type == Node::TYPE_SPLIT)
{
DeleteNodeChildren(node->split.leftChild->buddy);
DeleteNodeChildren(node->split.leftChild);
m_NodeAllocator.Free(node->split.leftChild->buddy);
m_NodeAllocator.Free(node->split.leftChild);
}
}
void VmaBlockMetadata_Buddy::Clear()
{
DeleteNodeChildren(m_Root);
m_Root->type = Node::TYPE_FREE;
m_AllocationCount = 0;
m_FreeCount = 1;
m_SumFreeSize = m_UsableSize;
}
void VmaBlockMetadata_Buddy::SetAllocationUserData(VmaAllocHandle allocHandle, void* userData)
{
uint32_t level = 0;
Node* const node = FindAllocationNode((VkDeviceSize)allocHandle - 1, level);
node->allocation.userData = userData;
}
VmaBlockMetadata_Buddy::Node* VmaBlockMetadata_Buddy::FindAllocationNode(VkDeviceSize offset, uint32_t& outLevel) const
{
Node* node = m_Root;
VkDeviceSize nodeOffset = 0;
outLevel = 0;
VkDeviceSize levelNodeSize = LevelToNodeSize(0);
while (node->type == Node::TYPE_SPLIT)
{
const VkDeviceSize nextLevelNodeSize = levelNodeSize >> 1;
if (offset < nodeOffset + nextLevelNodeSize)
{
node = node->split.leftChild;
}
else
{
node = node->split.leftChild->buddy;
nodeOffset += nextLevelNodeSize;
}
++outLevel;
levelNodeSize = nextLevelNodeSize;
}
VMA_ASSERT(node != VMA_NULL && node->type == Node::TYPE_ALLOCATION);
return node;
}
bool VmaBlockMetadata_Buddy::ValidateNode(ValidationContext& ctx, const Node* parent, const Node* curr, uint32_t level, VkDeviceSize levelNodeSize) const
{
VMA_VALIDATE(level < m_LevelCount);
VMA_VALIDATE(curr->parent == parent);
VMA_VALIDATE((curr->buddy == VMA_NULL) == (parent == VMA_NULL));
VMA_VALIDATE(curr->buddy == VMA_NULL || curr->buddy->buddy == curr);
switch (curr->type)
{
case Node::TYPE_FREE:
// curr->free.prev, next are validated separately.
ctx.calculatedSumFreeSize += levelNodeSize;
++ctx.calculatedFreeCount;
break;
case Node::TYPE_ALLOCATION:
++ctx.calculatedAllocationCount;
if (!IsVirtual())
{
VMA_VALIDATE(curr->allocation.userData != VMA_NULL);
}
break;
case Node::TYPE_SPLIT:
{
const uint32_t childrenLevel = level + 1;
const VkDeviceSize childrenLevelNodeSize = levelNodeSize >> 1;
const Node* const leftChild = curr->split.leftChild;
VMA_VALIDATE(leftChild != VMA_NULL);
VMA_VALIDATE(leftChild->offset == curr->offset);
if (!ValidateNode(ctx, curr, leftChild, childrenLevel, childrenLevelNodeSize))
{
VMA_VALIDATE(false && "ValidateNode for left child failed.");
}
const Node* const rightChild = leftChild->buddy;
VMA_VALIDATE(rightChild->offset == curr->offset + childrenLevelNodeSize);
if (!ValidateNode(ctx, curr, rightChild, childrenLevel, childrenLevelNodeSize))
{
VMA_VALIDATE(false && "ValidateNode for right child failed.");
}
}
break;
default:
return false;
}
return true;
}
uint32_t VmaBlockMetadata_Buddy::AllocSizeToLevel(VkDeviceSize allocSize) const
{
// I know this could be optimized somehow e.g. by using std::bit_width from C++20.
uint32_t level = 0;
VkDeviceSize currLevelNodeSize = m_UsableSize;
VkDeviceSize nextLevelNodeSize = currLevelNodeSize >> 1;
while (allocSize <= nextLevelNodeSize && level + 1 < m_LevelCount)
{
++level;
currLevelNodeSize >>= 1;
nextLevelNodeSize >>= 1;
}
return level;
}
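// Illustrative: with m_UsableSize = 64 MB, a 1 MB request stops descending when the
// next level's node size (512 KB) would be too small, yielding level 6, whose node
// size is exactly 1 MB.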
void VmaBlockMetadata_Buddy::Free(VmaAllocHandle allocHandle)
{
uint32_t level = 0;
Node* node = FindAllocationNode((VkDeviceSize)allocHandle - 1, level);
++m_FreeCount;
--m_AllocationCount;
m_SumFreeSize += LevelToNodeSize(level);
node->type = Node::TYPE_FREE;
// Join free nodes if possible.
while (level > 0 && node->buddy->type == Node::TYPE_FREE)
{
RemoveFromFreeList(level, node->buddy);
Node* const parent = node->parent;
m_NodeAllocator.Free(node->buddy);
m_NodeAllocator.Free(node);
parent->type = Node::TYPE_FREE;
node = parent;
--level;
--m_FreeCount;
}
AddToFreeListFront(level, node);
}
void VmaBlockMetadata_Buddy::AddNodeToDetailedStatistics(VmaDetailedStatistics& inoutStats, const Node* node, VkDeviceSize levelNodeSize) const
{
switch (node->type)
{
case Node::TYPE_FREE:
VmaAddDetailedStatisticsUnusedRange(inoutStats, levelNodeSize);
break;
case Node::TYPE_ALLOCATION:
VmaAddDetailedStatisticsAllocation(inoutStats, levelNodeSize);
break;
case Node::TYPE_SPLIT:
{
const VkDeviceSize childrenNodeSize = levelNodeSize / 2;
const Node* const leftChild = node->split.leftChild;
AddNodeToDetailedStatistics(inoutStats, leftChild, childrenNodeSize);
const Node* const rightChild = leftChild->buddy;
AddNodeToDetailedStatistics(inoutStats, rightChild, childrenNodeSize);
}
break;
default:
VMA_ASSERT(0);
}
}
void VmaBlockMetadata_Buddy::AddToFreeListFront(uint32_t level, Node* node)
{
VMA_ASSERT(node->type == Node::TYPE_FREE);
// List is empty.
Node* const frontNode = m_FreeList[level].front;
if (frontNode == VMA_NULL)
{
VMA_ASSERT(m_FreeList[level].back == VMA_NULL);
node->free.prev = node->free.next = VMA_NULL;
m_FreeList[level].front = m_FreeList[level].back = node;
}
else
{
VMA_ASSERT(frontNode->free.prev == VMA_NULL);
node->free.prev = VMA_NULL;
node->free.next = frontNode;
frontNode->free.prev = node;
m_FreeList[level].front = node;
}
}
void VmaBlockMetadata_Buddy::RemoveFromFreeList(uint32_t level, Node* node)
{
VMA_ASSERT(m_FreeList[level].front != VMA_NULL);
// It is at the front.
if (node->free.prev == VMA_NULL)
{
VMA_ASSERT(m_FreeList[level].front == node);
m_FreeList[level].front = node->free.next;
}
else
{
Node* const prevFreeNode = node->free.prev;
VMA_ASSERT(prevFreeNode->free.next == node);
prevFreeNode->free.next = node->free.next;
}
// It is at the back.
if (node->free.next == VMA_NULL)
{
VMA_ASSERT(m_FreeList[level].back == node);
m_FreeList[level].back = node->free.prev;
}
else
{
Node* const nextFreeNode = node->free.next;
VMA_ASSERT(nextFreeNode->free.prev == node);
nextFreeNode->free.prev = node->free.prev;
}
}
void VmaBlockMetadata_Buddy::DebugLogAllAllocationNode(Node* node, uint32_t level) const
{
switch (node->type)
{
case Node::TYPE_FREE:
break;
case Node::TYPE_ALLOCATION:
DebugLogAllocation(node->offset, LevelToNodeSize(level), node->allocation.userData);
break;
case Node::TYPE_SPLIT:
{
++level;
DebugLogAllAllocationNode(node->split.leftChild, level);
DebugLogAllAllocationNode(node->split.leftChild->buddy, level);
}
break;
default:
VMA_ASSERT(0);
}
}
#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata_Buddy::PrintDetailedMapNode(class VmaJsonWriter& json, const Node* node, VkDeviceSize levelNodeSize) const
{
switch (node->type)
{
case Node::TYPE_FREE:
PrintDetailedMap_UnusedRange(json, node->offset, levelNodeSize);
break;
case Node::TYPE_ALLOCATION:
PrintDetailedMap_Allocation(json, node->offset, levelNodeSize, node->allocation.userData);
break;
case Node::TYPE_SPLIT:
{
const VkDeviceSize childrenNodeSize = levelNodeSize / 2;
const Node* const leftChild = node->split.leftChild;
PrintDetailedMapNode(json, leftChild, childrenNodeSize);
const Node* const rightChild = leftChild->buddy;
PrintDetailedMapNode(json, rightChild, childrenNodeSize);
}
break;
default:
VMA_ASSERT(0);
}
}
#endif // VMA_STATS_STRING_ENABLED
#endif // _VMA_BLOCK_METADATA_BUDDY_FUNCTIONS
#endif // _VMA_BLOCK_METADATA_BUDDY
#endif // #if 0
#ifndef _VMA_BLOCK_METADATA_TLSF
// With VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT as the strategy in
// CreateAllocationRequest(), the best-fit bucket is searched first instead of the
// next-larger one, trading allocation time for less wasted memory.
// When fragmentation and reuse of previous blocks don't matter, use
// VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT for the fastest possible allocation time.
class VmaBlockMetadata_TLSF : public VmaBlockMetadata
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockMetadata_TLSF)
public:
VmaBlockMetadata_TLSF(const VkAllocationCallbacks* pAllocationCallbacks,
VkDeviceSize bufferImageGranularity, bool isVirtual);
virtual ~VmaBlockMetadata_TLSF();
size_t GetAllocationCount() const override { return m_AllocCount; }
size_t GetFreeRegionsCount() const override { return m_BlocksFreeCount + 1; }
VkDeviceSize GetSumFreeSize() const override { return m_BlocksFreeSize + m_NullBlock->size; }
bool IsEmpty() const override { return m_NullBlock->offset == 0; }
VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const override { return ((Block*)allocHandle)->offset; }
void Init(VkDeviceSize size) override;
bool Validate() const override;
void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const override;
void AddStatistics(VmaStatistics& inoutStats) const override;
#if VMA_STATS_STRING_ENABLED
void PrintDetailedMap(class VmaJsonWriter& json) const override;
#endif
bool CreateAllocationRequest(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
bool upperAddress,
VmaSuballocationType allocType,
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest) override;
VkResult CheckCorruption(const void* pBlockData) override;
void Alloc(
const VmaAllocationRequest& request,
VmaSuballocationType type,
void* userData) override;
void Free(VmaAllocHandle allocHandle) override;
void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) override;
void* GetAllocationUserData(VmaAllocHandle allocHandle) const override;
VmaAllocHandle GetAllocationListBegin() const override;
VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const override;
VkDeviceSize GetNextFreeRegionSize(VmaAllocHandle alloc) const override;
void Clear() override;
void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) override;
void DebugLogAllAllocations() const override;
private:
// According to the original paper, the preferable value is 4 or 5:
// M. Masmano, I. Ripoll, A. Crespo, and J. Real "TLSF: a New Dynamic Memory Allocator for Real-Time Systems"
// http://www.gii.upv.es/tlsf/files/ecrts04_tlsf.pdf
static const uint8_t SECOND_LEVEL_INDEX = 5;
static const uint16_t SMALL_BUFFER_SIZE = 256;
static const uint32_t INITIAL_BLOCK_ALLOC_COUNT = 16;
static const uint8_t MEMORY_CLASS_SHIFT = 7;
static const uint8_t MAX_MEMORY_CLASSES = 65 - MEMORY_CLASS_SHIFT;
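// 65 is one more than the 64 bits of VkDeviceSize, so after subtracting
// MEMORY_CLASS_SHIFT the memory classes cover the full range of representable sizes.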
class Block
{
public:
VkDeviceSize offset;
VkDeviceSize size;
Block* prevPhysical;
Block* nextPhysical;
void MarkFree() { prevFree = VMA_NULL; }
void MarkTaken() { prevFree = this; }
bool IsFree() const { return prevFree != this; }
void*& UserData() { VMA_HEAVY_ASSERT(!IsFree()); return userData; }
Block*& PrevFree() { return prevFree; }
Block*& NextFree() { VMA_HEAVY_ASSERT(IsFree()); return nextFree; }
private:
Block* prevFree; // Address of the same block here indicates that block is taken
union
{
Block* nextFree;
void* userData;
};
};
size_t m_AllocCount;
// Total number of free blocks besides null block
size_t m_BlocksFreeCount;
// Total size of free blocks excluding null block
VkDeviceSize m_BlocksFreeSize;
uint32_t m_IsFreeBitmap;
uint8_t m_MemoryClasses;
uint32_t m_InnerIsFreeBitmap[MAX_MEMORY_CLASSES];
uint32_t m_ListsCount;
/*
* 0: 0-3 lists for small buffers
* 1+: 0-(2^SLI-1) lists for normal buffers
*/
Block** m_FreeList;
VmaPoolAllocator<Block> m_BlockAllocator;
Block* m_NullBlock;
VmaBlockBufferImageGranularity m_GranularityHandler;
uint8_t SizeToMemoryClass(VkDeviceSize size) const;
uint16_t SizeToSecondIndex(VkDeviceSize size, uint8_t memoryClass) const;
uint32_t GetListIndex(uint8_t memoryClass, uint16_t secondIndex) const;
uint32_t GetListIndex(VkDeviceSize size) const;
void RemoveFreeBlock(Block* block);
void InsertFreeBlock(Block* block);
void MergeBlock(Block* block, Block* prev);
Block* FindFreeBlock(VkDeviceSize size, uint32_t& listIndex) const;
bool CheckBlock(
Block& block,
uint32_t listIndex,
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
VmaSuballocationType allocType,
VmaAllocationRequest* pAllocationRequest);
};
#ifndef _VMA_BLOCK_METADATA_TLSF_FUNCTIONS
VmaBlockMetadata_TLSF::VmaBlockMetadata_TLSF(const VkAllocationCallbacks* pAllocationCallbacks,
VkDeviceSize bufferImageGranularity, bool isVirtual)
: VmaBlockMetadata(pAllocationCallbacks, bufferImageGranularity, isVirtual),
m_AllocCount(0),
m_BlocksFreeCount(0),
m_BlocksFreeSize(0),
m_IsFreeBitmap(0),
m_MemoryClasses(0),
m_ListsCount(0),
m_FreeList(VMA_NULL),
m_BlockAllocator(pAllocationCallbacks, INITIAL_BLOCK_ALLOC_COUNT),
m_NullBlock(VMA_NULL),
m_GranularityHandler(bufferImageGranularity) {}
VmaBlockMetadata_TLSF::~VmaBlockMetadata_TLSF()
{
if (m_FreeList)
vma_delete_array(GetAllocationCallbacks(), m_FreeList, m_ListsCount);
m_GranularityHandler.Destroy(GetAllocationCallbacks());
}
void VmaBlockMetadata_TLSF::Init(VkDeviceSize size)
{
VmaBlockMetadata::Init(size);
if (!IsVirtual())
m_GranularityHandler.Init(GetAllocationCallbacks(), size);
m_NullBlock = m_BlockAllocator.Alloc();
m_NullBlock->size = size;
m_NullBlock->offset = 0;
m_NullBlock->prevPhysical = VMA_NULL;
m_NullBlock->nextPhysical = VMA_NULL;
m_NullBlock->MarkFree();
m_NullBlock->NextFree() = VMA_NULL;
m_NullBlock->PrevFree() = VMA_NULL;
uint8_t memoryClass = SizeToMemoryClass(size);
uint16_t sli = SizeToSecondIndex(size, memoryClass);
m_ListsCount = (memoryClass == 0 ? 0 : (memoryClass - 1) * (1UL << SECOND_LEVEL_INDEX) + sli) + 1;
if (IsVirtual())
m_ListsCount += 1UL << SECOND_LEVEL_INDEX;
else
m_ListsCount += 4;
m_MemoryClasses = memoryClass + uint8_t(2);
memset(m_InnerIsFreeBitmap, 0, MAX_MEMORY_CLASSES * sizeof(uint32_t));
m_FreeList = vma_new_array(GetAllocationCallbacks(), Block*, m_ListsCount);
memset(m_FreeList, 0, m_ListsCount * sizeof(Block*));
}
bool VmaBlockMetadata_TLSF::Validate() const
{
VMA_VALIDATE(GetSumFreeSize() <= GetSize());
VkDeviceSize calculatedSize = m_NullBlock->size;
VkDeviceSize calculatedFreeSize = m_NullBlock->size;
size_t allocCount = 0;
size_t freeCount = 0;
// Check integrity of free lists
for (uint32_t list = 0; list < m_ListsCount; ++list)
{
Block* block = m_FreeList[list];
if (block != VMA_NULL)
{
VMA_VALIDATE(block->IsFree());
VMA_VALIDATE(block->PrevFree() == VMA_NULL);
while (block->NextFree())
{
VMA_VALIDATE(block->NextFree()->IsFree());
VMA_VALIDATE(block->NextFree()->PrevFree() == block);
block = block->NextFree();
}
}
}
VkDeviceSize nextOffset = m_NullBlock->offset;
auto validateCtx = m_GranularityHandler.StartValidation(GetAllocationCallbacks(), IsVirtual());
VMA_VALIDATE(m_NullBlock->nextPhysical == VMA_NULL);
if (m_NullBlock->prevPhysical)
{
VMA_VALIDATE(m_NullBlock->prevPhysical->nextPhysical == m_NullBlock);
}
// Check all blocks
for (Block* prev = m_NullBlock->prevPhysical; prev != VMA_NULL; prev = prev->prevPhysical)
{
VMA_VALIDATE(prev->offset + prev->size == nextOffset);
nextOffset = prev->offset;
calculatedSize += prev->size;
uint32_t listIndex = GetListIndex(prev->size);
if (prev->IsFree())
{
++freeCount;
// Check if free block belongs to free list
Block* freeBlock = m_FreeList[listIndex];
VMA_VALIDATE(freeBlock != VMA_NULL);
bool found = false;
do
{
if (freeBlock == prev)
found = true;
freeBlock = freeBlock->NextFree();
} while (!found && freeBlock != VMA_NULL);
VMA_VALIDATE(found);
calculatedFreeSize += prev->size;
}
else
{
++allocCount;
// Check if taken block is not on a free list
Block* freeBlock = m_FreeList[listIndex];
while (freeBlock)
{
VMA_VALIDATE(freeBlock != prev);
freeBlock = freeBlock->NextFree();
}
if (!IsVirtual())
{
VMA_VALIDATE(m_GranularityHandler.Validate(validateCtx, prev->offset, prev->size));
}
}
if (prev->prevPhysical)
{
VMA_VALIDATE(prev->prevPhysical->nextPhysical == prev);
}
}
if (!IsVirtual())
{
VMA_VALIDATE(m_GranularityHandler.FinishValidation(validateCtx));
}
VMA_VALIDATE(nextOffset == 0);
VMA_VALIDATE(calculatedSize == GetSize());
VMA_VALIDATE(calculatedFreeSize == GetSumFreeSize());
VMA_VALIDATE(allocCount == m_AllocCount);
VMA_VALIDATE(freeCount == m_BlocksFreeCount);
return true;
}
void VmaBlockMetadata_TLSF::AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const
{
inoutStats.statistics.blockCount++;
inoutStats.statistics.blockBytes += GetSize();
if (m_NullBlock->size > 0)
VmaAddDetailedStatisticsUnusedRange(inoutStats, m_NullBlock->size);
for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
{
if (block->IsFree())
VmaAddDetailedStatisticsUnusedRange(inoutStats, block->size);
else
VmaAddDetailedStatisticsAllocation(inoutStats, block->size);
}
}
void VmaBlockMetadata_TLSF::AddStatistics(VmaStatistics& inoutStats) const
{
inoutStats.blockCount++;
inoutStats.allocationCount += (uint32_t)m_AllocCount;
inoutStats.blockBytes += GetSize();
inoutStats.allocationBytes += GetSize() - GetSumFreeSize();
}
#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata_TLSF::PrintDetailedMap(class VmaJsonWriter& json) const
{
size_t blockCount = m_AllocCount + m_BlocksFreeCount;
VmaStlAllocator<Block*> allocator(GetAllocationCallbacks());
VmaVector<Block*, VmaStlAllocator<Block*>> blockList(blockCount, allocator);
size_t i = blockCount;
for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
{
blockList[--i] = block;
}
VMA_ASSERT(i == 0);
VmaDetailedStatistics stats;
VmaClearDetailedStatistics(stats);
AddDetailedStatistics(stats);
PrintDetailedMap_Begin(json,
stats.statistics.blockBytes - stats.statistics.allocationBytes,
stats.statistics.allocationCount,
stats.unusedRangeCount);
for (; i < blockCount; ++i)
{
Block* block = blockList[i];
if (block->IsFree())
PrintDetailedMap_UnusedRange(json, block->offset, block->size);
else
PrintDetailedMap_Allocation(json, block->offset, block->size, block->UserData());
}
if (m_NullBlock->size > 0)
PrintDetailedMap_UnusedRange(json, m_NullBlock->offset, m_NullBlock->size);
PrintDetailedMap_End(json);
}
#endif
bool VmaBlockMetadata_TLSF::CreateAllocationRequest(
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
bool upperAddress,
VmaSuballocationType allocType,
uint32_t strategy,
VmaAllocationRequest* pAllocationRequest)
{
VMA_ASSERT(allocSize > 0 && "Cannot allocate empty block!");
VMA_ASSERT(!upperAddress && "VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT can be used only with linear algorithm.");
// For small granularity round up
if (!IsVirtual())
m_GranularityHandler.RoundupAllocRequest(allocType, allocSize, allocAlignment);
allocSize += GetDebugMargin();
// Quick check for too small pool
if (allocSize > GetSumFreeSize())
return false;
// If no free blocks in pool then check only null block
if (m_BlocksFreeCount == 0)
return CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest);
// Round up to the next block
VkDeviceSize sizeForNextList = allocSize;
VkDeviceSize smallSizeStep = VkDeviceSize(SMALL_BUFFER_SIZE / (IsVirtual() ? 1 << SECOND_LEVEL_INDEX : 4));
if (allocSize > SMALL_BUFFER_SIZE)
{
sizeForNextList += (1ULL << (VMA_BITSCAN_MSB(allocSize) - SECOND_LEVEL_INDEX));
}
else if (allocSize > SMALL_BUFFER_SIZE - smallSizeStep)
sizeForNextList = SMALL_BUFFER_SIZE + 1;
else
sizeForNextList += smallSizeStep;
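// Illustrative: for allocSize = 1000 (> SMALL_BUFFER_SIZE), MSB(1000) = 9, so
// sizeForNextList = 1000 + (1 << 4) = 1016, bumping the request into the next
// second-level bucket, where every listed block is guaranteed to be large enough.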
uint32_t nextListIndex = m_ListsCount;
uint32_t prevListIndex = m_ListsCount;
Block* nextListBlock = VMA_NULL;
Block* prevListBlock = VMA_NULL;
// Check blocks according to strategies
if (strategy & VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT)
{
// Quick check for larger block first
nextListBlock = FindFreeBlock(sizeForNextList, nextListIndex);
if (nextListBlock != VMA_NULL && CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
// If it did not fit, try the null block
if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
// Null block failed, search larger bucket
while (nextListBlock)
{
if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
nextListBlock = nextListBlock->NextFree();
}
// Failed again, check best fit bucket
prevListBlock = FindFreeBlock(allocSize, prevListIndex);
while (prevListBlock)
{
if (CheckBlock(*prevListBlock, prevListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
prevListBlock = prevListBlock->NextFree();
}
}
else if (strategy & VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT)
{
// Check best fit bucket
prevListBlock = FindFreeBlock(allocSize, prevListIndex);
while (prevListBlock)
{
if (CheckBlock(*prevListBlock, prevListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
prevListBlock = prevListBlock->NextFree();
}
// If failed check null block
if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
// Check larger bucket
nextListBlock = FindFreeBlock(sizeForNextList, nextListIndex);
while (nextListBlock)
{
if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
nextListBlock = nextListBlock->NextFree();
}
}
else if (strategy & VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT )
{
// Perform search from the start
VmaStlAllocator<Block*> allocator(GetAllocationCallbacks());
VmaVector<Block*, VmaStlAllocator<Block*>> blockList(m_BlocksFreeCount, allocator);
size_t i = m_BlocksFreeCount;
for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
{
if (block->IsFree() && block->size >= allocSize)
blockList[--i] = block;
}
for (; i < m_BlocksFreeCount; ++i)
{
Block& block = *blockList[i];
if (CheckBlock(block, GetListIndex(block.size), allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
}
// If failed check null block
if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
// Whole range searched, no more memory
return false;
}
else
{
// Check larger bucket
nextListBlock = FindFreeBlock(sizeForNextList, nextListIndex);
while (nextListBlock)
{
if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
nextListBlock = nextListBlock->NextFree();
}
// If failed check null block
if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
// Check best fit bucket
prevListBlock = FindFreeBlock(allocSize, prevListIndex);
while (prevListBlock)
{
if (CheckBlock(*prevListBlock, prevListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
prevListBlock = prevListBlock->NextFree();
}
}
// Worst case, full search has to be done
while (++nextListIndex < m_ListsCount)
{
nextListBlock = m_FreeList[nextListIndex];
while (nextListBlock)
{
if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
return true;
nextListBlock = nextListBlock->NextFree();
}
}
// No more memory sadly
return false;
}
VkResult VmaBlockMetadata_TLSF::CheckCorruption(const void* pBlockData)
{
for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
{
if (!block->IsFree())
{
if (!VmaValidateMagicValue(pBlockData, block->offset + block->size))
{
VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
return VK_ERROR_UNKNOWN_COPY;
}
}
}
return VK_SUCCESS;
}
void VmaBlockMetadata_TLSF::Alloc(
const VmaAllocationRequest& request,
VmaSuballocationType type,
void* userData)
{
VMA_ASSERT(request.type == VmaAllocationRequestType::TLSF);
// Get block and pop it from the free list
Block* currentBlock = (Block*)request.allocHandle;
VkDeviceSize offset = request.algorithmData;
VMA_ASSERT(currentBlock != VMA_NULL);
VMA_ASSERT(currentBlock->offset <= offset);
if (currentBlock != m_NullBlock)
RemoveFreeBlock(currentBlock);
VkDeviceSize debugMargin = GetDebugMargin();
VkDeviceSize missingAlignment = offset - currentBlock->offset;
// Append missing alignment to prev block or create new one
if (missingAlignment)
{
Block* prevBlock = currentBlock->prevPhysical;
VMA_ASSERT(prevBlock != VMA_NULL && "There should be no missing alignment at offset 0!");
if (prevBlock->IsFree() && prevBlock->size != debugMargin)
{
uint32_t oldList = GetListIndex(prevBlock->size);
prevBlock->size += missingAlignment;
// Check if new size crosses list bucket
if (oldList != GetListIndex(prevBlock->size))
{
prevBlock->size -= missingAlignment;
RemoveFreeBlock(prevBlock);
prevBlock->size += missingAlignment;
InsertFreeBlock(prevBlock);
}
else
m_BlocksFreeSize += missingAlignment;
}
else
{
Block* newBlock = m_BlockAllocator.Alloc();
currentBlock->prevPhysical = newBlock;
prevBlock->nextPhysical = newBlock;
newBlock->prevPhysical = prevBlock;
newBlock->nextPhysical = currentBlock;
newBlock->size = missingAlignment;
newBlock->offset = currentBlock->offset;
newBlock->MarkTaken();
InsertFreeBlock(newBlock);
}
currentBlock->size -= missingAlignment;
currentBlock->offset += missingAlignment;
}
VkDeviceSize size = request.size + debugMargin;
if (currentBlock->size == size)
{
if (currentBlock == m_NullBlock)
{
// Setup new null block
m_NullBlock = m_BlockAllocator.Alloc();
m_NullBlock->size = 0;
m_NullBlock->offset = currentBlock->offset + size;
m_NullBlock->prevPhysical = currentBlock;
m_NullBlock->nextPhysical = VMA_NULL;
m_NullBlock->MarkFree();
m_NullBlock->PrevFree() = VMA_NULL;
m_NullBlock->NextFree() = VMA_NULL;
currentBlock->nextPhysical = m_NullBlock;
currentBlock->MarkTaken();
}
}
else
{
VMA_ASSERT(currentBlock->size > size && "Proper block already found, shouldn't find smaller one!");
// Create new free block
Block* newBlock = m_BlockAllocator.Alloc();
newBlock->size = currentBlock->size - size;
newBlock->offset = currentBlock->offset + size;
newBlock->prevPhysical = currentBlock;
newBlock->nextPhysical = currentBlock->nextPhysical;
currentBlock->nextPhysical = newBlock;
currentBlock->size = size;
if (currentBlock == m_NullBlock)
{
m_NullBlock = newBlock;
m_NullBlock->MarkFree();
m_NullBlock->NextFree() = VMA_NULL;
m_NullBlock->PrevFree() = VMA_NULL;
currentBlock->MarkTaken();
}
else
{
newBlock->nextPhysical->prevPhysical = newBlock;
newBlock->MarkTaken();
InsertFreeBlock(newBlock);
}
}
currentBlock->UserData() = userData;
if (debugMargin > 0)
{
currentBlock->size -= debugMargin;
Block* newBlock = m_BlockAllocator.Alloc();
newBlock->size = debugMargin;
newBlock->offset = currentBlock->offset + currentBlock->size;
newBlock->prevPhysical = currentBlock;
newBlock->nextPhysical = currentBlock->nextPhysical;
newBlock->MarkTaken();
currentBlock->nextPhysical->prevPhysical = newBlock;
currentBlock->nextPhysical = newBlock;
InsertFreeBlock(newBlock);
}
if (!IsVirtual())
m_GranularityHandler.AllocPages((uint8_t)(uintptr_t)request.customData,
currentBlock->offset, currentBlock->size);
++m_AllocCount;
}
void VmaBlockMetadata_TLSF::Free(VmaAllocHandle allocHandle)
{
Block* block = (Block*)allocHandle;
Block* next = block->nextPhysical;
VMA_ASSERT(!block->IsFree() && "Block is already free!");
if (!IsVirtual())
m_GranularityHandler.FreePages(block->offset, block->size);
--m_AllocCount;
VkDeviceSize debugMargin = GetDebugMargin();
if (debugMargin > 0)
{
RemoveFreeBlock(next);
MergeBlock(next, block);
block = next;
next = next->nextPhysical;
}
// Try merging
Block* prev = block->prevPhysical;
if (prev != VMA_NULL && prev->IsFree() && prev->size != debugMargin)
{
RemoveFreeBlock(prev);
MergeBlock(block, prev);
}
if (!next->IsFree())
InsertFreeBlock(block);
else if (next == m_NullBlock)
MergeBlock(m_NullBlock, block);
else
{
RemoveFreeBlock(next);
MergeBlock(next, block);
InsertFreeBlock(next);
}
}
void VmaBlockMetadata_TLSF::GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo)
{
Block* block = (Block*)allocHandle;
VMA_ASSERT(!block->IsFree() && "Cannot get allocation info for free block!");
outInfo.offset = block->offset;
outInfo.size = block->size;
outInfo.pUserData = block->UserData();
}
void* VmaBlockMetadata_TLSF::GetAllocationUserData(VmaAllocHandle allocHandle) const
{
Block* block = (Block*)allocHandle;
VMA_ASSERT(!block->IsFree() && "Cannot get user data for free block!");
return block->UserData();
}
VmaAllocHandle VmaBlockMetadata_TLSF::GetAllocationListBegin() const
{
if (m_AllocCount == 0)
return VK_NULL_HANDLE;
for (Block* block = m_NullBlock->prevPhysical; block; block = block->prevPhysical)
{
if (!block->IsFree())
return (VmaAllocHandle)block;
}
VMA_ASSERT(false && "If m_AllocCount > 0 then there must be at least one allocation!");
return VK_NULL_HANDLE;
}
VmaAllocHandle VmaBlockMetadata_TLSF::GetNextAllocation(VmaAllocHandle prevAlloc) const
{
Block* startBlock = (Block*)prevAlloc;
VMA_ASSERT(!startBlock->IsFree() && "Incorrect block!");
for (Block* block = startBlock->prevPhysical; block; block = block->prevPhysical)
{
if (!block->IsFree())
return (VmaAllocHandle)block;
}
return VK_NULL_HANDLE;
}
VkDeviceSize VmaBlockMetadata_TLSF::GetNextFreeRegionSize(VmaAllocHandle alloc) const
{
Block* block = (Block*)alloc;
VMA_ASSERT(!block->IsFree() && "Incorrect block!");
if (block->prevPhysical)
return block->prevPhysical->IsFree() ? block->prevPhysical->size : 0;
return 0;
}
void VmaBlockMetadata_TLSF::Clear()
{
m_AllocCount = 0;
m_BlocksFreeCount = 0;
m_BlocksFreeSize = 0;
m_IsFreeBitmap = 0;
m_NullBlock->offset = 0;
m_NullBlock->size = GetSize();
Block* block = m_NullBlock->prevPhysical;
m_NullBlock->prevPhysical = VMA_NULL;
while (block)
{
Block* prev = block->prevPhysical;
m_BlockAllocator.Free(block);
block = prev;
}
memset(m_FreeList, 0, m_ListsCount * sizeof(Block*));
memset(m_InnerIsFreeBitmap, 0, m_MemoryClasses * sizeof(uint32_t));
m_GranularityHandler.Clear();
}
void VmaBlockMetadata_TLSF::SetAllocationUserData(VmaAllocHandle allocHandle, void* userData)
{
Block* block = (Block*)allocHandle;
VMA_ASSERT(!block->IsFree() && "Trying to set user data for not allocated block!");
block->UserData() = userData;
}
void VmaBlockMetadata_TLSF::DebugLogAllAllocations() const
{
for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
if (!block->IsFree())
DebugLogAllocation(block->offset, block->size, block->UserData());
}
uint8_t VmaBlockMetadata_TLSF::SizeToMemoryClass(VkDeviceSize size) const
{
if (size > SMALL_BUFFER_SIZE)
return uint8_t(VMA_BITSCAN_MSB(size) - MEMORY_CLASS_SHIFT);
return 0;
}
uint16_t VmaBlockMetadata_TLSF::SizeToSecondIndex(VkDeviceSize size, uint8_t memoryClass) const
{
if (memoryClass == 0)
{
if (IsVirtual())
return static_cast<uint16_t>((size - 1) / 8);
else
return static_cast<uint16_t>((size - 1) / 64);
}
return static_cast<uint16_t>((size >> (memoryClass + MEMORY_CLASS_SHIFT - SECOND_LEVEL_INDEX)) ^ (1U << SECOND_LEVEL_INDEX));
}
uint32_t VmaBlockMetadata_TLSF::GetListIndex(uint8_t memoryClass, uint16_t secondIndex) const
{
if (memoryClass == 0)
return secondIndex;
const uint32_t index = static_cast<uint32_t>(memoryClass - 1) * (1 << SECOND_LEVEL_INDEX) + secondIndex;
if (IsVirtual())
return index + (1 << SECOND_LEVEL_INDEX);
else
return index + 4;
}
uint32_t VmaBlockMetadata_TLSF::GetListIndex(VkDeviceSize size) const
{
uint8_t memoryClass = SizeToMemoryClass(size);
return GetListIndex(memoryClass, SizeToSecondIndex(size, memoryClass));
}
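/*
Worked example of the indexing above (values illustrative), for size = 2000 in a
non-virtual block:
  SizeToMemoryClass: MSB(2000) = 10, so memoryClass = 10 - MEMORY_CLASS_SHIFT = 3.
  SizeToSecondIndex: (2000 >> (3 + 7 - 5)) ^ (1 << 5) = 62 ^ 32 = 30.
  GetListIndex:      (3 - 1) * 32 + 30 + 4 = 98.
Sizes <= SMALL_BUFFER_SIZE fall into memoryClass 0 with linear 64-byte steps.
*/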
void VmaBlockMetadata_TLSF::RemoveFreeBlock(Block* block)
{
VMA_ASSERT(block != m_NullBlock);
VMA_ASSERT(block->IsFree());
if (block->NextFree() != VMA_NULL)
block->NextFree()->PrevFree() = block->PrevFree();
if (block->PrevFree() != VMA_NULL)
block->PrevFree()->NextFree() = block->NextFree();
else
{
uint8_t memClass = SizeToMemoryClass(block->size);
uint16_t secondIndex = SizeToSecondIndex(block->size, memClass);
uint32_t index = GetListIndex(memClass, secondIndex);
VMA_ASSERT(m_FreeList[index] == block);
m_FreeList[index] = block->NextFree();
if (block->NextFree() == VMA_NULL)
{
m_InnerIsFreeBitmap[memClass] &= ~(1U << secondIndex);
if (m_InnerIsFreeBitmap[memClass] == 0)
m_IsFreeBitmap &= ~(1UL << memClass);
}
}
block->MarkTaken();
block->UserData() = VMA_NULL;
--m_BlocksFreeCount;
m_BlocksFreeSize -= block->size;
}
void VmaBlockMetadata_TLSF::InsertFreeBlock(Block* block)
{
VMA_ASSERT(block != m_NullBlock);
VMA_ASSERT(!block->IsFree() && "Cannot insert block twice!");
uint8_t memClass = SizeToMemoryClass(block->size);
uint16_t secondIndex = SizeToSecondIndex(block->size, memClass);
uint32_t index = GetListIndex(memClass, secondIndex);
VMA_ASSERT(index < m_ListsCount);
block->PrevFree() = VMA_NULL;
block->NextFree() = m_FreeList[index];
m_FreeList[index] = block;
if (block->NextFree() != VMA_NULL)
block->NextFree()->PrevFree() = block;
else
{
m_InnerIsFreeBitmap[memClass] |= 1U << secondIndex;
m_IsFreeBitmap |= 1UL << memClass;
}
++m_BlocksFreeCount;
m_BlocksFreeSize += block->size;
}
void VmaBlockMetadata_TLSF::MergeBlock(Block* block, Block* prev)
{
VMA_ASSERT(block->prevPhysical == prev && "Cannot merge separate physical regions!");
VMA_ASSERT(!prev->IsFree() && "Cannot merge block that belongs to free list!");
block->offset = prev->offset;
block->size += prev->size;
block->prevPhysical = prev->prevPhysical;
if (block->prevPhysical)
block->prevPhysical->nextPhysical = block;
m_BlockAllocator.Free(prev);
}
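// Two-level bitmap search. Illustrative: a request mapping to memoryClass 3 and
// secondIndex 4 first masks m_InnerIsFreeBitmap[3] with ~0U << 4; if nothing is set
// there, m_IsFreeBitmap masked with ~0UL << 4 selects the lowest populated higher
// class, whose lowest set inner bit names the smallest list whose blocks all fit.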
VmaBlockMetadata_TLSF::Block* VmaBlockMetadata_TLSF::FindFreeBlock(VkDeviceSize size, uint32_t& listIndex) const
{
uint8_t memoryClass = SizeToMemoryClass(size);
uint32_t innerFreeMap = m_InnerIsFreeBitmap[memoryClass] & (~0U << SizeToSecondIndex(size, memoryClass));
if (!innerFreeMap)
{
// Check higher levels for available blocks
uint32_t freeMap = m_IsFreeBitmap & (~0UL << (memoryClass + 1));
if (!freeMap)
return VMA_NULL; // No more memory available
// Find lowest free region
memoryClass = VMA_BITSCAN_LSB(freeMap);
innerFreeMap = m_InnerIsFreeBitmap[memoryClass];
VMA_ASSERT(innerFreeMap != 0);
}
// Find lowest free subregion
listIndex = GetListIndex(memoryClass, VMA_BITSCAN_LSB(innerFreeMap));
VMA_ASSERT(m_FreeList[listIndex]);
return m_FreeList[listIndex];
}
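// Note: CheckBlock() stores the aligned offset in pAllocationRequest->algorithmData;
// Alloc() later turns the gap between block.offset and that aligned offset into a
// separate block (the "missing alignment" handling above).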
bool VmaBlockMetadata_TLSF::CheckBlock(
Block& block,
uint32_t listIndex,
VkDeviceSize allocSize,
VkDeviceSize allocAlignment,
VmaSuballocationType allocType,
VmaAllocationRequest* pAllocationRequest)
{
VMA_ASSERT(block.IsFree() && "Block is already taken!");
VkDeviceSize alignedOffset = VmaAlignUp(block.offset, allocAlignment);
if (block.size < allocSize + alignedOffset - block.offset)
return false;
// Check for granularity conflicts
if (!IsVirtual() &&
m_GranularityHandler.CheckConflictAndAlignUp(alignedOffset, allocSize, block.offset, block.size, allocType))
return false;
// Alloc successful
pAllocationRequest->type = VmaAllocationRequestType::TLSF;
pAllocationRequest->allocHandle = (VmaAllocHandle)&block;
pAllocationRequest->size = allocSize - GetDebugMargin();
pAllocationRequest->customData = (void*)allocType;
pAllocationRequest->algorithmData = alignedOffset;
// Place block at the start of the list if it's a normal block
if (listIndex != m_ListsCount && block.PrevFree())
{
block.PrevFree()->NextFree() = block.NextFree();
if (block.NextFree())
block.NextFree()->PrevFree() = block.PrevFree();
block.PrevFree() = VMA_NULL;
block.NextFree() = m_FreeList[listIndex];
m_FreeList[listIndex] = &block;
if (block.NextFree())
block.NextFree()->PrevFree() = &block;
}
return true;
}
#endif // _VMA_BLOCK_METADATA_TLSF_FUNCTIONS
#endif // _VMA_BLOCK_METADATA_TLSF
#ifndef _VMA_BLOCK_VECTOR
/*
Sequence of VmaDeviceMemoryBlock. Represents memory blocks allocated for a specific
Vulkan memory type.
Synchronized internally with a mutex.
*/
class VmaBlockVector
{
friend struct VmaDefragmentationContext_T;
VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockVector)
public:
VmaBlockVector(
VmaAllocator hAllocator,
VmaPool hParentPool,
uint32_t memoryTypeIndex,
VkDeviceSize preferredBlockSize,
size_t minBlockCount,
size_t maxBlockCount,
VkDeviceSize bufferImageGranularity,
bool explicitBlockSize,
uint32_t algorithm,
float priority,
VkDeviceSize minAllocationAlignment,
void* pMemoryAllocateNext);
~VmaBlockVector();
VmaAllocator GetAllocator() const { return m_hAllocator; }
VmaPool GetParentPool() const { return m_hParentPool; }
bool IsCustomPool() const { return m_hParentPool != VMA_NULL; }
uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
VkDeviceSize GetPreferredBlockSize() const { return m_PreferredBlockSize; }
VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
uint32_t GetAlgorithm() const { return m_Algorithm; }
bool HasExplicitBlockSize() const { return m_ExplicitBlockSize; }
float GetPriority() const { return m_Priority; }
const void* GetAllocationNextPtr() const { return m_pMemoryAllocateNext; }
// To be used only while the m_Mutex is locked. Used during defragmentation.
size_t GetBlockCount() const { return m_Blocks.size(); }
// To be used only while the m_Mutex is locked. Used during defragmentation.
VmaDeviceMemoryBlock* GetBlock(size_t index) const { return m_Blocks[index]; }
VMA_RW_MUTEX &GetMutex() { return m_Mutex; }
VkResult CreateMinBlocks();
void AddStatistics(VmaStatistics& inoutStats);
void AddDetailedStatistics(VmaDetailedStatistics& inoutStats);
bool IsEmpty();
bool IsCorruptionDetectionEnabled() const;
VkResult Allocate(
VkDeviceSize size,
VkDeviceSize alignment,
const VmaAllocationCreateInfo& createInfo,
VmaSuballocationType suballocType,
size_t allocationCount,
VmaAllocation* pAllocations);
void Free(const VmaAllocation hAllocation);
#if VMA_STATS_STRING_ENABLED
void PrintDetailedMap(class VmaJsonWriter& json);
#endif
VkResult CheckCorruption();
private:
const VmaAllocator m_hAllocator;
const VmaPool m_hParentPool;
const uint32_t m_MemoryTypeIndex;
const VkDeviceSize m_PreferredBlockSize;
const size_t m_MinBlockCount;
const size_t m_MaxBlockCount;
const VkDeviceSize m_BufferImageGranularity;
const bool m_ExplicitBlockSize;
const uint32_t m_Algorithm;
const float m_Priority;
const VkDeviceSize m_MinAllocationAlignment;
void* const m_pMemoryAllocateNext;
VMA_RW_MUTEX m_Mutex;
// Incrementally sorted by sumFreeSize, ascending.
VmaVector<VmaDeviceMemoryBlock*, VmaStlAllocator<VmaDeviceMemoryBlock*>> m_Blocks;
uint32_t m_NextBlockId;
bool m_IncrementalSort = true;
void SetIncrementalSort(bool val) { m_IncrementalSort = val; }
VkDeviceSize CalcMaxBlockSize() const;
// Finds and removes given block from vector.
void Remove(VmaDeviceMemoryBlock* pBlock);
// Performs single step in sorting m_Blocks. They may not be fully sorted
// after this call.
void IncrementallySortBlocks();
void SortByFreeSize();
VkResult AllocatePage(
VkDeviceSize size,
VkDeviceSize alignment,
const VmaAllocationCreateInfo& createInfo,
VmaSuballocationType suballocType,
VmaAllocation* pAllocation);
VkResult AllocateFromBlock(
VmaDeviceMemoryBlock* pBlock,
VkDeviceSize size,
VkDeviceSize alignment,
VmaAllocationCreateFlags allocFlags,
void* pUserData,
VmaSuballocationType suballocType,
uint32_t strategy,
VmaAllocation* pAllocation);
VkResult CommitAllocationRequest(
VmaAllocationRequest& allocRequest,
VmaDeviceMemoryBlock* pBlock,
VkDeviceSize alignment,
VmaAllocationCreateFlags allocFlags,
void* pUserData,
VmaSuballocationType suballocType,
VmaAllocation* pAllocation);
VkResult CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex);
bool HasEmptyBlock();
};
#endif // _VMA_BLOCK_VECTOR
#ifndef _VMA_DEFRAGMENTATION_CONTEXT
struct VmaDefragmentationContext_T
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaDefragmentationContext_T)
public:
VmaDefragmentationContext_T(
VmaAllocator hAllocator,
const VmaDefragmentationInfo& info);
~VmaDefragmentationContext_T();
void GetStats(VmaDefragmentationStats& outStats) { outStats = m_GlobalStats; }
VkResult DefragmentPassBegin(VmaDefragmentationPassMoveInfo& moveInfo);
VkResult DefragmentPassEnd(VmaDefragmentationPassMoveInfo& moveInfo);
private:
// Max number of allocations to ignore due to size constraints before ending single pass
static const uint8_t MAX_ALLOCS_TO_IGNORE = 16;
enum class CounterStatus { Pass, Ignore, End };
struct FragmentedBlock
{
uint32_t data;
VmaDeviceMemoryBlock* block;
};
struct StateBalanced
{
VkDeviceSize avgFreeSize = 0;
VkDeviceSize avgAllocSize = UINT64_MAX;
};
struct StateExtensive
{
enum class Operation : uint8_t
{
FindFreeBlockBuffer, FindFreeBlockTexture, FindFreeBlockAll,
MoveBuffers, MoveTextures, MoveAll,
Cleanup, Done
};
Operation operation = Operation::FindFreeBlockTexture;
size_t firstFreeBlock = SIZE_MAX;
};
struct MoveAllocationData
{
VkDeviceSize size;
VkDeviceSize alignment;
VmaSuballocationType type;
VmaAllocationCreateFlags flags;
VmaDefragmentationMove move = {};
};
const VkDeviceSize m_MaxPassBytes;
const uint32_t m_MaxPassAllocations;
const PFN_vmaCheckDefragmentationBreakFunction m_BreakCallback;
void* m_BreakCallbackUserData;
VmaStlAllocator<VmaDefragmentationMove> m_MoveAllocator;
VmaVector<VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove>> m_Moves;
uint8_t m_IgnoredAllocs = 0;
uint32_t m_Algorithm;
uint32_t m_BlockVectorCount;
VmaBlockVector* m_PoolBlockVector;
VmaBlockVector** m_pBlockVectors;
size_t m_ImmovableBlockCount = 0;
VmaDefragmentationStats m_GlobalStats = { 0 };
VmaDefragmentationStats m_PassStats = { 0 };
void* m_AlgorithmState = VMA_NULL;
static MoveAllocationData GetMoveData(VmaAllocHandle handle, VmaBlockMetadata* metadata);
CounterStatus CheckCounters(VkDeviceSize bytes);
bool IncrementCounters(VkDeviceSize bytes);
bool ReallocWithinBlock(VmaBlockVector& vector, VmaDeviceMemoryBlock* block);
bool AllocInOtherBlock(size_t start, size_t end, MoveAllocationData& data, VmaBlockVector& vector);
bool ComputeDefragmentation(VmaBlockVector& vector, size_t index);
bool ComputeDefragmentation_Fast(VmaBlockVector& vector);
bool ComputeDefragmentation_Balanced(VmaBlockVector& vector, size_t index, bool update);
bool ComputeDefragmentation_Full(VmaBlockVector& vector);
bool ComputeDefragmentation_Extensive(VmaBlockVector& vector, size_t index);
void UpdateVectorStatistics(VmaBlockVector& vector, StateBalanced& state);
bool MoveDataToFreeBlocks(VmaSuballocationType currentType,
VmaBlockVector& vector, size_t firstFreeBlock,
bool& texturePresent, bool& bufferPresent, bool& otherPresent);
};
#endif // _VMA_DEFRAGMENTATION_CONTEXT
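/*
Illustrative sketch (not part of the library): the public defragmentation API
loop that this context backs. Error handling and the actual GPU copy
submission are omitted.

    VmaDefragmentationInfo defragInfo = {};
    defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT;
    VmaDefragmentationContext defragCtx;
    vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
    for (;;)
    {
        VmaDefragmentationPassMoveInfo pass;
        if (vmaBeginDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
            break; // Nothing left to move.
        // Record and submit copies for pass.pMoves[0..pass.moveCount) here,
        // and wait for them to finish before ending the pass.
        if (vmaEndDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
            break; // Defragmentation complete.
    }
    vmaEndDefragmentation(allocator, defragCtx, VMA_NULL);
*/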
#ifndef _VMA_POOL_T
struct VmaPool_T
{
friend struct VmaPoolListItemTraits;
VMA_CLASS_NO_COPY_NO_MOVE(VmaPool_T)
public:
VmaBlockVector m_BlockVector;
VmaDedicatedAllocationList m_DedicatedAllocations;
VmaPool_T(
VmaAllocator hAllocator,
const VmaPoolCreateInfo& createInfo,
VkDeviceSize preferredBlockSize);
~VmaPool_T();
uint32_t GetId() const { return m_Id; }
void SetId(uint32_t id) { VMA_ASSERT(m_Id == 0); m_Id = id; }
const char* GetName() const { return m_Name; }
void SetName(const char* pName);
#if VMA_STATS_STRING_ENABLED
//void PrintDetailedMap(class VmaStringBuilder& sb);
#endif
private:
uint32_t m_Id;
char* m_Name;
VmaPool_T* m_PrevPool = VMA_NULL;
VmaPool_T* m_NextPool = VMA_NULL;
};
struct VmaPoolListItemTraits
{
typedef VmaPool_T ItemType;
static ItemType* GetPrev(const ItemType* item) { return item->m_PrevPool; }
static ItemType* GetNext(const ItemType* item) { return item->m_NextPool; }
static ItemType*& AccessPrev(ItemType* item) { return item->m_PrevPool; }
static ItemType*& AccessNext(ItemType* item) { return item->m_NextPool; }
};
#endif // _VMA_POOL_T
#ifndef _VMA_CURRENT_BUDGET_DATA
struct VmaCurrentBudgetData
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaCurrentBudgetData)
public:
VMA_ATOMIC_UINT32 m_BlockCount[VK_MAX_MEMORY_HEAPS];
VMA_ATOMIC_UINT32 m_AllocationCount[VK_MAX_MEMORY_HEAPS];
VMA_ATOMIC_UINT64 m_BlockBytes[VK_MAX_MEMORY_HEAPS];
VMA_ATOMIC_UINT64 m_AllocationBytes[VK_MAX_MEMORY_HEAPS];
#if VMA_MEMORY_BUDGET
VMA_ATOMIC_UINT32 m_OperationsSinceBudgetFetch;
VMA_RW_MUTEX m_BudgetMutex;
uint64_t m_VulkanUsage[VK_MAX_MEMORY_HEAPS];
uint64_t m_VulkanBudget[VK_MAX_MEMORY_HEAPS];
uint64_t m_BlockBytesAtBudgetFetch[VK_MAX_MEMORY_HEAPS];
#endif // VMA_MEMORY_BUDGET
VmaCurrentBudgetData();
void AddAllocation(uint32_t heapIndex, VkDeviceSize allocationSize);
void RemoveAllocation(uint32_t heapIndex, VkDeviceSize allocationSize);
};
#ifndef _VMA_CURRENT_BUDGET_DATA_FUNCTIONS
VmaCurrentBudgetData::VmaCurrentBudgetData()
{
for (uint32_t heapIndex = 0; heapIndex < VK_MAX_MEMORY_HEAPS; ++heapIndex)
{
m_BlockCount[heapIndex] = 0;
m_AllocationCount[heapIndex] = 0;
m_BlockBytes[heapIndex] = 0;
m_AllocationBytes[heapIndex] = 0;
#if VMA_MEMORY_BUDGET
m_VulkanUsage[heapIndex] = 0;
m_VulkanBudget[heapIndex] = 0;
m_BlockBytesAtBudgetFetch[heapIndex] = 0;
#endif
}
#if VMA_MEMORY_BUDGET
m_OperationsSinceBudgetFetch = 0;
#endif
}
void VmaCurrentBudgetData::AddAllocation(uint32_t heapIndex, VkDeviceSize allocationSize)
{
m_AllocationBytes[heapIndex] += allocationSize;
++m_AllocationCount[heapIndex];
#if VMA_MEMORY_BUDGET
++m_OperationsSinceBudgetFetch;
#endif
}
void VmaCurrentBudgetData::RemoveAllocation(uint32_t heapIndex, VkDeviceSize allocationSize)
{
VMA_ASSERT(m_AllocationBytes[heapIndex] >= allocationSize);
m_AllocationBytes[heapIndex] -= allocationSize;
VMA_ASSERT(m_AllocationCount[heapIndex] > 0);
--m_AllocationCount[heapIndex];
#if VMA_MEMORY_BUDGET
++m_OperationsSinceBudgetFetch;
#endif
}
#endif // _VMA_CURRENT_BUDGET_DATA_FUNCTIONS
#endif // _VMA_CURRENT_BUDGET_DATA
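/*
How these counters are typically combined (a sketch of the idea, not exact
library code): between VK_EXT_memory_budget fetches, current heap usage is
estimated as

    usage ~= m_VulkanUsage[heap] + (m_BlockBytes[heap] - m_BlockBytesAtBudgetFetch[heap])

i.e. the usage the driver reported at the last fetch, corrected by the block
bytes this allocator has allocated or freed since then.
*/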
#ifndef _VMA_ALLOCATION_OBJECT_ALLOCATOR
/*
Thread-safe wrapper over the VmaPoolAllocator free list, used to allocate VmaAllocation_T objects.
*/
class VmaAllocationObjectAllocator
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaAllocationObjectAllocator)
public:
VmaAllocationObjectAllocator(const VkAllocationCallbacks* pAllocationCallbacks)
: m_Allocator(pAllocationCallbacks, 1024) {}
template<typename... Types> VmaAllocation Allocate(Types&&... args);
void Free(VmaAllocation hAlloc);
private:
VMA_MUTEX m_Mutex;
VmaPoolAllocator<VmaAllocation_T> m_Allocator;
};
template<typename... Types>
VmaAllocation VmaAllocationObjectAllocator::Allocate(Types&&... args)
{
VmaMutexLock mutexLock(m_Mutex);
return m_Allocator.Alloc<Types...>(std::forward<Types>(args)...);
}
void VmaAllocationObjectAllocator::Free(VmaAllocation hAlloc)
{
VmaMutexLock mutexLock(m_Mutex);
m_Allocator.Free(hAlloc);
}
#endif // _VMA_ALLOCATION_OBJECT_ALLOCATOR
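/*
A sketch of the intended pairing, matching its use elsewhere in this file.
Allocate() forwards its arguments to the VmaAllocation_T constructor:

    VmaAllocation alloc = m_AllocationObjectAllocator.Allocate(mappingAllowed);
    // ...
    m_AllocationObjectAllocator.Free(alloc);
*/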
#ifndef _VMA_VIRTUAL_BLOCK_T
struct VmaVirtualBlock_T
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaVirtualBlock_T)
public:
const bool m_AllocationCallbacksSpecified;
const VkAllocationCallbacks m_AllocationCallbacks;
VmaVirtualBlock_T(const VmaVirtualBlockCreateInfo& createInfo);
~VmaVirtualBlock_T();
VkResult Init() { return VK_SUCCESS; }
bool IsEmpty() const { return m_Metadata->IsEmpty(); }
void Free(VmaVirtualAllocation allocation) { m_Metadata->Free((VmaAllocHandle)allocation); }
void SetAllocationUserData(VmaVirtualAllocation allocation, void* userData) { m_Metadata->SetAllocationUserData((VmaAllocHandle)allocation, userData); }
void Clear() { m_Metadata->Clear(); }
const VkAllocationCallbacks* GetAllocationCallbacks() const;
void GetAllocationInfo(VmaVirtualAllocation allocation, VmaVirtualAllocationInfo& outInfo);
VkResult Allocate(const VmaVirtualAllocationCreateInfo& createInfo, VmaVirtualAllocation& outAllocation,
VkDeviceSize* outOffset);
void GetStatistics(VmaStatistics& outStats) const;
void CalculateDetailedStatistics(VmaDetailedStatistics& outStats) const;
#if VMA_STATS_STRING_ENABLED
void BuildStatsString(bool detailedMap, VmaStringBuilder& sb) const;
#endif
private:
VmaBlockMetadata* m_Metadata;
};
#ifndef _VMA_VIRTUAL_BLOCK_T_FUNCTIONS
VmaVirtualBlock_T::VmaVirtualBlock_T(const VmaVirtualBlockCreateInfo& createInfo)
: m_AllocationCallbacksSpecified(createInfo.pAllocationCallbacks != VMA_NULL),
m_AllocationCallbacks(createInfo.pAllocationCallbacks != VMA_NULL ? *createInfo.pAllocationCallbacks : VmaEmptyAllocationCallbacks)
{
const uint32_t algorithm = createInfo.flags & VMA_VIRTUAL_BLOCK_CREATE_ALGORITHM_MASK;
switch (algorithm)
{
case 0:
m_Metadata = vma_new(GetAllocationCallbacks(), VmaBlockMetadata_TLSF)(VK_NULL_HANDLE, 1, true);
break;
case VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT:
m_Metadata = vma_new(GetAllocationCallbacks(), VmaBlockMetadata_Linear)(VK_NULL_HANDLE, 1, true);
break;
default:
VMA_ASSERT(0);
m_Metadata = vma_new(GetAllocationCallbacks(), VmaBlockMetadata_TLSF)(VK_NULL_HANDLE, 1, true);
}
m_Metadata->Init(createInfo.size);
}
VmaVirtualBlock_T::~VmaVirtualBlock_T()
{
// Define the macro VMA_DEBUG_LOG_FORMAT to receive the list of unfreed allocations.
if (!m_Metadata->IsEmpty())
m_Metadata->DebugLogAllAllocations();
// This is the most important assert in the entire library.
// Hitting it means you have a memory leak: some virtual allocations were not released.
VMA_ASSERT(m_Metadata->IsEmpty() && "Some virtual allocations were not freed before destruction of this virtual block!");
vma_delete(GetAllocationCallbacks(), m_Metadata);
}
const VkAllocationCallbacks* VmaVirtualBlock_T::GetAllocationCallbacks() const
{
return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : VMA_NULL;
}
void VmaVirtualBlock_T::GetAllocationInfo(VmaVirtualAllocation allocation, VmaVirtualAllocationInfo& outInfo)
{
m_Metadata->GetAllocationInfo((VmaAllocHandle)allocation, outInfo);
}
VkResult VmaVirtualBlock_T::Allocate(const VmaVirtualAllocationCreateInfo& createInfo, VmaVirtualAllocation& outAllocation,
VkDeviceSize* outOffset)
{
VmaAllocationRequest request = {};
if (m_Metadata->CreateAllocationRequest(
createInfo.size, // allocSize
VMA_MAX(createInfo.alignment, (VkDeviceSize)1), // allocAlignment
(createInfo.flags & VMA_VIRTUAL_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0, // upperAddress
VMA_SUBALLOCATION_TYPE_UNKNOWN, // allocType - unimportant
createInfo.flags & VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MASK, // strategy
&request))
{
m_Metadata->Alloc(request,
VMA_SUBALLOCATION_TYPE_UNKNOWN, // type - unimportant
createInfo.pUserData);
outAllocation = (VmaVirtualAllocation)request.allocHandle;
if(outOffset)
*outOffset = m_Metadata->GetAllocationOffset(request.allocHandle);
return VK_SUCCESS;
}
outAllocation = (VmaVirtualAllocation)VK_NULL_HANDLE;
if (outOffset)
*outOffset = UINT64_MAX;
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
void VmaVirtualBlock_T::GetStatistics(VmaStatistics& outStats) const
{
VmaClearStatistics(outStats);
m_Metadata->AddStatistics(outStats);
}
void VmaVirtualBlock_T::CalculateDetailedStatistics(VmaDetailedStatistics& outStats) const
{
VmaClearDetailedStatistics(outStats);
m_Metadata->AddDetailedStatistics(outStats);
}
#if VMA_STATS_STRING_ENABLED
void VmaVirtualBlock_T::BuildStatsString(bool detailedMap, VmaStringBuilder& sb) const
{
VmaJsonWriter json(GetAllocationCallbacks(), sb);
json.BeginObject();
VmaDetailedStatistics stats;
CalculateDetailedStatistics(stats);
json.WriteString("Stats");
VmaPrintDetailedStatistics(json, stats);
if (detailedMap)
{
json.WriteString("Details");
json.BeginObject();
m_Metadata->PrintDetailedMap(json);
json.EndObject();
}
json.EndObject();
}
#endif // VMA_STATS_STRING_ENABLED
#endif // _VMA_VIRTUAL_BLOCK_T_FUNCTIONS
#endif // _VMA_VIRTUAL_BLOCK_T
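/*
Illustrative sketch (not part of the library): typical use of the public
virtual allocation API implemented by this struct.

    VmaVirtualBlockCreateInfo blockCreateInfo = {};
    blockCreateInfo.size = 1048576; // 1 MiB of address space to sub-allocate.
    VmaVirtualBlock block;
    vmaCreateVirtualBlock(&blockCreateInfo, &block);

    VmaVirtualAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.size = 4096;
    VmaVirtualAllocation alloc;
    VkDeviceSize offset;
    if (vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset) == VK_SUCCESS)
    {
        // Use the sub-range [offset, offset + 4096) of your own resource...
        vmaVirtualFree(block, alloc);
    }
    vmaDestroyVirtualBlock(block);
*/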
// Main allocator object.
struct VmaAllocator_T
{
VMA_CLASS_NO_COPY_NO_MOVE(VmaAllocator_T)
public:
bool m_UseMutex;
uint32_t m_VulkanApiVersion;
bool m_UseKhrDedicatedAllocation; // Can be set only if m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0).
bool m_UseKhrBindMemory2; // Can be set only if m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0).
bool m_UseExtMemoryBudget;
bool m_UseAmdDeviceCoherentMemory;
bool m_UseKhrBufferDeviceAddress;
bool m_UseExtMemoryPriority;
VkDevice m_hDevice;
VkInstance m_hInstance;
bool m_AllocationCallbacksSpecified;
VkAllocationCallbacks m_AllocationCallbacks;
VmaDeviceMemoryCallbacks m_DeviceMemoryCallbacks;
VmaAllocationObjectAllocator m_AllocationObjectAllocator;
// Each bit (1 << i) is set if HeapSizeLimit is enabled for that heap, so we cannot allocate more than the heap size.
uint32_t m_HeapSizeLimitMask;
VkPhysicalDeviceProperties m_PhysicalDeviceProperties;
VkPhysicalDeviceMemoryProperties m_MemProps;
// Default pools.
VmaBlockVector* m_pBlockVectors[VK_MAX_MEMORY_TYPES];
VmaDedicatedAllocationList m_DedicatedAllocations[VK_MAX_MEMORY_TYPES];
VmaCurrentBudgetData m_Budget;
VMA_ATOMIC_UINT32 m_DeviceMemoryCount; // Total number of VkDeviceMemory objects.
VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo);
VkResult Init(const VmaAllocatorCreateInfo* pCreateInfo);
~VmaAllocator_T();
const VkAllocationCallbacks* GetAllocationCallbacks() const
{
return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : VMA_NULL;
}
const VmaVulkanFunctions& GetVulkanFunctions() const
{
return m_VulkanFunctions;
}
VkPhysicalDevice GetPhysicalDevice() const { return m_PhysicalDevice; }
VkDeviceSize GetBufferImageGranularity() const
{
return VMA_MAX(
static_cast<VkDeviceSize>(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY),
m_PhysicalDeviceProperties.limits.bufferImageGranularity);
}
uint32_t GetMemoryHeapCount() const { return m_MemProps.memoryHeapCount; }
uint32_t GetMemoryTypeCount() const { return m_MemProps.memoryTypeCount; }
uint32_t MemoryTypeIndexToHeapIndex(uint32_t memTypeIndex) const
{
VMA_ASSERT(memTypeIndex < m_MemProps.memoryTypeCount);
return m_MemProps.memoryTypes[memTypeIndex].heapIndex;
}
// True when the given memory type is HOST_VISIBLE but not HOST_COHERENT.
bool IsMemoryTypeNonCoherent(uint32_t memTypeIndex) const
{
return (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & (VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)) ==
VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
}
// Minimum alignment for all allocations in the given memory type.
VkDeviceSize GetMemoryTypeMinAlignment(uint32_t memTypeIndex) const
{
return IsMemoryTypeNonCoherent(memTypeIndex) ?
VMA_MAX((VkDeviceSize)VMA_MIN_ALIGNMENT, m_PhysicalDeviceProperties.limits.nonCoherentAtomSize) :
(VkDeviceSize)VMA_MIN_ALIGNMENT;
}
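// Example (values depend on the device): with VMA_MIN_ALIGNMENT == 1 and
// limits.nonCoherentAtomSize == 64, allocations in a HOST_VISIBLE but
// non-HOST_COHERENT memory type get a minimum alignment of 64, helping keep
// flush/invalidate ranges from overlapping neighboring allocations.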
bool IsIntegratedGpu() const
{
return m_PhysicalDeviceProperties.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU;
}
uint32_t GetGlobalMemoryTypeBits() const { return m_GlobalMemoryTypeBits; }
void GetBufferMemoryRequirements(
VkBuffer hBuffer,
VkMemoryRequirements& memReq,
bool& requiresDedicatedAllocation,
bool& prefersDedicatedAllocation) const;
void GetImageMemoryRequirements(
VkImage hImage,
VkMemoryRequirements& memReq,
bool& requiresDedicatedAllocation,
bool& prefersDedicatedAllocation) const;
VkResult FindMemoryTypeIndex(
uint32_t memoryTypeBits,
const VmaAllocationCreateInfo* pAllocationCreateInfo,
VkFlags bufImgUsage, // VkBufferCreateInfo::usage or VkImageCreateInfo::usage. UINT32_MAX if unknown.
uint32_t* pMemoryTypeIndex) const;
// Main allocation function.
VkResult AllocateMemory(
const VkMemoryRequirements& vkMemReq,
bool requiresDedicatedAllocation,
bool prefersDedicatedAllocation,
VkBuffer dedicatedBuffer,
VkImage dedicatedImage,
VkFlags dedicatedBufferImageUsage, // UINT32_MAX if unknown.
const VmaAllocationCreateInfo& createInfo,
VmaSuballocationType suballocType,
size_t allocationCount,
VmaAllocation* pAllocations);
// Main deallocation function.
void FreeMemory(
size_t allocationCount,
const VmaAllocation* pAllocations);
void CalculateStatistics(VmaTotalStatistics* pStats);
void GetHeapBudgets(
VmaBudget* outBudgets, uint32_t firstHeap, uint32_t heapCount);
#if VMA_STATS_STRING_ENABLED
void PrintDetailedMap(class VmaJsonWriter& json);
#endif
void GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo);
VkResult CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool);
void DestroyPool(VmaPool pool);
void GetPoolStatistics(VmaPool pool, VmaStatistics* pPoolStats);
void CalculatePoolStatistics(VmaPool pool, VmaDetailedStatistics* pPoolStats);
void SetCurrentFrameIndex(uint32_t frameIndex);
uint32_t GetCurrentFrameIndex() const { return m_CurrentFrameIndex.load(); }
VkResult CheckPoolCorruption(VmaPool hPool);
VkResult CheckCorruption(uint32_t memoryTypeBits);
// Call to Vulkan function vkAllocateMemory with accompanying bookkeeping.
VkResult AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory);
// Call to Vulkan function vkFreeMemory with accompanying bookkeeping.
void FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory);
// Call to Vulkan function vkBindBufferMemory or vkBindBufferMemory2KHR.
VkResult BindVulkanBuffer(
VkDeviceMemory memory,
VkDeviceSize memoryOffset,
VkBuffer buffer,
const void* pNext);
// Call to Vulkan function vkBindImageMemory or vkBindImageMemory2KHR.
VkResult BindVulkanImage(
VkDeviceMemory memory,
VkDeviceSize memoryOffset,
VkImage image,
const void* pNext);
VkResult Map(VmaAllocation hAllocation, void** ppData);
void Unmap(VmaAllocation hAllocation);
VkResult BindBufferMemory(
VmaAllocation hAllocation,
VkDeviceSize allocationLocalOffset,
VkBuffer hBuffer,
const void* pNext);
VkResult BindImageMemory(
VmaAllocation hAllocation,
VkDeviceSize allocationLocalOffset,
VkImage hImage,
const void* pNext);
VkResult FlushOrInvalidateAllocation(
VmaAllocation hAllocation,
VkDeviceSize offset, VkDeviceSize size,
VMA_CACHE_OPERATION op);
VkResult FlushOrInvalidateAllocations(
uint32_t allocationCount,
const VmaAllocation* allocations,
const VkDeviceSize* offsets, const VkDeviceSize* sizes,
VMA_CACHE_OPERATION op);
void FillAllocation(const VmaAllocation hAllocation, uint8_t pattern);
/*
Returns a bit mask of memory types that can support defragmentation on the GPU,
because they allow creation of the buffer required for copy operations.
*/
uint32_t GetGpuDefragmentationMemoryTypeBits();
#if VMA_EXTERNAL_MEMORY
VkExternalMemoryHandleTypeFlagsKHR GetExternalMemoryHandleTypeFlags(uint32_t memTypeIndex) const
{
return m_TypeExternalMemoryHandleTypes[memTypeIndex];
}
#endif // #if VMA_EXTERNAL_MEMORY
private:
VkDeviceSize m_PreferredLargeHeapBlockSize;
VkPhysicalDevice m_PhysicalDevice;
VMA_ATOMIC_UINT32 m_CurrentFrameIndex;
VMA_ATOMIC_UINT32 m_GpuDefragmentationMemoryTypeBits; // UINT32_MAX means uninitialized.
#if VMA_EXTERNAL_MEMORY
VkExternalMemoryHandleTypeFlagsKHR m_TypeExternalMemoryHandleTypes[VK_MAX_MEMORY_TYPES];
#endif // #if VMA_EXTERNAL_MEMORY
VMA_RW_MUTEX m_PoolsMutex;
typedef VmaIntrusiveLinkedList<VmaPoolListItemTraits> PoolList;
// Protected by m_PoolsMutex.
PoolList m_Pools;
uint32_t m_NextPoolId;
VmaVulkanFunctions m_VulkanFunctions;
// Global bit mask AND-ed with any memoryTypeBits to disallow certain memory types.
uint32_t m_GlobalMemoryTypeBits;
void ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions);
#if VMA_STATIC_VULKAN_FUNCTIONS == 1
void ImportVulkanFunctions_Static();
#endif
void ImportVulkanFunctions_Custom(const VmaVulkanFunctions* pVulkanFunctions);
#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
void ImportVulkanFunctions_Dynamic();
#endif
void ValidateVulkanFunctions();
VkDeviceSize CalcPreferredBlockSize(uint32_t memTypeIndex);
VkResult AllocateMemoryOfType(
VmaPool pool,
VkDeviceSize size,
VkDeviceSize alignment,
bool dedicatedPreferred,
VkBuffer dedicatedBuffer,
VkImage dedicatedImage,
VkFlags dedicatedBufferImageUsage,
const VmaAllocationCreateInfo& createInfo,
uint32_t memTypeIndex,
VmaSuballocationType suballocType,
VmaDedicatedAllocationList& dedicatedAllocations,
VmaBlockVector& blockVector,
size_t allocationCount,
VmaAllocation* pAllocations);
// Helper function only to be used inside AllocateDedicatedMemory.
VkResult AllocateDedicatedMemoryPage(
VmaPool pool,
VkDeviceSize size,
VmaSuballocationType suballocType,
uint32_t memTypeIndex,
const VkMemoryAllocateInfo& allocInfo,
bool map,
bool isUserDataString,
bool isMappingAllowed,
void* pUserData,
VmaAllocation* pAllocation);
// Allocates and registers new VkDeviceMemory specifically for dedicated allocations.
VkResult AllocateDedicatedMemory(
VmaPool pool,
VkDeviceSize size,
VmaSuballocationType suballocType,
VmaDedicatedAllocationList& dedicatedAllocations,
uint32_t memTypeIndex,
bool map,
bool isUserDataString,
bool isMappingAllowed,
bool canAliasMemory,
void* pUserData,
float priority,
VkBuffer dedicatedBuffer,
VkImage dedicatedImage,
VkFlags dedicatedBufferImageUsage,
size_t allocationCount,
VmaAllocation* pAllocations,
const void* pNextChain = nullptr);
void FreeDedicatedMemory(const VmaAllocation allocation);
VkResult CalcMemTypeParams(
VmaAllocationCreateInfo& outCreateInfo,
uint32_t memTypeIndex,
VkDeviceSize size,
size_t allocationCount);
VkResult CalcAllocationParams(
VmaAllocationCreateInfo& outCreateInfo,
bool dedicatedRequired,
bool dedicatedPreferred);
/*
Calculates and returns a bit mask of memory types that can support defragmentation
on the GPU, because they allow creation of the buffer required for copy operations.
*/
uint32_t CalculateGpuDefragmentationMemoryTypeBits() const;
uint32_t CalculateGlobalMemoryTypeBits() const;
bool GetFlushOrInvalidateRange(
VmaAllocation allocation,
VkDeviceSize offset, VkDeviceSize size,
VkMappedMemoryRange& outRange) const;
#if VMA_MEMORY_BUDGET
void UpdateVulkanBudget();
#endif // #if VMA_MEMORY_BUDGET
};
#ifndef _VMA_MEMORY_FUNCTIONS
static void* VmaMalloc(VmaAllocator hAllocator, size_t size, size_t alignment)
{
return VmaMalloc(&hAllocator->m_AllocationCallbacks, size, alignment);
}
static void VmaFree(VmaAllocator hAllocator, void* ptr)
{
VmaFree(&hAllocator->m_AllocationCallbacks, ptr);
}
template<typename T>
static T* VmaAllocate(VmaAllocator hAllocator)
{
return (T*)VmaMalloc(hAllocator, sizeof(T), VMA_ALIGN_OF(T));
}
template<typename T>
static T* VmaAllocateArray(VmaAllocator hAllocator, size_t count)
{
return (T*)VmaMalloc(hAllocator, sizeof(T) * count, VMA_ALIGN_OF(T));
}
template<typename T>
static void vma_delete(VmaAllocator hAllocator, T* ptr)
{
if(ptr != VMA_NULL)
{
ptr->~T();
VmaFree(hAllocator, ptr);
}
}
template<typename T>
static void vma_delete_array(VmaAllocator hAllocator, T* ptr, size_t count)
{
if(ptr != VMA_NULL)
{
for(size_t i = count; i--; )
ptr[i].~T();
VmaFree(hAllocator, ptr);
}
}
#endif // _VMA_MEMORY_FUNCTIONS
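/*
Usage pattern for the helpers above (a sketch; vma_new and vma_delete pair up
through the allocator's VkAllocationCallbacks):

    VmaDeviceMemoryBlock* pBlock = vma_new(hAllocator, VmaDeviceMemoryBlock)(hAllocator);
    // ...
    vma_delete(hAllocator, pBlock); // Calls the destructor, then VmaFree().
*/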
#ifndef _VMA_DEVICE_MEMORY_BLOCK_FUNCTIONS
VmaDeviceMemoryBlock::VmaDeviceMemoryBlock(VmaAllocator hAllocator)
: m_pMetadata(VMA_NULL),
m_MemoryTypeIndex(UINT32_MAX),
m_Id(0),
m_hMemory(VK_NULL_HANDLE),
m_MapCount(0),
m_pMappedData(VMA_NULL) {}
VmaDeviceMemoryBlock::~VmaDeviceMemoryBlock()
{
VMA_ASSERT(m_MapCount == 0 && "VkDeviceMemory block is being destroyed while it is still mapped.");
VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
}
void VmaDeviceMemoryBlock::Init(
VmaAllocator hAllocator,
VmaPool hParentPool,
uint32_t newMemoryTypeIndex,
VkDeviceMemory newMemory,
VkDeviceSize newSize,
uint32_t id,
uint32_t algorithm,
VkDeviceSize bufferImageGranularity)
{
VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
m_hParentPool = hParentPool;
m_MemoryTypeIndex = newMemoryTypeIndex;
m_Id = id;
m_hMemory = newMemory;
switch (algorithm)
{
case 0:
m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_TLSF)(hAllocator->GetAllocationCallbacks(),
bufferImageGranularity, false); // isVirtual
break;
case VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT:
m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_Linear)(hAllocator->GetAllocationCallbacks(),
bufferImageGranularity, false); // isVirtual
break;
default:
VMA_ASSERT(0);
m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_TLSF)(hAllocator->GetAllocationCallbacks(),
bufferImageGranularity, false); // isVirtual
}
m_pMetadata->Init(newSize);
}
void VmaDeviceMemoryBlock::Destroy(VmaAllocator allocator)
{
// Define the macro VMA_DEBUG_LOG_FORMAT to receive the list of unfreed allocations.
if (!m_pMetadata->IsEmpty())
m_pMetadata->DebugLogAllAllocations();
// This is the most important assert in the entire library.
// Hitting it means you have a memory leak: some VmaAllocation objects were not freed.
VMA_ASSERT(m_pMetadata->IsEmpty() && "Some allocations were not freed before destruction of this memory block!");
VMA_ASSERT(m_hMemory != VK_NULL_HANDLE);
allocator->FreeVulkanMemory(m_MemoryTypeIndex, m_pMetadata->GetSize(), m_hMemory);
m_hMemory = VK_NULL_HANDLE;
vma_delete(allocator, m_pMetadata);
m_pMetadata = VMA_NULL;
}
void VmaDeviceMemoryBlock::PostAlloc(VmaAllocator hAllocator)
{
VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
m_MappingHysteresis.PostAlloc();
}
void VmaDeviceMemoryBlock::PostFree(VmaAllocator hAllocator)
{
VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
if(m_MappingHysteresis.PostFree())
{
VMA_ASSERT(m_MappingHysteresis.GetExtraMapping() == 0);
if (m_MapCount == 0)
{
m_pMappedData = VMA_NULL;
(*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
}
}
}
bool VmaDeviceMemoryBlock::Validate() const
{
VMA_VALIDATE((m_hMemory != VK_NULL_HANDLE) &&
(m_pMetadata->GetSize() != 0));
return m_pMetadata->Validate();
}
VkResult VmaDeviceMemoryBlock::CheckCorruption(VmaAllocator hAllocator)
{
void* pData = nullptr;
VkResult res = Map(hAllocator, 1, &pData);
if (res != VK_SUCCESS)
{
return res;
}
res = m_pMetadata->CheckCorruption(pData);
Unmap(hAllocator, 1);
return res;
}
VkResult VmaDeviceMemoryBlock::Map(VmaAllocator hAllocator, uint32_t count, void** ppData)
{
if (count == 0)
{
return VK_SUCCESS;
}
VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
const uint32_t oldTotalMapCount = m_MapCount + m_MappingHysteresis.GetExtraMapping();
m_MappingHysteresis.PostMap();
if (oldTotalMapCount != 0)
{
m_MapCount += count;
VMA_ASSERT(m_pMappedData != VMA_NULL);
if (ppData != VMA_NULL)
{
*ppData = m_pMappedData;
}
return VK_SUCCESS;
}
else
{
VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
hAllocator->m_hDevice,
m_hMemory,
0, // offset
VK_WHOLE_SIZE,
0, // flags
&m_pMappedData);
if (result == VK_SUCCESS)
{
if (ppData != VMA_NULL)
{
*ppData = m_pMappedData;
}
m_MapCount = count;
}
return result;
}
}
void VmaDeviceMemoryBlock::Unmap(VmaAllocator hAllocator, uint32_t count)
{
if (count == 0)
{
return;
}
VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
if (m_MapCount >= count)
{
m_MapCount -= count;
const uint32_t totalMapCount = m_MapCount + m_MappingHysteresis.GetExtraMapping();
if (totalMapCount == 0)
{
m_pMappedData = VMA_NULL;
(*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
}
m_MappingHysteresis.PostUnmap();
}
else
{
VMA_ASSERT(0 && "VkDeviceMemory block is being unmapped while it was not previously mapped.");
}
}
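/*
Map()/Unmap() are reference counted; a sketch of the intended pairing.
vkMapMemory is called only on the 0 -> 1 transition and vkUnmapMemory only on
the 1 -> 0 transition (subject to the extra mapping kept alive by
m_MappingHysteresis):

    void* pData = VMA_NULL;
    block->Map(hAllocator, 1, &pData); // May call vkMapMemory.
    block->Map(hAllocator, 1, &pData); // Reuses m_pMappedData.
    block->Unmap(hAllocator, 1);
    block->Unmap(hAllocator, 1);       // May call vkUnmapMemory.
*/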
VkResult VmaDeviceMemoryBlock::WriteMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
{
VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
void* pData;
VkResult res = Map(hAllocator, 1, &pData);
if (res != VK_SUCCESS)
{
return res;
}
VmaWriteMagicValue(pData, allocOffset + allocSize);
Unmap(hAllocator, 1);
return VK_SUCCESS;
}
VkResult VmaDeviceMemoryBlock::ValidateMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
{
VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
void* pData;
VkResult res = Map(hAllocator, 1, &pData);
if (res != VK_SUCCESS)
{
return res;
}
if (!VmaValidateMagicValue(pData, allocOffset + allocSize))
{
VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER FREED ALLOCATION!");
}
Unmap(hAllocator, 1);
return VK_SUCCESS;
}
VkResult VmaDeviceMemoryBlock::BindBufferMemory(
const VmaAllocator hAllocator,
const VmaAllocation hAllocation,
VkDeviceSize allocationLocalOffset,
VkBuffer hBuffer,
const void* pNext)
{
VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
hAllocation->GetBlock() == this);
VMA_ASSERT(allocationLocalOffset < hAllocation->GetSize() &&
"Invalid allocationLocalOffset. Did you forget that this offset is relative to the beginning of the allocation, not the whole memory block?");
const VkDeviceSize memoryOffset = hAllocation->GetOffset() + allocationLocalOffset;
// This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
return hAllocator->BindVulkanBuffer(m_hMemory, memoryOffset, hBuffer, pNext);
}
VkResult VmaDeviceMemoryBlock::BindImageMemory(
const VmaAllocator hAllocator,
const VmaAllocation hAllocation,
VkDeviceSize allocationLocalOffset,
VkImage hImage,
const void* pNext)
{
VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
hAllocation->GetBlock() == this);
VMA_ASSERT(allocationLocalOffset < hAllocation->GetSize() &&
"Invalid allocationLocalOffset. Did you forget that this offset is relative to the beginning of the allocation, not the whole memory block?");
const VkDeviceSize memoryOffset = hAllocation->GetOffset() + allocationLocalOffset;
// This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
return hAllocator->BindVulkanImage(m_hMemory, memoryOffset, hImage, pNext);
}
#endif // _VMA_DEVICE_MEMORY_BLOCK_FUNCTIONS
#ifndef _VMA_ALLOCATION_T_FUNCTIONS
VmaAllocation_T::VmaAllocation_T(bool mappingAllowed)
: m_Alignment{ 1 },
m_Size{ 0 },
m_pUserData{ VMA_NULL },
m_pName{ VMA_NULL },
m_MemoryTypeIndex{ 0 },
m_Type{ (uint8_t)ALLOCATION_TYPE_NONE },
m_SuballocationType{ (uint8_t)VMA_SUBALLOCATION_TYPE_UNKNOWN },
m_MapCount{ 0 },
m_Flags{ 0 }
{
if(mappingAllowed)
m_Flags |= (uint8_t)FLAG_MAPPING_ALLOWED;
#if VMA_STATS_STRING_ENABLED
m_BufferImageUsage = 0;
#endif
}
VmaAllocation_T::~VmaAllocation_T()
{
VMA_ASSERT(m_MapCount == 0 && "Allocation was not unmapped before destruction.");
// Check if owned string was freed.
VMA_ASSERT(m_pName == VMA_NULL);
}
void VmaAllocation_T::InitBlockAllocation(
VmaDeviceMemoryBlock* block,
VmaAllocHandle allocHandle,
VkDeviceSize alignment,
VkDeviceSize size,
uint32_t memoryTypeIndex,
VmaSuballocationType suballocationType,
bool mapped)
{
VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
VMA_ASSERT(block != VMA_NULL);
m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
m_Alignment = alignment;
m_Size = size;
m_MemoryTypeIndex = memoryTypeIndex;
if(mapped)
{
VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");
m_Flags |= (uint8_t)FLAG_PERSISTENT_MAP;
}
m_SuballocationType = (uint8_t)suballocationType;
m_BlockAllocation.m_Block = block;
m_BlockAllocation.m_AllocHandle = allocHandle;
}
void VmaAllocation_T::InitDedicatedAllocation(
VmaPool hParentPool,
uint32_t memoryTypeIndex,
VkDeviceMemory hMemory,
VmaSuballocationType suballocationType,
void* pMappedData,
VkDeviceSize size)
{
VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
VMA_ASSERT(hMemory != VK_NULL_HANDLE);
m_Type = (uint8_t)ALLOCATION_TYPE_DEDICATED;
m_Alignment = 0;
m_Size = size;
m_MemoryTypeIndex = memoryTypeIndex;
m_SuballocationType = (uint8_t)suballocationType;
if(pMappedData != VMA_NULL)
{
VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");
m_Flags |= (uint8_t)FLAG_PERSISTENT_MAP;
}
m_DedicatedAllocation.m_hParentPool = hParentPool;
m_DedicatedAllocation.m_hMemory = hMemory;
m_DedicatedAllocation.m_pMappedData = pMappedData;
m_DedicatedAllocation.m_Prev = VMA_NULL;
m_DedicatedAllocation.m_Next = VMA_NULL;
}
void VmaAllocation_T::SetName(VmaAllocator hAllocator, const char* pName)
{
VMA_ASSERT(pName == VMA_NULL || pName != m_pName);
FreeName(hAllocator);
if (pName != VMA_NULL)
m_pName = VmaCreateStringCopy(hAllocator->GetAllocationCallbacks(), pName);
}
uint8_t VmaAllocation_T::SwapBlockAllocation(VmaAllocator hAllocator, VmaAllocation allocation)
{
VMA_ASSERT(allocation != VMA_NULL);
VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
VMA_ASSERT(allocation->m_Type == ALLOCATION_TYPE_BLOCK);
if (m_MapCount != 0)
m_BlockAllocation.m_Block->Unmap(hAllocator, m_MapCount);
m_BlockAllocation.m_Block->m_pMetadata->SetAllocationUserData(m_BlockAllocation.m_AllocHandle, allocation);
VMA_SWAP(m_BlockAllocation, allocation->m_BlockAllocation);
m_BlockAllocation.m_Block->m_pMetadata->SetAllocationUserData(m_BlockAllocation.m_AllocHandle, this);
#if VMA_STATS_STRING_ENABLED
VMA_SWAP(m_BufferImageUsage, allocation->m_BufferImageUsage);
#endif
return m_MapCount;
}
VmaAllocHandle VmaAllocation_T::GetAllocHandle() const
{
switch (m_Type)
{
case ALLOCATION_TYPE_BLOCK:
return m_BlockAllocation.m_AllocHandle;
case ALLOCATION_TYPE_DEDICATED:
return VK_NULL_HANDLE;
default:
VMA_ASSERT(0);
return VK_NULL_HANDLE;
}
}
VkDeviceSize VmaAllocation_T::GetOffset() const
{
switch (m_Type)
{
case ALLOCATION_TYPE_BLOCK:
return m_BlockAllocation.m_Block->m_pMetadata->GetAllocationOffset(m_BlockAllocation.m_AllocHandle);
case ALLOCATION_TYPE_DEDICATED:
return 0;
default:
VMA_ASSERT(0);
return 0;
}
}
VmaPool VmaAllocation_T::GetParentPool() const
{
switch (m_Type)
{
case ALLOCATION_TYPE_BLOCK:
return m_BlockAllocation.m_Block->GetParentPool();
case ALLOCATION_TYPE_DEDICATED:
return m_DedicatedAllocation.m_hParentPool;
default:
VMA_ASSERT(0);
return VK_NULL_HANDLE;
}
}
VkDeviceMemory VmaAllocation_T::GetMemory() const
{
switch (m_Type)
{
case ALLOCATION_TYPE_BLOCK:
return m_BlockAllocation.m_Block->GetDeviceMemory();
case ALLOCATION_TYPE_DEDICATED:
return m_DedicatedAllocation.m_hMemory;
default:
VMA_ASSERT(0);
return VK_NULL_HANDLE;
}
}
void* VmaAllocation_T::GetMappedData() const
{
switch (m_Type)
{
case ALLOCATION_TYPE_BLOCK:
if (m_MapCount != 0 || IsPersistentMap())
{
void* pBlockData = m_BlockAllocation.m_Block->GetMappedData();
VMA_ASSERT(pBlockData != VMA_NULL);
return (char*)pBlockData + GetOffset();
}
else
{
return VMA_NULL;
}
break;
case ALLOCATION_TYPE_DEDICATED:
VMA_ASSERT((m_DedicatedAllocation.m_pMappedData != VMA_NULL) == (m_MapCount != 0 || IsPersistentMap()));
return m_DedicatedAllocation.m_pMappedData;
default:
VMA_ASSERT(0);
return VMA_NULL;
}
}
void VmaAllocation_T::BlockAllocMap()
{
VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");
if (m_MapCount < 0xFF)
{
++m_MapCount;
}
else
{
VMA_ASSERT(0 && "Allocation mapped too many times simultaneously.");
}
}
void VmaAllocation_T::BlockAllocUnmap()
{
VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
if (m_MapCount > 0)
{
--m_MapCount;
}
else
{
VMA_ASSERT(0 && "Unmapping allocation not previously mapped.");
}
}
VkResult VmaAllocation_T::DedicatedAllocMap(VmaAllocator hAllocator, void** ppData)
{
VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");
if (m_MapCount != 0 || IsPersistentMap())
{
if (m_MapCount < 0xFF)
{
VMA_ASSERT(m_DedicatedAllocation.m_pMappedData != VMA_NULL);
*ppData = m_DedicatedAllocation.m_pMappedData;
++m_MapCount;
return VK_SUCCESS;
}
else
{
VMA_ASSERT(0 && "Dedicated allocation mapped too many times simultaneously.");
return VK_ERROR_MEMORY_MAP_FAILED;
}
}
else
{
VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
hAllocator->m_hDevice,
m_DedicatedAllocation.m_hMemory,
0, // offset
VK_WHOLE_SIZE,
0, // flags
ppData);
if (result == VK_SUCCESS)
{
m_DedicatedAllocation.m_pMappedData = *ppData;
m_MapCount = 1;
}
return result;
}
}
void VmaAllocation_T::DedicatedAllocUnmap(VmaAllocator hAllocator)
{
VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
if (m_MapCount > 0)
{
--m_MapCount;
if (m_MapCount == 0 && !IsPersistentMap())
{
m_DedicatedAllocation.m_pMappedData = VMA_NULL;
(*hAllocator->GetVulkanFunctions().vkUnmapMemory)(
hAllocator->m_hDevice,
m_DedicatedAllocation.m_hMemory);
}
}
else
{
VMA_ASSERT(0 && "Unmapping dedicated allocation not previously mapped.");
}
}
#if VMA_STATS_STRING_ENABLED
void VmaAllocation_T::InitBufferImageUsage(uint32_t bufferImageUsage)
{
VMA_ASSERT(m_BufferImageUsage == 0);
m_BufferImageUsage = bufferImageUsage;
}
void VmaAllocation_T::PrintParameters(class VmaJsonWriter& json) const
{
json.WriteString("Type");
json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[m_SuballocationType]);
json.WriteString("Size");
json.WriteNumber(m_Size);
json.WriteString("Usage");
json.WriteNumber(m_BufferImageUsage);
if (m_pUserData != VMA_NULL)
{
json.WriteString("CustomData");
json.BeginString();
json.ContinueString_Pointer(m_pUserData);
json.EndString();
}
if (m_pName != VMA_NULL)
{
json.WriteString("Name");
json.WriteString(m_pName);
}
}
#endif // VMA_STATS_STRING_ENABLED
void VmaAllocation_T::FreeName(VmaAllocator hAllocator)
{
if(m_pName)
{
VmaFreeString(hAllocator->GetAllocationCallbacks(), m_pName);
m_pName = VMA_NULL;
}
}
#endif // _VMA_ALLOCATION_T_FUNCTIONS
#ifndef _VMA_BLOCK_VECTOR_FUNCTIONS
VmaBlockVector::VmaBlockVector(
VmaAllocator hAllocator,
VmaPool hParentPool,
uint32_t memoryTypeIndex,
VkDeviceSize preferredBlockSize,
size_t minBlockCount,
size_t maxBlockCount,
VkDeviceSize bufferImageGranularity,
bool explicitBlockSize,
uint32_t algorithm,
float priority,
VkDeviceSize minAllocationAlignment,
void* pMemoryAllocateNext)
: m_hAllocator(hAllocator),
m_hParentPool(hParentPool),
m_MemoryTypeIndex(memoryTypeIndex),
m_PreferredBlockSize(preferredBlockSize),
m_MinBlockCount(minBlockCount),
m_MaxBlockCount(maxBlockCount),
m_BufferImageGranularity(bufferImageGranularity),
m_ExplicitBlockSize(explicitBlockSize),
m_Algorithm(algorithm),
m_Priority(priority),
m_MinAllocationAlignment(minAllocationAlignment),
m_pMemoryAllocateNext(pMemoryAllocateNext),
m_Blocks(VmaStlAllocator<VmaDeviceMemoryBlock*>(hAllocator->GetAllocationCallbacks())),
m_NextBlockId(0) {}
VmaBlockVector::~VmaBlockVector()
{
for (size_t i = m_Blocks.size(); i--; )
{
m_Blocks[i]->Destroy(m_hAllocator);
vma_delete(m_hAllocator, m_Blocks[i]);
}
}
VkResult VmaBlockVector::CreateMinBlocks()
{
for (size_t i = 0; i < m_MinBlockCount; ++i)
{
VkResult res = CreateBlock(m_PreferredBlockSize, VMA_NULL);
if (res != VK_SUCCESS)
{
return res;
}
}
return VK_SUCCESS;
}
void VmaBlockVector::AddStatistics(VmaStatistics& inoutStats)
{
VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
const size_t blockCount = m_Blocks.size();
for (uint32_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
{
const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
VMA_ASSERT(pBlock);
VMA_HEAVY_ASSERT(pBlock->Validate());
pBlock->m_pMetadata->AddStatistics(inoutStats);
}
}
void VmaBlockVector::AddDetailedStatistics(VmaDetailedStatistics& inoutStats)
{
VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
const size_t blockCount = m_Blocks.size();
for (uint32_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
{
const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
VMA_ASSERT(pBlock);
VMA_HEAVY_ASSERT(pBlock->Validate());
pBlock->m_pMetadata->AddDetailedStatistics(inoutStats);
}
}
bool VmaBlockVector::IsEmpty()
{
VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
return m_Blocks.empty();
}
bool VmaBlockVector::IsCorruptionDetectionEnabled() const
{
const uint32_t requiredMemFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
return (VMA_DEBUG_DETECT_CORRUPTION != 0) &&
(VMA_DEBUG_MARGIN > 0) &&
(m_Algorithm == 0 || m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT) &&
(m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & requiredMemFlags) == requiredMemFlags;
}
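/*
A sketch of how corruption detection is enabled (these configuration macros
must be defined before including the implementation):

    #define VMA_DEBUG_DETECT_CORRUPTION 1
    #define VMA_DEBUG_MARGIN 16 // Must be > 0 and a multiple of 4.
    #define VMA_IMPLEMENTATION
    #include "vk_mem_alloc.h"

    // Later, at runtime:
    VkResult res = vmaCheckCorruption(allocator, UINT32_MAX); // Check all memory types.
*/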
VkResult VmaBlockVector::Allocate(
VkDeviceSize size,
VkDeviceSize alignment,
const VmaAllocationCreateInfo& createInfo,
VmaSuballocationType suballocType,
size_t allocationCount,
VmaAllocation* pAllocations)
{
size_t allocIndex;
VkResult res = VK_SUCCESS;
alignment = VMA_MAX(alignment, m_MinAllocationAlignment);
if (IsCorruptionDetectionEnabled())
{
size = VmaAlignUp<VkDeviceSize>(size, sizeof(VMA_CORRUPTION_DETECTION_MAGIC_VALUE));
alignment = VmaAlignUp<VkDeviceSize>(alignment, sizeof(VMA_CORRUPTION_DETECTION_MAGIC_VALUE));
}
{
VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);
for (allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
{
res = AllocatePage(
size,
alignment,
createInfo,
suballocType,
pAllocations + allocIndex);
if (res != VK_SUCCESS)
{
break;
}
}
}
if (res != VK_SUCCESS)
{
// Free all already created allocations.
while (allocIndex--)
Free(pAllocations[allocIndex]);
memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
}
return res;
}
VkResult VmaBlockVector::AllocatePage(
VkDeviceSize size,
VkDeviceSize alignment,
const VmaAllocationCreateInfo& createInfo,
VmaSuballocationType suballocType,
VmaAllocation* pAllocation)
{
const bool isUpperAddress = (createInfo.flags & VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0;
VkDeviceSize freeMemory;
{
const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
VmaBudget heapBudget = {};
m_hAllocator->GetHeapBudgets(&heapBudget, heapIndex, 1);
freeMemory = (heapBudget.usage < heapBudget.budget) ? (heapBudget.budget - heapBudget.usage) : 0;
}
const bool canFallbackToDedicated = !HasExplicitBlockSize() &&
(createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0;
const bool canCreateNewBlock =
((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0) &&
(m_Blocks.size() < m_MaxBlockCount) &&
(freeMemory >= size || !canFallbackToDedicated);
uint32_t strategy = createInfo.flags & VMA_ALLOCATION_CREATE_STRATEGY_MASK;
// An upper address can only be used with the linear allocator, and only within a single memory block.
if (isUpperAddress &&
(m_Algorithm != VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT || m_MaxBlockCount > 1))
{
return VK_ERROR_FEATURE_NOT_PRESENT;
}
// Early reject: the requested allocation size is larger than the maximum block size for this block vector.
if (size + VMA_DEBUG_MARGIN > m_PreferredBlockSize)
{
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
// 1. Search existing allocations. Try to allocate.
if (m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
{
// Use only the last block.
if (!m_Blocks.empty())
{
VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks.back();
VMA_ASSERT(pCurrBlock);
VkResult res = AllocateFromBlock(
pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
if (res == VK_SUCCESS)
{
VMA_DEBUG_LOG_FORMAT(" Returned from last block #%u", pCurrBlock->GetId());
IncrementallySortBlocks();
return VK_SUCCESS;
}
}
}
else
{
if (strategy != VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT) // MIN_MEMORY or default
{
const bool isHostVisible =
(m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0;
if(isHostVisible)
{
const bool isMappingAllowed = (createInfo.flags &
(VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0;
/*
For non-mappable allocations, check blocks that are not mapped first.
For mappable allocations, check blocks that are already mapped first.
This way, when there are many blocks, mappable and non-mappable allocations end up
separated, hopefully limiting the number of blocks that are mapped, which helps
tools like RenderDoc.
*/
for(size_t mappingI = 0; mappingI < 2; ++mappingI)
{
// Forward order in m_Blocks - prefer blocks with the smallest amount of free space.
for (size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
{
VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
VMA_ASSERT(pCurrBlock);
const bool isBlockMapped = pCurrBlock->GetMappedData() != VMA_NULL;
if((mappingI == 0) == (isMappingAllowed == isBlockMapped))
{
VkResult res = AllocateFromBlock(
pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
if (res == VK_SUCCESS)
{
VMA_DEBUG_LOG_FORMAT(" Returned from existing block #%u", pCurrBlock->GetId());
IncrementallySortBlocks();
return VK_SUCCESS;
}
}
}
}
}
else
{
// Forward order in m_Blocks - prefer blocks with the smallest amount of free space.
for (size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
{
VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
VMA_ASSERT(pCurrBlock);
VkResult res = AllocateFromBlock(
pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
if (res == VK_SUCCESS)
{
VMA_DEBUG_LOG_FORMAT(" Returned from existing block #%u", pCurrBlock->GetId());
IncrementallySortBlocks();
return VK_SUCCESS;
}
}
}
}
else // VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT
{
// Backward order in m_Blocks - prefer blocks with the largest amount of free space.
for (size_t blockIndex = m_Blocks.size(); blockIndex--; )
{
VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
VMA_ASSERT(pCurrBlock);
VkResult res = AllocateFromBlock(pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
if (res == VK_SUCCESS)
{
VMA_DEBUG_LOG_FORMAT(" Returned from existing block #%u", pCurrBlock->GetId());
IncrementallySortBlocks();
return VK_SUCCESS;
}
}
}
}
// 2. Try to create new block.
if (canCreateNewBlock)
{
// Calculate optimal size for new block.
VkDeviceSize newBlockSize = m_PreferredBlockSize;
uint32_t newBlockSizeShift = 0;
const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;
if (!m_ExplicitBlockSize)
{
// Allocate 1/8, 1/4, 1/2 as first blocks.
const VkDeviceSize maxExistingBlockSize = CalcMaxBlockSize();
for (uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
{
const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
if (smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= size * 2)
{
newBlockSize = smallerNewBlockSize;
++newBlockSizeShift;
}
else
{
break;
}
}
}
size_t newBlockIndex = 0;
VkResult res = (newBlockSize <= freeMemory || !canFallbackToDedicated) ?
CreateBlock(newBlockSize, &newBlockIndex) : VK_ERROR_OUT_OF_DEVICE_MEMORY;
// Allocation of this size failed? Try 1/2, 1/4, 1/8 of m_PreferredBlockSize.
if (!m_ExplicitBlockSize)
{
while (res < 0 && newBlockSizeShift < NEW_BLOCK_SIZE_SHIFT_MAX)
{
const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
if (smallerNewBlockSize >= size)
{
newBlockSize = smallerNewBlockSize;
++newBlockSizeShift;
res = (newBlockSize <= freeMemory || !canFallbackToDedicated) ?
CreateBlock(newBlockSize, &newBlockIndex) : VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
else
{
break;
}
}
}
if (res == VK_SUCCESS)
{
VmaDeviceMemoryBlock* const pBlock = m_Blocks[newBlockIndex];
VMA_ASSERT(pBlock->m_pMetadata->GetSize() >= size);
res = AllocateFromBlock(
pBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
if (res == VK_SUCCESS)
{
VMA_DEBUG_LOG_FORMAT(" Created new block #%u Size=%llu", pBlock->GetId(), newBlockSize);
IncrementallySortBlocks();
return VK_SUCCESS;
}
else
{
// Allocation from new block failed, possibly due to VMA_DEBUG_MARGIN or alignment.
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
}
}
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
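/*
Worked example of the sizing heuristic above (assuming no explicit block size):
with m_PreferredBlockSize = 256 MiB, an empty block vector, and a 4 MiB
request, the pre-shrink loop halves the candidate three times (128, 64,
32 MiB), so the first block created is 32 MiB. As blocks accumulate,
CalcMaxBlockSize() grows, the halving stops earlier, and later blocks approach
the full preferred size. If vkAllocateMemory itself fails, the retry loop
halves the size further, never below the requested size and never more than
NEW_BLOCK_SIZE_SHIFT_MAX halvings in total.
*/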
void VmaBlockVector::Free(const VmaAllocation hAllocation)
{
VmaDeviceMemoryBlock* pBlockToDelete = VMA_NULL;
bool budgetExceeded = false;
{
const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
VmaBudget heapBudget = {};
m_hAllocator->GetHeapBudgets(&heapBudget, heapIndex, 1);
budgetExceeded = heapBudget.usage >= heapBudget.budget;
}
// Scope for lock.
{
VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);
VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
if (IsCorruptionDetectionEnabled())
{
VkResult res = pBlock->ValidateMagicValueAfterAllocation(m_hAllocator, hAllocation->GetOffset(), hAllocation->GetSize());
VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to validate magic value.");
}
if (hAllocation->IsPersistentMap())
{
pBlock->Unmap(m_hAllocator, 1);
}
const bool hadEmptyBlockBeforeFree = HasEmptyBlock();
pBlock->m_pMetadata->Free(hAllocation->GetAllocHandle());
pBlock->PostFree(m_hAllocator);
VMA_HEAVY_ASSERT(pBlock->Validate());
VMA_DEBUG_LOG_FORMAT(" Freed from MemoryTypeIndex=%u", m_MemoryTypeIndex);
const bool canDeleteBlock = m_Blocks.size() > m_MinBlockCount;
// pBlock became empty after this deallocation.
if (pBlock->m_pMetadata->IsEmpty())
{
// We already had an empty block. We don't want two, so delete this one.
if ((hadEmptyBlockBeforeFree || budgetExceeded) && canDeleteBlock)
{
pBlockToDelete = pBlock;
Remove(pBlock);
}
// else: We now have one empty block - leave it. This hysteresis avoids allocating a whole block back and forth.
}
// pBlock didn't become empty, but we have another empty block - find and free that one.
// (This is optional, a heuristic.)
else if (hadEmptyBlockBeforeFree && canDeleteBlock)
{
VmaDeviceMemoryBlock* pLastBlock = m_Blocks.back();
if (pLastBlock->m_pMetadata->IsEmpty())
{
pBlockToDelete = pLastBlock;
m_Blocks.pop_back();
}
}
IncrementallySortBlocks();
}
// Destruction of a free block. Deferred until this point, outside of the mutex
// lock, for performance reasons.
if (pBlockToDelete != VMA_NULL)
{
VMA_DEBUG_LOG_FORMAT(" Deleted empty block #%u", pBlockToDelete->GetId());
pBlockToDelete->Destroy(m_hAllocator);
vma_delete(m_hAllocator, pBlockToDelete);
}
m_hAllocator->m_Budget.RemoveAllocation(m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex), hAllocation->GetSize());
m_hAllocator->m_AllocationObjectAllocator.Free(hAllocation);
}
VkDeviceSize VmaBlockVector::CalcMaxBlockSize() const
{
VkDeviceSize result = 0;
for (size_t i = m_Blocks.size(); i--; )
{
result = VMA_MAX(result, m_Blocks[i]->m_pMetadata->GetSize());
if (result >= m_PreferredBlockSize)
{
break;
}
}
return result;
}
void VmaBlockVector::Remove(VmaDeviceMemoryBlock* pBlock)
{
for (uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
{
if (m_Blocks[blockIndex] == pBlock)
{
VmaVectorRemove(m_Blocks, blockIndex);
return;
}
}
VMA_ASSERT(0);
}
void VmaBlockVector::IncrementallySortBlocks()
{
if (!m_IncrementalSort)
return;
if (m_Algorithm != VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
{
// Bubble sort only until first swap.
for (size_t i = 1; i < m_Blocks.size(); ++i)
{
if (m_Blocks[i - 1]->m_pMetadata->GetSumFreeSize() > m_Blocks[i]->m_pMetadata->GetSumFreeSize())
{
VMA_SWAP(m_Blocks[i - 1], m_Blocks[i]);
return;
}
}
}
}
void VmaBlockVector::SortByFreeSize()
{
VMA_SORT(m_Blocks.begin(), m_Blocks.end(),
[](VmaDeviceMemoryBlock* b1, VmaDeviceMemoryBlock* b2) -> bool
{
return b1->m_pMetadata->GetSumFreeSize() < b2->m_pMetadata->GetSumFreeSize();
});
}
VkResult VmaBlockVector::AllocateFromBlock(
VmaDeviceMemoryBlock* pBlock,
VkDeviceSize size,
VkDeviceSize alignment,
VmaAllocationCreateFlags allocFlags,
void* pUserData,
VmaSuballocationType suballocType,
uint32_t strategy,
VmaAllocation* pAllocation)
{
const bool isUpperAddress = (allocFlags & VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0;
VmaAllocationRequest currRequest = {};
if (pBlock->m_pMetadata->CreateAllocationRequest(
size,
alignment,
isUpperAddress,
suballocType,
strategy,
&currRequest))
{
return CommitAllocationRequest(currRequest, pBlock, alignment, allocFlags, pUserData, suballocType, pAllocation);
}
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
VkResult VmaBlockVector::CommitAllocationRequest(
VmaAllocationRequest& allocRequest,
VmaDeviceMemoryBlock* pBlock,
VkDeviceSize alignment,
VmaAllocationCreateFlags allocFlags,
void* pUserData,
VmaSuballocationType suballocType,
VmaAllocation* pAllocation)
{
const bool mapped = (allocFlags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
const bool isUserDataString = (allocFlags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;
const bool isMappingAllowed = (allocFlags &
(VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0;
pBlock->PostAlloc(m_hAllocator);
// Allocate from pBlock.
if (mapped)
{
VkResult res = pBlock->Map(m_hAllocator, 1, VMA_NULL);
if (res != VK_SUCCESS)
{
return res;
}
}
*pAllocation = m_hAllocator->m_AllocationObjectAllocator.Allocate(isMappingAllowed);
pBlock->m_pMetadata->Alloc(allocRequest, suballocType, *pAllocation);
(*pAllocation)->InitBlockAllocation(
pBlock,
allocRequest.allocHandle,
alignment,
allocRequest.size, // Not size, as actual allocation size may be larger than requested!
m_MemoryTypeIndex,
suballocType,
mapped);
VMA_HEAVY_ASSERT(pBlock->Validate());
if (isUserDataString)
(*pAllocation)->SetName(m_hAllocator, (const char*)pUserData);
else
(*pAllocation)->SetUserData(m_hAllocator, pUserData);
m_hAllocator->m_Budget.AddAllocation(m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex), allocRequest.size);
if (VMA_DEBUG_INITIALIZE_ALLOCATIONS)
{
m_hAllocator->FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
}
if (IsCorruptionDetectionEnabled())
{
VkResult res = pBlock->WriteMagicValueAfterAllocation(m_hAllocator, (*pAllocation)->GetOffset(), allocRequest.size);
VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
}
return VK_SUCCESS;
}
VkResult VmaBlockVector::CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex)
{
VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
allocInfo.pNext = m_pMemoryAllocateNext;
allocInfo.memoryTypeIndex = m_MemoryTypeIndex;
allocInfo.allocationSize = blockSize;
#if VMA_BUFFER_DEVICE_ADDRESS
// Every standalone block can potentially contain a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT - always enable the feature.
VkMemoryAllocateFlagsInfoKHR allocFlagsInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO_KHR };
if (m_hAllocator->m_UseKhrBufferDeviceAddress)
{
allocFlagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR;
VmaPnextChainPushFront(&allocInfo, &allocFlagsInfo);
}
#endif // VMA_BUFFER_DEVICE_ADDRESS
#if VMA_MEMORY_PRIORITY
VkMemoryPriorityAllocateInfoEXT priorityInfo = { VK_STRUCTURE_TYPE_MEMORY_PRIORITY_ALLOCATE_INFO_EXT };
if (m_hAllocator->m_UseExtMemoryPriority)
{
VMA_ASSERT(m_Priority >= 0.f && m_Priority <= 1.f);
priorityInfo.priority = m_Priority;
VmaPnextChainPushFront(&allocInfo, &priorityInfo);
}
#endif // VMA_MEMORY_PRIORITY
#if VMA_EXTERNAL_MEMORY
// Attach VkExportMemoryAllocateInfoKHR if necessary.
VkExportMemoryAllocateInfoKHR exportMemoryAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
exportMemoryAllocInfo.handleTypes = m_hAllocator->GetExternalMemoryHandleTypeFlags(m_MemoryTypeIndex);
if (exportMemoryAllocInfo.handleTypes != 0)
{
VmaPnextChainPushFront(&allocInfo, &exportMemoryAllocInfo);
}
#endif // VMA_EXTERNAL_MEMORY
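// Note: with all three features enabled, the front-pushed chain above resolves to:
// allocInfo.pNext -> exportMemoryAllocInfo -> priorityInfo -> allocFlagsInfo -> m_pMemoryAllocateNext.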
VkDeviceMemory mem = VK_NULL_HANDLE;
VkResult res = m_hAllocator->AllocateVulkanMemory(&allocInfo, &mem);
if (res < 0)
{
return res;
}
// New VkDeviceMemory successfully created.
// Create a new VmaDeviceMemoryBlock for it.
VmaDeviceMemoryBlock* const pBlock = vma_new(m_hAllocator, VmaDeviceMemoryBlock)(m_hAllocator);
pBlock->Init(
m_hAllocator,
m_hParentPool,
m_MemoryTypeIndex,
mem,
allocInfo.allocationSize,
m_NextBlockId++,
m_Algorithm,
m_BufferImageGranularity);
m_Blocks.push_back(pBlock);
if (pNewBlockIndex != VMA_NULL)
{
*pNewBlockIndex = m_Blocks.size() - 1;
}
return VK_SUCCESS;
}
bool VmaBlockVector::HasEmptyBlock()
{
for (size_t index = 0, count = m_Blocks.size(); index < count; ++index)
{
VmaDeviceMemoryBlock* const pBlock = m_Blocks[index];
if (pBlock->m_pMetadata->IsEmpty())
{
return true;
}
}
return false;
}
#if VMA_STATS_STRING_ENABLED
void VmaBlockVector::PrintDetailedMap(class VmaJsonWriter& json)
{
VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
json.BeginObject();
for (size_t i = 0; i < m_Blocks.size(); ++i)
{
json.BeginString();
json.ContinueString(m_Blocks[i]->GetId());
json.EndString();
json.BeginObject();
json.WriteString("MapRefCount");
json.WriteNumber(m_Blocks[i]->GetMapRefCount());
m_Blocks[i]->m_pMetadata->PrintDetailedMap(json);
json.EndObject();
}
json.EndObject();
}
#endif // VMA_STATS_STRING_ENABLED
VkResult VmaBlockVector::CheckCorruption()
{
if (!IsCorruptionDetectionEnabled())
{
return VK_ERROR_FEATURE_NOT_PRESENT;
}
VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
for (uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
{
VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
VMA_ASSERT(pBlock);
VkResult res = pBlock->CheckCorruption(m_hAllocator);
if (res != VK_SUCCESS)
{
return res;
}
}
return VK_SUCCESS;
}
#endif // _VMA_BLOCK_VECTOR_FUNCTIONS
#ifndef _VMA_DEFRAGMENTATION_CONTEXT_FUNCTIONS
VmaDefragmentationContext_T::VmaDefragmentationContext_T(
VmaAllocator hAllocator,
const VmaDefragmentationInfo& info)
: m_MaxPassBytes(info.maxBytesPerPass == 0 ? VK_WHOLE_SIZE : info.maxBytesPerPass),
m_MaxPassAllocations(info.maxAllocationsPerPass == 0 ? UINT32_MAX : info.maxAllocationsPerPass),
m_BreakCallback(info.pfnBreakCallback),
m_BreakCallbackUserData(info.pBreakCallbackUserData),
m_MoveAllocator(hAllocator->GetAllocationCallbacks()),
m_Moves(m_MoveAllocator)
{
m_Algorithm = info.flags & VMA_DEFRAGMENTATION_FLAG_ALGORITHM_MASK;
if (info.pool != VMA_NULL)
{
m_BlockVectorCount = 1;
m_PoolBlockVector = &info.pool->m_BlockVector;
m_pBlockVectors = &m_PoolBlockVector;
m_PoolBlockVector->SetIncrementalSort(false);
m_PoolBlockVector->SortByFreeSize();
}
else
{
m_BlockVectorCount = hAllocator->GetMemoryTypeCount();
m_PoolBlockVector = VMA_NULL;
m_pBlockVectors = hAllocator->m_pBlockVectors;
for (uint32_t i = 0; i < m_BlockVectorCount; ++i)
{
VmaBlockVector* vector = m_pBlockVectors[i];
if (vector != VMA_NULL)
{
vector->SetIncrementalSort(false);
vector->SortByFreeSize();
}
}
}
switch (m_Algorithm)
{
case 0: // Default algorithm
m_Algorithm = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT;
m_AlgorithmState = vma_new_array(hAllocator, StateBalanced, m_BlockVectorCount);
break;
case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT:
m_AlgorithmState = vma_new_array(hAllocator, StateBalanced, m_BlockVectorCount);
break;
case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
if (hAllocator->GetBufferImageGranularity() > 1)
{
m_AlgorithmState = vma_new_array(hAllocator, StateExtensive, m_BlockVectorCount);
}
break;
}
}
VmaDefragmentationContext_T::~VmaDefragmentationContext_T()
{
if (m_PoolBlockVector != VMA_NULL)
{
m_PoolBlockVector->SetIncrementalSort(true);
}
else
{
for (uint32_t i = 0; i < m_BlockVectorCount; ++i)
{
VmaBlockVector* vector = m_pBlockVectors[i];
if (vector != VMA_NULL)
vector->SetIncrementalSort(true);
}
}
if (m_AlgorithmState)
{
switch (m_Algorithm)
{
case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT:
vma_delete_array(m_MoveAllocator.m_pCallbacks, reinterpret_cast<StateBalanced*>(m_AlgorithmState), m_BlockVectorCount);
break;
case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
vma_delete_array(m_MoveAllocator.m_pCallbacks, reinterpret_cast<StateExtensive*>(m_AlgorithmState), m_BlockVectorCount);
break;
default:
VMA_ASSERT(0);
}
}
}
VkResult VmaDefragmentationContext_T::DefragmentPassBegin(VmaDefragmentationPassMoveInfo& moveInfo)
{
if (m_PoolBlockVector != VMA_NULL)
{
VmaMutexLockWrite lock(m_PoolBlockVector->GetMutex(), m_PoolBlockVector->GetAllocator()->m_UseMutex);
if (m_PoolBlockVector->GetBlockCount() > 1)
ComputeDefragmentation(*m_PoolBlockVector, 0);
else if (m_PoolBlockVector->GetBlockCount() == 1)
ReallocWithinBlock(*m_PoolBlockVector, m_PoolBlockVector->GetBlock(0));
}
else
{
for (uint32_t i = 0; i < m_BlockVectorCount; ++i)
{
if (m_pBlockVectors[i] != VMA_NULL)
{
VmaMutexLockWrite lock(m_pBlockVectors[i]->GetMutex(), m_pBlockVectors[i]->GetAllocator()->m_UseMutex);
if (m_pBlockVectors[i]->GetBlockCount() > 1)
{
if (ComputeDefragmentation(*m_pBlockVectors[i], i))
break;
}
else if (m_pBlockVectors[i]->GetBlockCount() == 1)
{
if (ReallocWithinBlock(*m_pBlockVectors[i], m_pBlockVectors[i]->GetBlock(0)))
break;
}
}
}
}
moveInfo.moveCount = static_cast<uint32_t>(m_Moves.size());
if (moveInfo.moveCount > 0)
{
moveInfo.pMoves = m_Moves.data();
return VK_INCOMPLETE;
}
moveInfo.pMoves = VMA_NULL;
return VK_SUCCESS;
}
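// DefragmentPassBegin() above and DefragmentPassEnd() below are driven by the
// public API (vmaBeginDefragmentation, vmaBeginDefragmentationPass,
// vmaEndDefragmentationPass, vmaEndDefragmentation). A minimal caller-side loop
// could look like this (sketch; error handling omitted, 'allocator' and the data
// copying are assumed to exist on the caller's side):
//
//   VmaDefragmentationInfo defragInfo = {};
//   defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT;
//   VmaDefragmentationContext defragCtx;
//   vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
//   for(;;)
//   {
//       VmaDefragmentationPassMoveInfo pass;
//       if(vmaBeginDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
//           break; // Nothing left to move.
//       // Copy the data of each pass.pMoves[i] from source to destination here,
//       // or set pass.pMoves[i].operation to IGNORE/DESTROY as appropriate.
//       if(vmaEndDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
//           break; // Defragmentation finished.
//   }
//   vmaEndDefragmentation(allocator, defragCtx, nullptr); // Stats pointer is optional.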
VkResult VmaDefragmentationContext_T::DefragmentPassEnd(VmaDefragmentationPassMoveInfo& moveInfo)
{
VMA_ASSERT(moveInfo.moveCount > 0 ? moveInfo.pMoves != VMA_NULL : true);
VkResult result = VK_SUCCESS;
VmaStlAllocator<FragmentedBlock> blockAllocator(m_MoveAllocator.m_pCallbacks);
VmaVector<FragmentedBlock, VmaStlAllocator<FragmentedBlock>> immovableBlocks(blockAllocator);
VmaVector<FragmentedBlock, VmaStlAllocator<FragmentedBlock>> mappedBlocks(blockAllocator);
VmaAllocator allocator = VMA_NULL;
for (uint32_t i = 0; i < moveInfo.moveCount; ++i)
{
VmaDefragmentationMove& move = moveInfo.pMoves[i];
size_t prevCount = 0, currentCount = 0;
VkDeviceSize freedBlockSize = 0;
uint32_t vectorIndex;
VmaBlockVector* vector;
if (m_PoolBlockVector != VMA_NULL)
{
vectorIndex = 0;
vector = m_PoolBlockVector;
}
else
{
vectorIndex = move.srcAllocation->GetMemoryTypeIndex();
vector = m_pBlockVectors[vectorIndex];
VMA_ASSERT(vector != VMA_NULL);
}
switch (move.operation)
{
case VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY:
{
uint8_t mapCount = move.srcAllocation->SwapBlockAllocation(vector->m_hAllocator, move.dstTmpAllocation);
if (mapCount > 0)
{
allocator = vector->m_hAllocator;
VmaDeviceMemoryBlock* newMapBlock = move.srcAllocation->GetBlock();
bool notPresent = true;
for (FragmentedBlock& block : mappedBlocks)
{
if (block.block == newMapBlock)
{
notPresent = false;
block.data += mapCount;
break;
}
}
if (notPresent)
mappedBlocks.push_back({ mapCount, newMapBlock });
}
// Scope for locks; Free() has its own lock.
{
VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
prevCount = vector->GetBlockCount();
freedBlockSize = move.dstTmpAllocation->GetBlock()->m_pMetadata->GetSize();
}
vector->Free(move.dstTmpAllocation);
{
VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
currentCount = vector->GetBlockCount();
}
result = VK_INCOMPLETE;
break;
}
case VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE:
{
m_PassStats.bytesMoved -= move.srcAllocation->GetSize();
--m_PassStats.allocationsMoved;
vector->Free(move.dstTmpAllocation);
VmaDeviceMemoryBlock* newBlock = move.srcAllocation->GetBlock();
bool notPresent = true;
for (const FragmentedBlock& block : immovableBlocks)
{
if (block.block == newBlock)
{
notPresent = false;
break;
}
}
if (notPresent)
immovableBlocks.push_back({ vectorIndex, newBlock });
break;
}
case VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY:
{
m_PassStats.bytesMoved -= move.srcAllocation->GetSize();
--m_PassStats.allocationsMoved;
// Scope for locks; Free() has its own lock.
{
VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
prevCount = vector->GetBlockCount();
freedBlockSize = move.srcAllocation->GetBlock()->m_pMetadata->GetSize();
}
vector->Free(move.srcAllocation);
{
VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
currentCount = vector->GetBlockCount();
}
freedBlockSize *= prevCount - currentCount;
VkDeviceSize dstBlockSize;
{
VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
dstBlockSize = move.dstTmpAllocation->GetBlock()->m_pMetadata->GetSize();
}
vector->Free(move.dstTmpAllocation);
{
VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
freedBlockSize += dstBlockSize * (currentCount - vector->GetBlockCount());
currentCount = vector->GetBlockCount();
}
result = VK_INCOMPLETE;
break;
}
default:
VMA_ASSERT(0);
}
if (prevCount > currentCount)
{
size_t freedBlocks = prevCount - currentCount;
m_PassStats.deviceMemoryBlocksFreed += static_cast<uint32_t>(freedBlocks);
m_PassStats.bytesFreed += freedBlockSize;
}
if(m_Algorithm == VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT &&
m_AlgorithmState != VMA_NULL)
{
// Avoid unnecessary attempts to allocate when a new free block is available
StateExtensive& state = reinterpret_cast<StateExtensive*>(m_AlgorithmState)[vectorIndex];
if (state.firstFreeBlock != SIZE_MAX)
{
const size_t diff = prevCount - currentCount;
if (state.firstFreeBlock >= diff)
{
state.firstFreeBlock -= diff;
if (state.firstFreeBlock != 0)
state.firstFreeBlock -= vector->GetBlock(state.firstFreeBlock - 1)->m_pMetadata->IsEmpty();
}
else
state.firstFreeBlock = 0;
}
}
}
moveInfo.moveCount = 0;
moveInfo.pMoves = VMA_NULL;
m_Moves.clear();
// Update stats
m_GlobalStats.allocationsMoved += m_PassStats.allocationsMoved;
m_GlobalStats.bytesFreed += m_PassStats.bytesFreed;
m_GlobalStats.bytesMoved += m_PassStats.bytesMoved;
m_GlobalStats.deviceMemoryBlocksFreed += m_PassStats.deviceMemoryBlocksFreed;
m_PassStats = { 0 };
// Move blocks with immovable allocations according to the algorithm
if (immovableBlocks.size() > 0)
{
do
{
if(m_Algorithm == VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT)
{
if (m_AlgorithmState != VMA_NULL)
{
bool swapped = false;
// Move to the start of the free-block range
for (const FragmentedBlock& block : immovableBlocks)
{
StateExtensive& state = reinterpret_cast<StateExtensive*>(m_AlgorithmState)[block.data];
if (state.operation != StateExtensive::Operation::Cleanup)
{
VmaBlockVector* vector = m_pBlockVectors[block.data];
VmaMutexLockWrite lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
for (size_t i = 0, count = vector->GetBlockCount() - m_ImmovableBlockCount; i < count; ++i)
{
if (vector->GetBlock(i) == block.block)
{
VMA_SWAP(vector->m_Blocks[i], vector->m_Blocks[vector->GetBlockCount() - ++m_ImmovableBlockCount]);
if (state.firstFreeBlock != SIZE_MAX)
{
if (i + 1 < state.firstFreeBlock)
{
if (state.firstFreeBlock > 1)
VMA_SWAP(vector->m_Blocks[i], vector->m_Blocks[--state.firstFreeBlock]);
else
--state.firstFreeBlock;
}
}
swapped = true;
break;
}
}
}
}
if (swapped)
result = VK_INCOMPLETE;
break;
}
}
// Move to the beginning
for (const FragmentedBlock& block : immovableBlocks)
{
VmaBlockVector* vector = m_pBlockVectors[block.data];
VmaMutexLockWrite lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
for (size_t i = m_ImmovableBlockCount; i < vector->GetBlockCount(); ++i)
{
if (vector->GetBlock(i) == block.block)
{
VMA_SWAP(vector->m_Blocks[i], vector->m_Blocks[m_ImmovableBlockCount++]);
break;
}
}
}
} while (false);
}
// Bulk-map destination blocks
for (const FragmentedBlock& block : mappedBlocks)
{
VkResult res = block.block->Map(allocator, block.data, VMA_NULL);
VMA_ASSERT(res == VK_SUCCESS);
}
return result;
}
bool VmaDefragmentationContext_T::ComputeDefragmentation(VmaBlockVector& vector, size_t index)
{
switch (m_Algorithm)
{
case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT:
return ComputeDefragmentation_Fast(vector);
case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT:
return ComputeDefragmentation_Balanced(vector, index, true);
case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT:
return ComputeDefragmentation_Full(vector);
case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
return ComputeDefragmentation_Extensive(vector, index);
default:
VMA_ASSERT(0);
return ComputeDefragmentation_Balanced(vector, index, true);
}
}
VmaDefragmentationContext_T::MoveAllocationData VmaDefragmentationContext_T::GetMoveData(
VmaAllocHandle handle, VmaBlockMetadata* metadata)
{
MoveAllocationData moveData;
moveData.move.srcAllocation = (VmaAllocation)metadata->GetAllocationUserData(handle);
moveData.size = moveData.move.srcAllocation->GetSize();
moveData.alignment = moveData.move.srcAllocation->GetAlignment();
moveData.type = moveData.move.srcAllocation->GetSuballocationType();
moveData.flags = 0;
if (moveData.move.srcAllocation->IsPersistentMap())
moveData.flags |= VMA_ALLOCATION_CREATE_MAPPED_BIT;
if (moveData.move.srcAllocation->IsMappingAllowed())
moveData.flags |= VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT;
return moveData;
}
VmaDefragmentationContext_T::CounterStatus VmaDefragmentationContext_T::CheckCounters(VkDeviceSize bytes)
{
// Check the custom break criterion, if one was provided
if (m_BreakCallback && m_BreakCallback(m_BreakCallbackUserData))
return CounterStatus::End;
// Ignore this allocation if copying it would exceed the per-pass byte limit
if (m_PassStats.bytesMoved + bytes > m_MaxPassBytes)
{
if (++m_IgnoredAllocs < MAX_ALLOCS_TO_IGNORE)
return CounterStatus::Ignore;
else
return CounterStatus::End;
}
else
m_IgnoredAllocs = 0;
return CounterStatus::Pass;
}
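// Example with hypothetical numbers: with m_MaxPassBytes = 32 MiB and 30 MiB already
// moved in this pass, a 4 MiB allocation yields Ignore (it is skipped, up to
// MAX_ALLOCS_TO_IGNORE consecutive times, after which End stops the pass), while a
// 1 MiB allocation still fits the budget and yields Pass.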
bool VmaDefragmentationContext_T::IncrementCounters(VkDeviceSize bytes)
{
m_PassStats.bytesMoved += bytes;
// Early return when a per-pass maximum is reached
if (++m_PassStats.allocationsMoved >= m_MaxPassAllocations || m_PassStats.bytesMoved >= m_MaxPassBytes)
{
VMA_ASSERT((m_PassStats.allocationsMoved == m_MaxPassAllocations ||
m_PassStats.bytesMoved == m_MaxPassBytes) && "Exceeded maximal pass threshold!");
return true;
}
return false;
}
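// Illustrative sketch (not real data) of what ReallocWithinBlock() below achieves
// over successive calls within one block:
//
//   before: |---- free ----|-- A --|- free -|-- B --|
//   after:  |-- A --|-- B --|-------- free ---------|
//
// Each allocation is re-committed at a strictly lower offset inside the same block;
// the actual data copy is performed later by the caller of the defragmentation pass.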
bool VmaDefragmentationContext_T::ReallocWithinBlock(VmaBlockVector& vector, VmaDeviceMemoryBlock* block)
{
VmaBlockMetadata* metadata = block->m_pMetadata;
for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
handle != VK_NULL_HANDLE;
handle = metadata->GetNextAllocation(handle))
{
MoveAllocationData moveData = GetMoveData(handle, metadata);
// Ignore allocations newly created by the defragmentation algorithm
if (moveData.move.srcAllocation->GetUserData() == this)
continue;
switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
{
case CounterStatus::Ignore:
continue;
case CounterStatus::End:
return true;
case CounterStatus::Pass:
break;
default:
VMA_ASSERT(0);
}
VkDeviceSize offset = moveData.move.srcAllocation->GetOffset();
if (offset != 0 && metadata->GetSumFreeSize() >= moveData.size)
{
VmaAllocationRequest request = {};
if (metadata->CreateAllocationRequest(
moveData.size,
moveData.alignment,
false,
moveData.type,
VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
&request))
{
if (metadata->GetAllocationOffset(request.allocHandle) < offset)
{
if (vector.CommitAllocationRequest(
request,
block,
moveData.alignment,
moveData.flags,
this,
moveData.type,
&moveData.move.dstTmpAllocation) == VK_SUCCESS)
{
m_Moves.push_back(moveData.move);
if (IncrementCounters(moveData.size))
return true;
}
}
}
}
}
return false;
}
bool VmaDefragmentationContext_T::AllocInOtherBlock(size_t start, size_t end, MoveAllocationData& data, VmaBlockVector& vector)
{
for (; start < end; ++start)
{
VmaDeviceMemoryBlock* dstBlock = vector.GetBlock(start);
if (dstBlock->m_pMetadata->GetSumFreeSize() >= data.size)
{
if (vector.AllocateFromBlock(dstBlock,
data.size,
data.alignment,
data.flags,
this,
data.type,
0,
&data.move.dstTmpAllocation) == VK_SUCCESS)
{
m_Moves.push_back(data.move);
if (IncrementCounters(data.size))
return true;
break;
}
}
}
return false;
}
bool VmaDefragmentationContext_T::ComputeDefragmentation_Fast(VmaBlockVector& vector)
{
// Move allocations only between blocks:
// go through allocations in the last blocks and try to fit them into the first ones
for (size_t i = vector.GetBlockCount() - 1; i > m_ImmovableBlockCount; --i)
{
VmaBlockMetadata* metadata = vector.GetBlock(i)->m_pMetadata;
for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
handle != VK_NULL_HANDLE;
handle = metadata->GetNextAllocation(handle))
{
MoveAllocationData moveData = GetMoveData(handle, metadata);
// Ignore allocations newly created by the defragmentation algorithm
if (moveData.move.srcAllocation->GetUserData() == this)
continue;
switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
{
case CounterStatus::Ignore:
continue;
case CounterStatus::End:
return true;
case CounterStatus::Pass:
break;
default:
VMA_ASSERT(0);
}
// Check all previous blocks for free space
if (AllocInOtherBlock(0, i, moveData, vector))
return true;
}
}
return false;
}
bool VmaDefragmentationContext_T::ComputeDefragmentation_Balanced(VmaBlockVector& vector, size_t index, bool update)
{
// Go over every allocation and try to fit it into previous blocks at the lowest offsets;
// if not possible, realloc within a single block to minimize its offset (excluding offset == 0),
// but only if there are noticeable gaps around it (a heuristic based e.g. on the average allocation size in the block)
VMA_ASSERT(m_AlgorithmState != VMA_NULL);
StateBalanced& vectorState = reinterpret_cast<StateBalanced*>(m_AlgorithmState)[index];
if (update && vectorState.avgAllocSize == UINT64_MAX)
UpdateVectorStatistics(vector, vectorState);
const size_t startMoveCount = m_Moves.size();
VkDeviceSize minimalFreeRegion = vectorState.avgFreeSize / 2;
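// Illustrative numbers (hypothetical): with avgFreeSize = 8 MiB the threshold is
// minimalFreeRegion = 4 MiB, so the in-block realloc below is attempted only next
// to a gap of at least 4 MiB, or for allocations no larger than the average free
// region or average allocation size.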
for (size_t i = vector.GetBlockCount() - 1; i > m_ImmovableBlockCount; --i)
{
VmaDeviceMemoryBlock* block = vector.GetBlock(i);
VmaBlockMetadata* metadata = block->m_pMetadata;
VkDeviceSize prevFreeRegionSize = 0;
for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
handle != VK_NULL_HANDLE;
handle = metadata->GetNextAllocation(handle))
{
MoveAllocationData moveData = GetMoveData(handle, metadata);
// Ignore allocations newly created by the defragmentation algorithm
if (moveData.move.srcAllocation->GetUserData() == this)
continue;
switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
{
case CounterStatus::Ignore:
continue;
case CounterStatus::End:
return true;
case CounterStatus::Pass:
break;
default:
VMA_ASSERT(0);
}
// Check all previous blocks for free space
const size_t prevMoveCount = m_Moves.size();
if (AllocInOtherBlock(0, i, moveData, vector))
return true;
VkDeviceSize nextFreeRegionSize = metadata->GetNextFreeRegionSize(handle);
// If no room was found, realloc within the block to get a lower offset
VkDeviceSize offset = moveData.move.srcAllocation->GetOffset();
if (prevMoveCount == m_Moves.size() && offset != 0 && metadata->GetSumFreeSize() >= moveData.size)
{
// Check if realloc will make sense
if (prevFreeRegionSize >= minimalFreeRegion ||
nextFreeRegionSize >= minimalFreeRegion ||
moveData.size <= vectorState.avgFreeSize ||
moveData.size <= vectorState.avgAllocSize)
{
VmaAllocationRequest request = {};
if (metadata->CreateAllocationRequest(
moveData.size,
moveData.alignment,
false,
moveData.type,
VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
&request))
{
if (metadata->GetAllocationOffset(request.allocHandle) < offset)
{
if (vector.CommitAllocationRequest(
request,
block,
moveData.alignment,
moveData.flags,
this,
moveData.type,
&moveData.move.dstTmpAllocation) == VK_SUCCESS)
{
m_Moves.push_back(moveData.move);
if (IncrementCounters(moveData.size))
return true;
}
}
}
}
}
prevFreeRegionSize = nextFreeRegionSize;
}
}
// No moves were performed; mark the statistics as stale and retry
if (startMoveCount == m_Moves.size() && !update)
{
vectorState.avgAllocSize = UINT64_MAX;
return ComputeDefragmentation_Balanced(vector, index, false);
}
return false;
}
bool VmaDefragmentationContext_T::ComputeDefragmentation_Full(VmaBlockVector& vector)
{
// Go over every allocation and try to fit it into previous blocks at the lowest offsets;
// if not possible, realloc within a single block to minimize its offset (excluding offset == 0)
for (size_t i = vector.GetBlockCount() - 1; i > m_ImmovableBlockCount; --i)
{
VmaDeviceMemoryBlock* block = vector.GetBlock(i);
VmaBlockMetadata* metadata = block->m_pMetadata;
for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
handle != VK_NULL_HANDLE;
handle = metadata->GetNextAllocation(handle))
{
MoveAllocationData moveData = GetMoveData(handle, metadata);
// Ignore allocations newly created by the defragmentation algorithm
if (moveData.move.srcAllocation->GetUserData() == this)
continue;
switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
{
case CounterStatus::Ignore:
continue;
case CounterStatus::End:
return true;
case CounterStatus::Pass:
break;
default:
VMA_ASSERT(0);
}
// Check all previous blocks for free space
const size_t prevMoveCount = m_Moves.size();
if (AllocInOtherBlock(0, i, moveData, vector))
return true;
// If no room was found, realloc within the block to get a lower offset
VkDeviceSize offset = moveData.move.srcAllocation->GetOffset();
if (prevMoveCount == m_Moves.size() && offset != 0 && metadata->GetSumFreeSize() >= moveData.size)
{
VmaAllocationRequest request = {};
if (metadata->CreateAllocationRequest(
moveData.size,
moveData.alignment,
false,
moveData.type,
VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
&request))
{
if (metadata->GetAllocationOffset(request.allocHandle) < offset)
{
if (vector.CommitAllocationRequest(
request,
block,
moveData.alignment,
moveData.flags,
this,
moveData.type,
&moveData.move.dstTmpAllocation) == VK_SUCCESS)
{
m_Moves.push_back(moveData.move);
if (IncrementCounters(moveData.size))
return true;
}
}
}
}
}
}
return false;
}
bool VmaDefragmentationContext_T::ComputeDefragmentation_Extensive(VmaBlockVector& vector, size_t index)
{
// First free a single block, then populate it to the brim, then free another block, and so on.
// Fall back to the Full algorithm, since without granularity conflicts it can achieve maximum packing.
if (vector.m_BufferImageGranularity == 1)
return ComputeDefragmentation_Full(vector);
VMA_ASSERT(m_AlgorithmState != VMA_NULL);
StateExtensive& vectorState = reinterpret_cast<StateExtensive*>(m_AlgorithmState)[index];
bool texturePresent = false, bufferPresent = false, otherPresent = false;
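// Rough state progression per block vector, driven by the switch below:
// FindFreeBlockTexture -> MoveTextures -> ... -> FindFreeBlockBuffer -> MoveBuffers
// -> ... -> FindFreeBlockAll -> MoveAll -> Cleanup -> Done. The FindFreeBlock*
// states empty one block; the Move* states then pack a single resource type at a
// time into the freed space to avoid bufferImageGranularity conflicts.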
switch (vectorState.operation)
{
case StateExtensive::Operation::Done: // Vector defragmented
return false;
case StateExtensive::Operation::FindFreeBlockBuffer:
case StateExtensive::Operation::FindFreeBlockTexture:
case StateExtensive::Operation::FindFreeBlockAll:
{
// No more blocks to free: just perform a fast realloc and move on to cleanup
if (vectorState.firstFreeBlock == 0)
{
vectorState.operation = StateExtensive::Operation::Cleanup;
return ComputeDefragmentation_Fast(vector);
}
// There is no free block yet, so the last one has to be cleared
size_t last = (vectorState.firstFreeBlock == SIZE_MAX ? vector.GetBlockCount() : vectorState.firstFreeBlock) - 1;
VmaBlockMetadata* freeMetadata = vector.GetBlock(last)->m_pMetadata;
const size_t prevMoveCount = m_Moves.size();
for (VmaAllocHandle handle = freeMetadata->GetAllocationListBegin();
handle != VK_NULL_HANDLE;
handle = freeMetadata->GetNextAllocation(handle))
{
MoveAllocationData moveData = GetMoveData(handle, freeMetadata);
switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
{
case CounterStatus::Ignore:
continue;
case CounterStatus::End:
return true;
case CounterStatus::Pass:
break;
default:
VMA_ASSERT(0);
}
// Check all previous blocks for free space
if (AllocInOtherBlock(0, last, moveData, vector))
{
// Full clear performed already
if (prevMoveCount != m_Moves.size() && freeMetadata->GetNextAllocation(handle) == VK_NULL_HANDLE)
vectorState.firstFreeBlock = last;
return true;
}
}
if (prevMoveCount == m_Moves.size())
{
// Cannot perform a full clear; data in the other blocks has to be moved around
if (last != 0)
{
for (size_t i = last - 1; i; --i)
{
if (ReallocWithinBlock(vector, vector.GetBlock(i)))
return true;
}
}
if (prevMoveCount == m_Moves.size())
{
// No reallocations possible within the blocks; try to move data between them using the fast algorithm
return ComputeDefragmentation_Fast(vector);
}
}
else
{
switch (vectorState.operation)
{
case StateExtensive::Operation::FindFreeBlockBuffer:
vectorState.operation = StateExtensive::Operation::MoveBuffers;
break;
case StateExtensive::Operation::FindFreeBlockTexture:
vectorState.operation = StateExtensive::Operation::MoveTextures;
break;
case StateExtensive::Operation::FindFreeBlockAll:
vectorState.operation = StateExtensive::Operation::MoveAll;
break;
default:
VMA_ASSERT(0);
vectorState.operation = StateExtensive::Operation::MoveTextures;
}
vectorState.firstFreeBlock = last;
// Nothing was moved: a free block was found without reallocations, so more reallocations can be performed in the same pass
return ComputeDefragmentation_Extensive(vector, index);
}
break;
}
case StateExtensive::Operation::MoveTextures:
{
if (MoveDataToFreeBlocks(VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL, vector,
vectorState.firstFreeBlock, texturePresent, bufferPresent, otherPresent))
{
if (texturePresent)
{
vectorState.operation = StateExtensive::Operation::FindFreeBlockTexture;
return ComputeDefragmentation_Extensive(vector, index);
}
if (!bufferPresent && !otherPresent)
{
vectorState.operation = StateExtensive::Operation::Cleanup;
break;
}
// No more textures to move, check buffers
vectorState.operation = StateExtensive::Operation::MoveBuffers;
bufferPresent = false;
otherPresent = false;
}
else
break;
VMA_FALLTHROUGH; // Fallthrough
}
case StateExtensive::Operation::MoveBuffers:
{
if (MoveDataToFreeBlocks(VMA_SUBALLOCATION_TYPE_BUFFER, vector,
vectorState.firstFreeBlock, texturePresent, bufferPresent, otherPresent))
{
if (bufferPresent)
{
vectorState.operation = StateExtensive::Operation::FindFreeBlockBuffer;
return ComputeDefragmentation_Extensive(vector, index);
}
if (!otherPresent)
{
vectorState.operation = StateExtensive::Operation::Cleanup;
break;
}
// No more buffers to move, check all others
vectorState.operation = StateExtensive::Operation::MoveAll;
otherPresent = false;
}
else
break;
VMA_FALLTHROUGH; // Fallthrough
}
case StateExtensive::Operation::MoveAll:
{
if (MoveDataToFreeBlocks(VMA_SUBALLOCATION_TYPE_FREE, vector,
vectorState.firstFreeBlock, texturePresent, bufferPresent, otherPresent))
{
if (otherPresent)
{
vectorState.operation = StateExtensive::Operation::FindFreeBlockBuffer;
return ComputeDefragmentation_Extensive(vector, index);
}
// Everything moved
vectorState.operation = StateExtensive::Operation::Cleanup;
}
break;
}
case StateExtensive::Operation::Cleanup:
// Cleanup is handled below so that other operations may reuse the cleanup code. This case is here to prevent the unhandled enum value warning (C4062).
break;
}
if (vectorState.operation == StateExtensive::Operation::Cleanup)
{
// All other work done, pack data in blocks even tighter if possible
const size_t prevMoveCount = m_Moves.size();
for (size_t i = 0; i < vector.GetBlockCount(); ++i)
{
if (ReallocWithinBlock(vector, vector.GetBlock(i)))
return true;
}
if (prevMoveCount == m_Moves.size())
vectorState.operation = StateExtensive::Operation::Done;
}
return false;
}
void VmaDefragmentationContext_T::UpdateVectorStatistics(VmaBlockVector& vector, StateBalanced& state)
{
size_t allocCount = 0;
size_t freeCount = 0;
state.avgFreeSize = 0;
state.avgAllocSize = 0;
for (size_t i = 0; i < vector.GetBlockCount(); ++i)
{
VmaBlockMetadata* metadata = vector.GetBlock(i)->m_pMetadata;
allocCount += metadata->GetAllocationCount();
freeCount += metadata->GetFreeRegionsCount();
state.avgFreeSize += metadata->GetSumFreeSize();
state.avgAllocSize += metadata->GetSize();
}
state.avgAllocSize = (state.avgAllocSize - state.avgFreeSize) / allocCount;
state.avgFreeSize /= freeCount;
}
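// Worked example with hypothetical numbers: blocks totaling 64 MiB with 16 MiB free
// spread over 4 free regions and 12 allocations give
// avgAllocSize = (64 - 16) / 12 = 4 MiB and avgFreeSize = 16 / 4 = 4 MiB.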
bool VmaDefragmentationContext_T::MoveDataToFreeBlocks(VmaSuballocationType currentType,
VmaBlockVector& vector, size_t firstFreeBlock,
bool& texturePresent, bool& bufferPresent, bool& otherPresent)
{
const size_t prevMoveCount = m_Moves.size();
for (size_t i = firstFreeBlock ; i;)
{
VmaDeviceMemoryBlock* block = vector.GetBlock(--i);
VmaBlockMetadata* metadata = block->m_pMetadata;
for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
handle != VK_NULL_HANDLE;
handle = metadata->GetNextAllocation(handle))
{
MoveAllocationData moveData = GetMoveData(handle, metadata);
// Ignore allocations newly created by the defragmentation algorithm
if (moveData.move.srcAllocation->GetUserData() == this)
continue;
switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
{
case CounterStatus::Ignore:
continue;
case CounterStatus::End:
return true;
case CounterStatus::Pass:
break;
default:
VMA_ASSERT(0);
}
// Move only a single type of resource at a time
if (!VmaIsBufferImageGranularityConflict(moveData.type, currentType))
{
// Try to fit allocation into free blocks
if (AllocInOtherBlock(firstFreeBlock, vector.GetBlockCount(), moveData, vector))
return false;
}
if (!VmaIsBufferImageGranularityConflict(moveData.type, VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL))
texturePresent = true;
else if (!VmaIsBufferImageGranularityConflict(moveData.type, VMA_SUBALLOCATION_TYPE_BUFFER))
bufferPresent = true;
else
otherPresent = true;
}
}
return prevMoveCount == m_Moves.size();
}
#endif // _VMA_DEFRAGMENTATION_CONTEXT_FUNCTIONS
#ifndef _VMA_POOL_T_FUNCTIONS
VmaPool_T::VmaPool_T(
VmaAllocator hAllocator,
const VmaPoolCreateInfo& createInfo,
VkDeviceSize preferredBlockSize)
: m_BlockVector(
hAllocator,
this, // hParentPool
createInfo.memoryTypeIndex,
createInfo.blockSize != 0 ? createInfo.blockSize : preferredBlockSize,
createInfo.minBlockCount,
createInfo.maxBlockCount,
(createInfo.flags & VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT) != 0 ? 1 : hAllocator->GetBufferImageGranularity(),
createInfo.blockSize != 0, // explicitBlockSize
createInfo.flags & VMA_POOL_CREATE_ALGORITHM_MASK, // algorithm
createInfo.priority,
VMA_MAX(hAllocator->GetMemoryTypeMinAlignment(createInfo.memoryTypeIndex), createInfo.minAllocationAlignment),
createInfo.pMemoryAllocateNext),
m_Id(0),
m_Name(VMA_NULL) {}
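// A pool wrapping this block vector is created through the public API. Sketch
// (values are hypothetical; memTypeIndex would come from vmaFindMemoryTypeIndex()):
//
//   VmaPoolCreateInfo poolInfo = {};
//   poolInfo.memoryTypeIndex = memTypeIndex;
//   poolInfo.blockSize = 64ull * 1024 * 1024; // 0 would mean the preferred block size.
//   poolInfo.maxBlockCount = 8;
//   VmaPool pool;
//   VkResult res = vmaCreatePool(allocator, &poolInfo, &pool);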
VmaPool_T::~VmaPool_T()
{
VMA_ASSERT(m_PrevPool == VMA_NULL && m_NextPool == VMA_NULL);
}
void VmaPool_T::SetName(const char* pName)
{
const VkAllocationCallbacks* allocs = m_BlockVector.GetAllocator()->GetAllocationCallbacks();
VmaFreeString(allocs, m_Name);
if (pName != VMA_NULL)
{
m_Name = VmaCreateStringCopy(allocs, pName);
}
else
{
m_Name = VMA_NULL;
}
}
#endif // _VMA_POOL_T_FUNCTIONS
#ifndef _VMA_ALLOCATOR_T_FUNCTIONS
VmaAllocator_T::VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo) :
m_UseMutex((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT) == 0),
m_VulkanApiVersion(pCreateInfo->vulkanApiVersion != 0 ? pCreateInfo->vulkanApiVersion : VK_API_VERSION_1_0),
m_UseKhrDedicatedAllocation((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0),
m_UseKhrBindMemory2((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT) != 0),
m_UseExtMemoryBudget((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT) != 0),
m_UseAmdDeviceCoherentMemory((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT) != 0),
m_UseKhrBufferDeviceAddress((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT) != 0),
m_UseExtMemoryPriority((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT) != 0),
m_hDevice(pCreateInfo->device),
m_hInstance(pCreateInfo->instance),
m_AllocationCallbacksSpecified(pCreateInfo->pAllocationCallbacks != VMA_NULL),
m_AllocationCallbacks(pCreateInfo->pAllocationCallbacks ?
*pCreateInfo->pAllocationCallbacks : VmaEmptyAllocationCallbacks),
m_AllocationObjectAllocator(&m_AllocationCallbacks),
m_HeapSizeLimitMask(0),
m_DeviceMemoryCount(0),
m_PreferredLargeHeapBlockSize(0),
m_PhysicalDevice(pCreateInfo->physicalDevice),
m_GpuDefragmentationMemoryTypeBits(UINT32_MAX),
m_NextPoolId(0),
m_GlobalMemoryTypeBits(UINT32_MAX)
{
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
{
m_UseKhrDedicatedAllocation = false;
m_UseKhrBindMemory2 = false;
}
if(VMA_DEBUG_DETECT_CORRUPTION)
{
// Needs to be a multiple of sizeof(uint32_t) because we are going to write VMA_CORRUPTION_DETECTION_MAGIC_VALUE to it.
VMA_ASSERT(VMA_DEBUG_MARGIN % sizeof(uint32_t) == 0);
}
VMA_ASSERT(pCreateInfo->physicalDevice && pCreateInfo->device && pCreateInfo->instance);
if(m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0))
{
#if !(VMA_DEDICATED_ALLOCATION)
if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0)
{
VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT set but required extensions are disabled by preprocessor macros.");
}
#endif
#if !(VMA_BIND_MEMORY2)
if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT) != 0)
{
VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT set but required extension is disabled by preprocessor macros.");
}
#endif
}
#if !(VMA_MEMORY_BUDGET)
if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT) != 0)
{
VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT set but required extension is disabled by preprocessor macros.");
}
#endif
#if !(VMA_BUFFER_DEVICE_ADDRESS)
if(m_UseKhrBufferDeviceAddress)
{
VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT is set but required extension or Vulkan 1.2 is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
}
#endif
#if VMA_VULKAN_VERSION < 1003000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 3, 0))
{
VMA_ASSERT(0 && "vulkanApiVersion >= VK_API_VERSION_1_3 but required Vulkan version is disabled by preprocessor macros.");
}
#endif
#if VMA_VULKAN_VERSION < 1002000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 2, 0))
{
VMA_ASSERT(0 && "vulkanApiVersion >= VK_API_VERSION_1_2 but required Vulkan version is disabled by preprocessor macros.");
}
#endif
#if VMA_VULKAN_VERSION < 1001000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
{
VMA_ASSERT(0 && "vulkanApiVersion >= VK_API_VERSION_1_1 but required Vulkan version is disabled by preprocessor macros.");
}
#endif
#if !(VMA_MEMORY_PRIORITY)
if(m_UseExtMemoryPriority)
{
VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT is set but required extension is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
}
#endif
memset(&m_DeviceMemoryCallbacks, 0, sizeof(m_DeviceMemoryCallbacks));
memset(&m_PhysicalDeviceProperties, 0, sizeof(m_PhysicalDeviceProperties));
memset(&m_MemProps, 0, sizeof(m_MemProps));
memset(&m_pBlockVectors, 0, sizeof(m_pBlockVectors));
memset(&m_VulkanFunctions, 0, sizeof(m_VulkanFunctions));
#if VMA_EXTERNAL_MEMORY
memset(&m_TypeExternalMemoryHandleTypes, 0, sizeof(m_TypeExternalMemoryHandleTypes));
#endif // #if VMA_EXTERNAL_MEMORY
if(pCreateInfo->pDeviceMemoryCallbacks != VMA_NULL)
{
m_DeviceMemoryCallbacks.pUserData = pCreateInfo->pDeviceMemoryCallbacks->pUserData;
m_DeviceMemoryCallbacks.pfnAllocate = pCreateInfo->pDeviceMemoryCallbacks->pfnAllocate;
m_DeviceMemoryCallbacks.pfnFree = pCreateInfo->pDeviceMemoryCallbacks->pfnFree;
}
ImportVulkanFunctions(pCreateInfo->pVulkanFunctions);
(*m_VulkanFunctions.vkGetPhysicalDeviceProperties)(m_PhysicalDevice, &m_PhysicalDeviceProperties);
(*m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties)(m_PhysicalDevice, &m_MemProps);
VMA_ASSERT(VmaIsPow2(VMA_MIN_ALIGNMENT));
VMA_ASSERT(VmaIsPow2(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY));
VMA_ASSERT(VmaIsPow2(m_PhysicalDeviceProperties.limits.bufferImageGranularity));
VMA_ASSERT(VmaIsPow2(m_PhysicalDeviceProperties.limits.nonCoherentAtomSize));
m_PreferredLargeHeapBlockSize = (pCreateInfo->preferredLargeHeapBlockSize != 0) ?
pCreateInfo->preferredLargeHeapBlockSize : static_cast<VkDeviceSize>(VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);
m_GlobalMemoryTypeBits = CalculateGlobalMemoryTypeBits();
#if VMA_EXTERNAL_MEMORY
if(pCreateInfo->pTypeExternalMemoryHandleTypes != VMA_NULL)
{
memcpy(m_TypeExternalMemoryHandleTypes, pCreateInfo->pTypeExternalMemoryHandleTypes,
sizeof(VkExternalMemoryHandleTypeFlagsKHR) * GetMemoryTypeCount());
}
#endif // #if VMA_EXTERNAL_MEMORY
if(pCreateInfo->pHeapSizeLimit != VMA_NULL)
{
for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
{
const VkDeviceSize limit = pCreateInfo->pHeapSizeLimit[heapIndex];
if(limit != VK_WHOLE_SIZE)
{
m_HeapSizeLimitMask |= 1u << heapIndex;
if(limit < m_MemProps.memoryHeaps[heapIndex].size)
{
m_MemProps.memoryHeaps[heapIndex].size = limit;
}
}
}
}
for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
{
// Create only supported types
if((m_GlobalMemoryTypeBits & (1u << memTypeIndex)) != 0)
{
const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(memTypeIndex);
m_pBlockVectors[memTypeIndex] = vma_new(this, VmaBlockVector)(
this,
VK_NULL_HANDLE, // hParentPool
memTypeIndex,
preferredBlockSize,
0,
SIZE_MAX,
GetBufferImageGranularity(),
false, // explicitBlockSize
0, // algorithm
0.5f, // priority (0.5 is the default per Vulkan spec)
GetMemoryTypeMinAlignment(memTypeIndex), // minAllocationAlignment
VMA_NULL); // pMemoryAllocateNext
// No need to call m_pBlockVectors[memTypeIndex]->CreateMinBlocks here,
// because minBlockCount is 0.
}
}
}
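// This constructor runs as part of vmaCreateAllocator(). A minimal sketch of the
// public entry point (the Vulkan handles are assumed to be created elsewhere):
//
//   VmaAllocatorCreateInfo allocatorInfo = {};
//   allocatorInfo.vulkanApiVersion = VK_API_VERSION_1_2;
//   allocatorInfo.physicalDevice = physicalDevice;
//   allocatorInfo.device = device;
//   allocatorInfo.instance = instance;
//   VmaAllocator allocator;
//   VkResult res = vmaCreateAllocator(&allocatorInfo, &allocator);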
VkResult VmaAllocator_T::Init(const VmaAllocatorCreateInfo* pCreateInfo)
{
VkResult res = VK_SUCCESS;
#if VMA_MEMORY_BUDGET
if(m_UseExtMemoryBudget)
{
UpdateVulkanBudget();
}
#endif // #if VMA_MEMORY_BUDGET
return res;
}
VmaAllocator_T::~VmaAllocator_T()
{
VMA_ASSERT(m_Pools.IsEmpty());
for(size_t memTypeIndex = GetMemoryTypeCount(); memTypeIndex--; )
{
vma_delete(this, m_pBlockVectors[memTypeIndex]);
}
}
void VmaAllocator_T::ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions)
{
#if VMA_STATIC_VULKAN_FUNCTIONS == 1
ImportVulkanFunctions_Static();
#endif
if(pVulkanFunctions != VMA_NULL)
{
ImportVulkanFunctions_Custom(pVulkanFunctions);
}
#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
ImportVulkanFunctions_Dynamic();
#endif
ValidateVulkanFunctions();
}
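// With VMA_DYNAMIC_VULKAN_FUNCTIONS == 1 the caller only has to supply the two
// loader entry points; everything else is fetched by ImportVulkanFunctions_Dynamic().
// Caller-side sketch (allocatorInfo is a VmaAllocatorCreateInfo):
//
//   VmaVulkanFunctions vulkanFunctions = {};
//   vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
//   vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;
//   allocatorInfo.pVulkanFunctions = &vulkanFunctions;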
#if VMA_STATIC_VULKAN_FUNCTIONS == 1
void VmaAllocator_T::ImportVulkanFunctions_Static()
{
// Vulkan 1.0
m_VulkanFunctions.vkGetInstanceProcAddr = (PFN_vkGetInstanceProcAddr)vkGetInstanceProcAddr;
m_VulkanFunctions.vkGetDeviceProcAddr = (PFN_vkGetDeviceProcAddr)vkGetDeviceProcAddr;
m_VulkanFunctions.vkGetPhysicalDeviceProperties = (PFN_vkGetPhysicalDeviceProperties)vkGetPhysicalDeviceProperties;
m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties = (PFN_vkGetPhysicalDeviceMemoryProperties)vkGetPhysicalDeviceMemoryProperties;
m_VulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkAllocateMemory;
m_VulkanFunctions.vkFreeMemory = (PFN_vkFreeMemory)vkFreeMemory;
m_VulkanFunctions.vkMapMemory = (PFN_vkMapMemory)vkMapMemory;
m_VulkanFunctions.vkUnmapMemory = (PFN_vkUnmapMemory)vkUnmapMemory;
m_VulkanFunctions.vkFlushMappedMemoryRanges = (PFN_vkFlushMappedMemoryRanges)vkFlushMappedMemoryRanges;
m_VulkanFunctions.vkInvalidateMappedMemoryRanges = (PFN_vkInvalidateMappedMemoryRanges)vkInvalidateMappedMemoryRanges;
m_VulkanFunctions.vkBindBufferMemory = (PFN_vkBindBufferMemory)vkBindBufferMemory;
m_VulkanFunctions.vkBindImageMemory = (PFN_vkBindImageMemory)vkBindImageMemory;
m_VulkanFunctions.vkGetBufferMemoryRequirements = (PFN_vkGetBufferMemoryRequirements)vkGetBufferMemoryRequirements;
m_VulkanFunctions.vkGetImageMemoryRequirements = (PFN_vkGetImageMemoryRequirements)vkGetImageMemoryRequirements;
m_VulkanFunctions.vkCreateBuffer = (PFN_vkCreateBuffer)vkCreateBuffer;
m_VulkanFunctions.vkDestroyBuffer = (PFN_vkDestroyBuffer)vkDestroyBuffer;
m_VulkanFunctions.vkCreateImage = (PFN_vkCreateImage)vkCreateImage;
m_VulkanFunctions.vkDestroyImage = (PFN_vkDestroyImage)vkDestroyImage;
m_VulkanFunctions.vkCmdCopyBuffer = (PFN_vkCmdCopyBuffer)vkCmdCopyBuffer;
// Vulkan 1.1
#if VMA_VULKAN_VERSION >= 1001000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
{
m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR = (PFN_vkGetBufferMemoryRequirements2)vkGetBufferMemoryRequirements2;
m_VulkanFunctions.vkGetImageMemoryRequirements2KHR = (PFN_vkGetImageMemoryRequirements2)vkGetImageMemoryRequirements2;
m_VulkanFunctions.vkBindBufferMemory2KHR = (PFN_vkBindBufferMemory2)vkBindBufferMemory2;
m_VulkanFunctions.vkBindImageMemory2KHR = (PFN_vkBindImageMemory2)vkBindImageMemory2;
}
#endif
#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
{
m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties2KHR = (PFN_vkGetPhysicalDeviceMemoryProperties2)vkGetPhysicalDeviceMemoryProperties2;
}
#endif
#if VMA_VULKAN_VERSION >= 1003000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 3, 0))
{
m_VulkanFunctions.vkGetDeviceBufferMemoryRequirements = (PFN_vkGetDeviceBufferMemoryRequirements)vkGetDeviceBufferMemoryRequirements;
m_VulkanFunctions.vkGetDeviceImageMemoryRequirements = (PFN_vkGetDeviceImageMemoryRequirements)vkGetDeviceImageMemoryRequirements;
}
#endif
}
#endif // VMA_STATIC_VULKAN_FUNCTIONS == 1
void VmaAllocator_T::ImportVulkanFunctions_Custom(const VmaVulkanFunctions* pVulkanFunctions)
{
VMA_ASSERT(pVulkanFunctions != VMA_NULL);
#define VMA_COPY_IF_NOT_NULL(funcName) \
if(pVulkanFunctions->funcName != VMA_NULL) m_VulkanFunctions.funcName = pVulkanFunctions->funcName;
VMA_COPY_IF_NOT_NULL(vkGetInstanceProcAddr);
VMA_COPY_IF_NOT_NULL(vkGetDeviceProcAddr);
VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceProperties);
VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties);
VMA_COPY_IF_NOT_NULL(vkAllocateMemory);
VMA_COPY_IF_NOT_NULL(vkFreeMemory);
VMA_COPY_IF_NOT_NULL(vkMapMemory);
VMA_COPY_IF_NOT_NULL(vkUnmapMemory);
VMA_COPY_IF_NOT_NULL(vkFlushMappedMemoryRanges);
VMA_COPY_IF_NOT_NULL(vkInvalidateMappedMemoryRanges);
VMA_COPY_IF_NOT_NULL(vkBindBufferMemory);
VMA_COPY_IF_NOT_NULL(vkBindImageMemory);
VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements);
VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements);
VMA_COPY_IF_NOT_NULL(vkCreateBuffer);
VMA_COPY_IF_NOT_NULL(vkDestroyBuffer);
VMA_COPY_IF_NOT_NULL(vkCreateImage);
VMA_COPY_IF_NOT_NULL(vkDestroyImage);
VMA_COPY_IF_NOT_NULL(vkCmdCopyBuffer);
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements2KHR);
VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements2KHR);
#endif
#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
VMA_COPY_IF_NOT_NULL(vkBindBufferMemory2KHR);
VMA_COPY_IF_NOT_NULL(vkBindImageMemory2KHR);
#endif
#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties2KHR);
#endif
#if VMA_VULKAN_VERSION >= 1003000
VMA_COPY_IF_NOT_NULL(vkGetDeviceBufferMemoryRequirements);
VMA_COPY_IF_NOT_NULL(vkGetDeviceImageMemoryRequirements);
#endif
#undef VMA_COPY_IF_NOT_NULL
}
#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
void VmaAllocator_T::ImportVulkanFunctions_Dynamic()
{
VMA_ASSERT(m_VulkanFunctions.vkGetInstanceProcAddr && m_VulkanFunctions.vkGetDeviceProcAddr &&
"To use VMA_DYNAMIC_VULKAN_FUNCTIONS in new versions of VMA you now have to pass "
"VmaVulkanFunctions::vkGetInstanceProcAddr and vkGetDeviceProcAddr as VmaAllocatorCreateInfo::pVulkanFunctions. "
"Other members can be null.");
#define VMA_FETCH_INSTANCE_FUNC(memberName, functionPointerType, functionNameString) \
if(m_VulkanFunctions.memberName == VMA_NULL) \
m_VulkanFunctions.memberName = \
(functionPointerType)m_VulkanFunctions.vkGetInstanceProcAddr(m_hInstance, functionNameString);
#define VMA_FETCH_DEVICE_FUNC(memberName, functionPointerType, functionNameString) \
if(m_VulkanFunctions.memberName == VMA_NULL) \
m_VulkanFunctions.memberName = \
(functionPointerType)m_VulkanFunctions.vkGetDeviceProcAddr(m_hDevice, functionNameString);
VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceProperties, PFN_vkGetPhysicalDeviceProperties, "vkGetPhysicalDeviceProperties");
VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties, PFN_vkGetPhysicalDeviceMemoryProperties, "vkGetPhysicalDeviceMemoryProperties");
VMA_FETCH_DEVICE_FUNC(vkAllocateMemory, PFN_vkAllocateMemory, "vkAllocateMemory");
VMA_FETCH_DEVICE_FUNC(vkFreeMemory, PFN_vkFreeMemory, "vkFreeMemory");
VMA_FETCH_DEVICE_FUNC(vkMapMemory, PFN_vkMapMemory, "vkMapMemory");
VMA_FETCH_DEVICE_FUNC(vkUnmapMemory, PFN_vkUnmapMemory, "vkUnmapMemory");
VMA_FETCH_DEVICE_FUNC(vkFlushMappedMemoryRanges, PFN_vkFlushMappedMemoryRanges, "vkFlushMappedMemoryRanges");
VMA_FETCH_DEVICE_FUNC(vkInvalidateMappedMemoryRanges, PFN_vkInvalidateMappedMemoryRanges, "vkInvalidateMappedMemoryRanges");
VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory, PFN_vkBindBufferMemory, "vkBindBufferMemory");
VMA_FETCH_DEVICE_FUNC(vkBindImageMemory, PFN_vkBindImageMemory, "vkBindImageMemory");
VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements, PFN_vkGetBufferMemoryRequirements, "vkGetBufferMemoryRequirements");
VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements, PFN_vkGetImageMemoryRequirements, "vkGetImageMemoryRequirements");
VMA_FETCH_DEVICE_FUNC(vkCreateBuffer, PFN_vkCreateBuffer, "vkCreateBuffer");
VMA_FETCH_DEVICE_FUNC(vkDestroyBuffer, PFN_vkDestroyBuffer, "vkDestroyBuffer");
VMA_FETCH_DEVICE_FUNC(vkCreateImage, PFN_vkCreateImage, "vkCreateImage");
VMA_FETCH_DEVICE_FUNC(vkDestroyImage, PFN_vkDestroyImage, "vkDestroyImage");
VMA_FETCH_DEVICE_FUNC(vkCmdCopyBuffer, PFN_vkCmdCopyBuffer, "vkCmdCopyBuffer");
#if VMA_VULKAN_VERSION >= 1001000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
{
VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements2KHR, PFN_vkGetBufferMemoryRequirements2, "vkGetBufferMemoryRequirements2");
VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements2KHR, PFN_vkGetImageMemoryRequirements2, "vkGetImageMemoryRequirements2");
VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory2KHR, PFN_vkBindBufferMemory2, "vkBindBufferMemory2");
VMA_FETCH_DEVICE_FUNC(vkBindImageMemory2KHR, PFN_vkBindImageMemory2, "vkBindImageMemory2");
}
#endif
#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
{
VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2, "vkGetPhysicalDeviceMemoryProperties2");
}
else if(m_UseExtMemoryBudget)
{
VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2, "vkGetPhysicalDeviceMemoryProperties2KHR");
}
#endif
#if VMA_DEDICATED_ALLOCATION
if(m_UseKhrDedicatedAllocation)
{
VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements2KHR, PFN_vkGetBufferMemoryRequirements2KHR, "vkGetBufferMemoryRequirements2KHR");
VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements2KHR, PFN_vkGetImageMemoryRequirements2KHR, "vkGetImageMemoryRequirements2KHR");
}
#endif
#if VMA_BIND_MEMORY2
if(m_UseKhrBindMemory2)
{
VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory2KHR, PFN_vkBindBufferMemory2KHR, "vkBindBufferMemory2KHR");
VMA_FETCH_DEVICE_FUNC(vkBindImageMemory2KHR, PFN_vkBindImageMemory2KHR, "vkBindImageMemory2KHR");
}
#endif // #if VMA_BIND_MEMORY2
#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
{
VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2KHR, "vkGetPhysicalDeviceMemoryProperties2");
}
else if(m_UseExtMemoryBudget)
{
VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2KHR, "vkGetPhysicalDeviceMemoryProperties2KHR");
}
#endif // VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
#if VMA_VULKAN_VERSION >= 1003000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 3, 0))
{
VMA_FETCH_DEVICE_FUNC(vkGetDeviceBufferMemoryRequirements, PFN_vkGetDeviceBufferMemoryRequirements, "vkGetDeviceBufferMemoryRequirements");
VMA_FETCH_DEVICE_FUNC(vkGetDeviceImageMemoryRequirements, PFN_vkGetDeviceImageMemoryRequirements, "vkGetDeviceImageMemoryRequirements");
}
#endif
#undef VMA_FETCH_DEVICE_FUNC
#undef VMA_FETCH_INSTANCE_FUNC
}
#endif // VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
void VmaAllocator_T::ValidateVulkanFunctions()
{
VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceProperties != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkAllocateMemory != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkFreeMemory != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkMapMemory != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkUnmapMemory != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkFlushMappedMemoryRanges != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkInvalidateMappedMemoryRanges != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkCreateBuffer != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkDestroyBuffer != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkCreateImage != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkDestroyImage != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkCmdCopyBuffer != VMA_NULL);
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0) || m_UseKhrDedicatedAllocation)
{
VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements2KHR != VMA_NULL);
}
#endif
#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0) || m_UseKhrBindMemory2)
{
VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory2KHR != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory2KHR != VMA_NULL);
}
#endif
#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
if(m_UseExtMemoryBudget || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
{
VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties2KHR != VMA_NULL);
}
#endif
#if VMA_VULKAN_VERSION >= 1003000
if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 3, 0))
{
VMA_ASSERT(m_VulkanFunctions.vkGetDeviceBufferMemoryRequirements != VMA_NULL);
VMA_ASSERT(m_VulkanFunctions.vkGetDeviceImageMemoryRequirements != VMA_NULL);
}
#endif
}
VkDeviceSize VmaAllocator_T::CalcPreferredBlockSize(uint32_t memTypeIndex)
{
const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
const bool isSmallHeap = heapSize <= VMA_SMALL_HEAP_MAX_SIZE;
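// Example, assuming the default VMA_SMALL_HEAP_MAX_SIZE (1 GiB) and
// VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE (256 MiB): a 256 MiB heap counts as small and
// gets 256 / 8 = 32 MiB blocks, while an 8 GiB heap uses the preferred 256 MiB.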
return VmaAlignUp(isSmallHeap ? (heapSize / 8) : m_PreferredLargeHeapBlockSize, (VkDeviceSize)32);
}
VkResult VmaAllocator_T::AllocateMemoryOfType(
VmaPool pool,
VkDeviceSize size,
VkDeviceSize alignment,
bool dedicatedPreferred,
VkBuffer dedicatedBuffer,
VkImage dedicatedImage,
VkFlags dedicatedBufferImageUsage,
const VmaAllocationCreateInfo& createInfo,
uint32_t memTypeIndex,
VmaSuballocationType suballocType,
VmaDedicatedAllocationList& dedicatedAllocations,
VmaBlockVector& blockVector,
size_t allocationCount,
VmaAllocation* pAllocations)
{
VMA_ASSERT(pAllocations != VMA_NULL);
VMA_DEBUG_LOG_FORMAT(" AllocateMemory: MemoryTypeIndex=%u, AllocationCount=%zu, Size=%llu", memTypeIndex, allocationCount, size);
VmaAllocationCreateInfo finalCreateInfo = createInfo;
VkResult res = CalcMemTypeParams(
finalCreateInfo,
memTypeIndex,
size,
allocationCount);
if(res != VK_SUCCESS)
return res;
if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
{
return AllocateDedicatedMemory(
pool,
size,
suballocType,
dedicatedAllocations,
memTypeIndex,
(finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
(finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
(finalCreateInfo.flags &
(VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0,
(finalCreateInfo.flags & VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT) != 0,
finalCreateInfo.pUserData,
finalCreateInfo.priority,
dedicatedBuffer,
dedicatedImage,
dedicatedBufferImageUsage,
allocationCount,
pAllocations,
blockVector.GetAllocationNextPtr());
}
else
{
const bool canAllocateDedicated =
(finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0 &&
(pool == VK_NULL_HANDLE || !blockVector.HasExplicitBlockSize());
if(canAllocateDedicated)
{
// Heuristics: Allocate dedicated memory if the requested size is greater than half of the preferred block size.
if(size > blockVector.GetPreferredBlockSize() / 2)
{
dedicatedPreferred = true;
}
// Protection against creating each allocation as dedicated when we reach or exceed heap size/budget,
// which can quickly deplete maxMemoryAllocationCount: Don't prefer dedicated allocations when above
// 3/4 of the maximum allocation count.
if(m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount < UINT32_MAX / 4 &&
m_DeviceMemoryCount.load() > m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount * 3 / 4)
{
dedicatedPreferred = false;
}
if(dedicatedPreferred)
{
res = AllocateDedicatedMemory(
pool,
size,
suballocType,
dedicatedAllocations,
memTypeIndex,
(finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
(finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
(finalCreateInfo.flags &
(VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0,
(finalCreateInfo.flags & VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT) != 0,
finalCreateInfo.pUserData,
finalCreateInfo.priority,
dedicatedBuffer,
dedicatedImage,
dedicatedBufferImageUsage,
allocationCount,
pAllocations,
blockVector.GetAllocationNextPtr());
if(res == VK_SUCCESS)
{
// Succeeded: AllocateDedicatedMemory already filled pAllocations, nothing more to do here.
VMA_DEBUG_LOG(" Allocated as DedicatedMemory");
return VK_SUCCESS;
}
}
}
res = blockVector.Allocate(
size,
alignment,
finalCreateInfo,
suballocType,
allocationCount,
pAllocations);
if(res == VK_SUCCESS)
return VK_SUCCESS;
// Try dedicated memory.
if(canAllocateDedicated && !dedicatedPreferred)
{
res = AllocateDedicatedMemory(
pool,
size,
suballocType,
dedicatedAllocations,
memTypeIndex,
(finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
(finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
(finalCreateInfo.flags &
(VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0,
(finalCreateInfo.flags & VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT) != 0,
finalCreateInfo.pUserData,
finalCreateInfo.priority,
dedicatedBuffer,
dedicatedImage,
dedicatedBufferImageUsage,
allocationCount,
pAllocations,
blockVector.GetAllocationNextPtr());
if(res == VK_SUCCESS)
{
// Succeeded: AllocateDedicatedMemory already filled pAllocations, nothing more to do here.
VMA_DEBUG_LOG(" Allocated as DedicatedMemory");
return VK_SUCCESS;
}
}
// Everything failed: Return error code.
VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
return res;
}
}
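// Summary of the strategy implemented above (sketch): 1) honor
// VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT outright; 2) otherwise prefer a
// dedicated allocation for requests larger than half the preferred block size,
// unless ~3/4 of maxMemoryAllocationCount is already in use; 3) try to
// sub-allocate from the block vector; 4) as a last resort, try a dedicated
// allocation anyway before reporting failure.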
VkResult VmaAllocator_T::AllocateDedicatedMemory(
VmaPool pool,
VkDeviceSize size,
VmaSuballocationType suballocType,
VmaDedicatedAllocationList& dedicatedAllocations,
uint32_t memTypeIndex,
bool map,
bool isUserDataString,
bool isMappingAllowed,
bool canAliasMemory,
void* pUserData,
float priority,
VkBuffer dedicatedBuffer,
VkImage dedicatedImage,
VkFlags dedicatedBufferImageUsage,
size_t allocationCount,
VmaAllocation* pAllocations,
const void* pNextChain)
{
VMA_ASSERT(allocationCount > 0 && pAllocations);
VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
allocInfo.memoryTypeIndex = memTypeIndex;
allocInfo.allocationSize = size;
allocInfo.pNext = pNextChain;
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
VkMemoryDedicatedAllocateInfoKHR dedicatedAllocInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
if(!canAliasMemory)
{
if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
{
if(dedicatedBuffer != VK_NULL_HANDLE)
{
VMA_ASSERT(dedicatedImage == VK_NULL_HANDLE);
dedicatedAllocInfo.buffer = dedicatedBuffer;
VmaPnextChainPushFront(&allocInfo, &dedicatedAllocInfo);
}
else if(dedicatedImage != VK_NULL_HANDLE)
{
dedicatedAllocInfo.image = dedicatedImage;
VmaPnextChainPushFront(&allocInfo, &dedicatedAllocInfo);
}
}
}
#endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
#if VMA_BUFFER_DEVICE_ADDRESS
VkMemoryAllocateFlagsInfoKHR allocFlagsInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO_KHR };
if(m_UseKhrBufferDeviceAddress)
{
bool canContainBufferWithDeviceAddress = true;
if(dedicatedBuffer != VK_NULL_HANDLE)
{
canContainBufferWithDeviceAddress = dedicatedBufferImageUsage == UINT32_MAX || // Usage flags unknown
(dedicatedBufferImageUsage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_EXT) != 0;
}
else if(dedicatedImage != VK_NULL_HANDLE)
{
canContainBufferWithDeviceAddress = false;
}
if(canContainBufferWithDeviceAddress)
{
allocFlagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR;
VmaPnextChainPushFront(&allocInfo, &allocFlagsInfo);
}
}
#endif // #if VMA_BUFFER_DEVICE_ADDRESS
#if VMA_MEMORY_PRIORITY
VkMemoryPriorityAllocateInfoEXT priorityInfo = { VK_STRUCTURE_TYPE_MEMORY_PRIORITY_ALLOCATE_INFO_EXT };
if(m_UseExtMemoryPriority)
{
VMA_ASSERT(priority >= 0.f && priority <= 1.f);
priorityInfo.priority = priority;
VmaPnextChainPushFront(&allocInfo, &priorityInfo);
}
#endif // #if VMA_MEMORY_PRIORITY
#if VMA_EXTERNAL_MEMORY
// Attach VkExportMemoryAllocateInfoKHR if necessary.
VkExportMemoryAllocateInfoKHR exportMemoryAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
exportMemoryAllocInfo.handleTypes = GetExternalMemoryHandleTypeFlags(memTypeIndex);
if(exportMemoryAllocInfo.handleTypes != 0)
{
VmaPnextChainPushFront(&allocInfo, &exportMemoryAllocInfo);
}
#endif // #if VMA_EXTERNAL_MEMORY
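// Because VmaPnextChainPushFront() prepends, the final chain (with all features
// enabled) reads: allocInfo -> exportMemoryAllocInfo -> priorityInfo ->
// allocFlagsInfo -> dedicatedAllocInfo -> pNextChain (caller-provided).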
size_t allocIndex;
VkResult res = VK_SUCCESS;
for(allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
{
res = AllocateDedicatedMemoryPage(
pool,
size,
suballocType,
memTypeIndex,
allocInfo,
map,
isUserDataString,
isMappingAllowed,
pUserData,
pAllocations + allocIndex);
if(res != VK_SUCCESS)
{
break;
}
}
if(res == VK_SUCCESS)
{
for (allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
{
dedicatedAllocations.Register(pAllocations[allocIndex]);
}
VMA_DEBUG_LOG_FORMAT(" Allocated DedicatedMemory Count=%zu, MemoryTypeIndex=#%u", allocationCount, memTypeIndex);
}
else
{
// Free all already created allocations.
while(allocIndex--)
{
VmaAllocation currAlloc = pAllocations[allocIndex];
VkDeviceMemory hMemory = currAlloc->GetMemory();
/*
There is no need to call this, because the Vulkan spec allows skipping
vkUnmapMemory before vkFreeMemory.
if(currAlloc->GetMappedData() != VMA_NULL)
{
(*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
}
*/
FreeVulkanMemory(memTypeIndex, currAlloc->GetSize(), hMemory);
m_Budget.RemoveAllocation(MemoryTypeIndexToHeapIndex(memTypeIndex), currAlloc->GetSize());
m_AllocationObjectAllocator.Free(currAlloc);
}
memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
}
return res;
}
VkResult VmaAllocator_T::AllocateDedicatedMemoryPage(
VmaPool pool,
VkDeviceSize size,
VmaSuballocationType suballocType,
uint32_t memTypeIndex,
const VkMemoryAllocateInfo& allocInfo,
bool map,
bool isUserDataString,
bool isMappingAllowed,
void* pUserData,
VmaAllocation* pAllocation)
{
VkDeviceMemory hMemory = VK_NULL_HANDLE;
VkResult res = AllocateVulkanMemory(&allocInfo, &hMemory);
if(res < 0)
{
VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
return res;
}
void* pMappedData = VMA_NULL;
if(map)
{
res = (*m_VulkanFunctions.vkMapMemory)(
m_hDevice,
hMemory,
0,
VK_WHOLE_SIZE,
0,
&pMappedData);
if(res < 0)
{
VMA_DEBUG_LOG(" vkMapMemory FAILED");
FreeVulkanMemory(memTypeIndex, size, hMemory);
return res;
}
}
*pAllocation = m_AllocationObjectAllocator.Allocate(isMappingAllowed);
(*pAllocation)->InitDedicatedAllocation(pool, memTypeIndex, hMemory, suballocType, pMappedData, size);
if (isUserDataString)
(*pAllocation)->SetName(this, (const char*)pUserData);
else
(*pAllocation)->SetUserData(this, pUserData);
m_Budget.AddAllocation(MemoryTypeIndexToHeapIndex(memTypeIndex), size);
if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
{
FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
}
return VK_SUCCESS;
}
void VmaAllocator_T::GetBufferMemoryRequirements(
VkBuffer hBuffer,
VkMemoryRequirements& memReq,
bool& requiresDedicatedAllocation,
bool& prefersDedicatedAllocation) const
{
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
{
VkBufferMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2_KHR };
memReqInfo.buffer = hBuffer;
VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
VmaPnextChainPushFront(&memReq2, &memDedicatedReq);
(*m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
memReq = memReq2.memoryRequirements;
requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
}
else
#endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
{
(*m_VulkanFunctions.vkGetBufferMemoryRequirements)(m_hDevice, hBuffer, &memReq);
requiresDedicatedAllocation = false;
prefersDedicatedAllocation = false;
}
}
void VmaAllocator_T::GetImageMemoryRequirements(
VkImage hImage,
VkMemoryRequirements& memReq,
bool& requiresDedicatedAllocation,
bool& prefersDedicatedAllocation) const
{
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
{
VkImageMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2_KHR };
memReqInfo.image = hImage;
VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
VmaPnextChainPushFront(&memReq2, &memDedicatedReq);
(*m_VulkanFunctions.vkGetImageMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
memReq = memReq2.memoryRequirements;
requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
}
else
#endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
{
(*m_VulkanFunctions.vkGetImageMemoryRequirements)(m_hDevice, hImage, &memReq);
requiresDedicatedAllocation = false;
prefersDedicatedAllocation = false;
}
}
VkResult VmaAllocator_T::FindMemoryTypeIndex(
uint32_t memoryTypeBits,
const VmaAllocationCreateInfo* pAllocationCreateInfo,
VkFlags bufImgUsage,
uint32_t* pMemoryTypeIndex) const
{
memoryTypeBits &= GetGlobalMemoryTypeBits();
if(pAllocationCreateInfo->memoryTypeBits != 0)
{
memoryTypeBits &= pAllocationCreateInfo->memoryTypeBits;
}
VkMemoryPropertyFlags requiredFlags = 0, preferredFlags = 0, notPreferredFlags = 0;
if(!FindMemoryPreferences(
IsIntegratedGpu(),
*pAllocationCreateInfo,
bufImgUsage,
requiredFlags, preferredFlags, notPreferredFlags))
{
return VK_ERROR_FEATURE_NOT_PRESENT;
}
*pMemoryTypeIndex = UINT32_MAX;
uint32_t minCost = UINT32_MAX;
for(uint32_t memTypeIndex = 0, memTypeBit = 1;
memTypeIndex < GetMemoryTypeCount();
++memTypeIndex, memTypeBit <<= 1)
{
// This memory type is acceptable according to memoryTypeBits bitmask.
if((memTypeBit & memoryTypeBits) != 0)
{
const VkMemoryPropertyFlags currFlags =
m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
// This memory type contains requiredFlags.
if((requiredFlags & ~currFlags) == 0)
{
// Calculate cost as the number of bits from preferredFlags not present in this memory type,
// plus the number of bits from notPreferredFlags that are present.
uint32_t currCost = VMA_COUNT_BITS_SET(preferredFlags & ~currFlags) +
VMA_COUNT_BITS_SET(currFlags & notPreferredFlags);
// Remember memory type with lowest cost.
if(currCost < minCost)
{
*pMemoryTypeIndex = memTypeIndex;
if(currCost == 0)
{
return VK_SUCCESS;
}
minCost = currCost;
}
}
}
}
return (*pMemoryTypeIndex != UINT32_MAX) ? VK_SUCCESS : VK_ERROR_FEATURE_NOT_PRESENT;
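// Worked example of the cost metric above (illustrative, not part of the algorithm):
// with preferredFlags = DEVICE_LOCAL | HOST_VISIBLE and notPreferredFlags = HOST_CACHED,
// a memory type offering DEVICE_LOCAL | HOST_CACHED costs 1 (HOST_VISIBLE missing)
// + 1 (HOST_CACHED present) = 2, while a type offering DEVICE_LOCAL | HOST_VISIBLE
// costs 0 and is returned immediately.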
}
VkResult VmaAllocator_T::CalcMemTypeParams(
VmaAllocationCreateInfo& inoutCreateInfo,
uint32_t memTypeIndex,
VkDeviceSize size,
size_t allocationCount)
{
// If memory type is not HOST_VISIBLE, disable MAPPED.
if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
(m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
{
inoutCreateInfo.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
}
if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
(inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT) != 0)
{
const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
VmaBudget heapBudget = {};
GetHeapBudgets(&heapBudget, heapIndex, 1);
if(heapBudget.usage + size * allocationCount > heapBudget.budget)
{
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
}
return VK_SUCCESS;
}
VkResult VmaAllocator_T::CalcAllocationParams(
VmaAllocationCreateInfo& inoutCreateInfo,
bool dedicatedRequired,
bool dedicatedPreferred)
{
VMA_ASSERT((inoutCreateInfo.flags &
(VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) !=
(VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT) &&
"Specifying both flags VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT and VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT is incorrect.");
VMA_ASSERT((((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT) == 0 ||
(inoutCreateInfo.flags & (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0)) &&
"Specifying VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT requires also VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.");
if(inoutCreateInfo.usage == VMA_MEMORY_USAGE_AUTO || inoutCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE || inoutCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_HOST)
{
if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0)
{
VMA_ASSERT((inoutCreateInfo.flags & (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0 &&
"When using VMA_ALLOCATION_CREATE_MAPPED_BIT and usage = VMA_MEMORY_USAGE_AUTO*, you must also specify VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.");
}
}
// If memory is lazily allocated, it should always be dedicated.
if(dedicatedRequired ||
inoutCreateInfo.usage == VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED)
{
inoutCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
}
if(inoutCreateInfo.pool != VK_NULL_HANDLE)
{
if(inoutCreateInfo.pool->m_BlockVector.HasExplicitBlockSize() &&
(inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
{
VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT while current custom pool doesn't support dedicated allocations.");
return VK_ERROR_FEATURE_NOT_PRESENT;
}
inoutCreateInfo.priority = inoutCreateInfo.pool->m_BlockVector.GetPriority();
}
if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
(inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
{
VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT together with VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT makes no sense.");
return VK_ERROR_FEATURE_NOT_PRESENT;
}
if(VMA_DEBUG_ALWAYS_DEDICATED_MEMORY &&
(inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
{
inoutCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
}
// Non-auto USAGE values imply HOST_ACCESS flags, and so does VMA_MEMORY_USAGE_UNKNOWN
// because it is used with custom pools. Which specific flag is used doesn't matter:
// the flags change behavior only when combined with VMA_MEMORY_USAGE_AUTO*;
// otherwise they just protect from an assert on mapping.
if(inoutCreateInfo.usage != VMA_MEMORY_USAGE_AUTO &&
inoutCreateInfo.usage != VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE &&
inoutCreateInfo.usage != VMA_MEMORY_USAGE_AUTO_PREFER_HOST)
{
if((inoutCreateInfo.flags & (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) == 0)
{
inoutCreateInfo.flags |= VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT;
}
}
return VK_SUCCESS;
}
VkResult VmaAllocator_T::AllocateMemory(
const VkMemoryRequirements& vkMemReq,
bool requiresDedicatedAllocation,
bool prefersDedicatedAllocation,
VkBuffer dedicatedBuffer,
VkImage dedicatedImage,
VkFlags dedicatedBufferImageUsage,
const VmaAllocationCreateInfo& createInfo,
VmaSuballocationType suballocType,
size_t allocationCount,
VmaAllocation* pAllocations)
{
memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
VMA_ASSERT(VmaIsPow2(vkMemReq.alignment));
if(vkMemReq.size == 0)
{
return VK_ERROR_INITIALIZATION_FAILED;
}
VmaAllocationCreateInfo createInfoFinal = createInfo;
VkResult res = CalcAllocationParams(createInfoFinal, requiresDedicatedAllocation, prefersDedicatedAllocation);
if(res != VK_SUCCESS)
return res;
if(createInfoFinal.pool != VK_NULL_HANDLE)
{
VmaBlockVector& blockVector = createInfoFinal.pool->m_BlockVector;
return AllocateMemoryOfType(
createInfoFinal.pool,
vkMemReq.size,
vkMemReq.alignment,
prefersDedicatedAllocation,
dedicatedBuffer,
dedicatedImage,
dedicatedBufferImageUsage,
createInfoFinal,
blockVector.GetMemoryTypeIndex(),
suballocType,
createInfoFinal.pool->m_DedicatedAllocations,
blockVector,
allocationCount,
pAllocations);
}
else
{
// Bit mask of Vulkan memory types acceptable for this allocation.
uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
uint32_t memTypeIndex = UINT32_MAX;
res = FindMemoryTypeIndex(memoryTypeBits, &createInfoFinal, dedicatedBufferImageUsage, &memTypeIndex);
// No single memory type matches the requirements; res is VK_ERROR_FEATURE_NOT_PRESENT.
if(res != VK_SUCCESS)
return res;
do
{
VmaBlockVector* blockVector = m_pBlockVectors[memTypeIndex];
VMA_ASSERT(blockVector && "Trying to use unsupported memory type!");
res = AllocateMemoryOfType(
VK_NULL_HANDLE,
vkMemReq.size,
vkMemReq.alignment,
requiresDedicatedAllocation || prefersDedicatedAllocation,
dedicatedBuffer,
dedicatedImage,
dedicatedBufferImageUsage,
createInfoFinal,
memTypeIndex,
suballocType,
m_DedicatedAllocations[memTypeIndex],
*blockVector,
allocationCount,
pAllocations);
// Allocation succeeded
if(res == VK_SUCCESS)
return VK_SUCCESS;
// Remove old memTypeIndex from list of possibilities.
memoryTypeBits &= ~(1u << memTypeIndex);
// Find alternative memTypeIndex.
res = FindMemoryTypeIndex(memoryTypeBits, &createInfoFinal, dedicatedBufferImageUsage, &memTypeIndex);
} while(res == VK_SUCCESS);
// No other matching memory type index could be found.
// Not returning res, which is VK_ERROR_FEATURE_NOT_PRESENT, because we already failed to allocate once.
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
}
void VmaAllocator_T::FreeMemory(
size_t allocationCount,
const VmaAllocation* pAllocations)
{
VMA_ASSERT(pAllocations);
for(size_t allocIndex = allocationCount; allocIndex--; )
{
VmaAllocation allocation = pAllocations[allocIndex];
if(allocation != VK_NULL_HANDLE)
{
if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
{
FillAllocation(allocation, VMA_ALLOCATION_FILL_PATTERN_DESTROYED);
}
allocation->FreeName(this);
switch(allocation->GetType())
{
case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
{
VmaBlockVector* pBlockVector = VMA_NULL;
VmaPool hPool = allocation->GetParentPool();
if(hPool != VK_NULL_HANDLE)
{
pBlockVector = &hPool->m_BlockVector;
}
else
{
const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
pBlockVector = m_pBlockVectors[memTypeIndex];
VMA_ASSERT(pBlockVector && "Trying to free memory of unsupported type!");
}
pBlockVector->Free(allocation);
}
break;
case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
FreeDedicatedMemory(allocation);
break;
default:
VMA_ASSERT(0);
}
}
}
}
void VmaAllocator_T::CalculateStatistics(VmaTotalStatistics* pStats)
{
// Initialize.
VmaClearDetailedStatistics(pStats->total);
for(uint32_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
VmaClearDetailedStatistics(pStats->memoryType[i]);
for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
VmaClearDetailedStatistics(pStats->memoryHeap[i]);
// Process default pools.
for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
{
VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
if (pBlockVector != VMA_NULL)
pBlockVector->AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
}
// Process custom pools.
{
VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
{
VmaBlockVector& blockVector = pool->m_BlockVector;
const uint32_t memTypeIndex = blockVector.GetMemoryTypeIndex();
blockVector.AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
pool->m_DedicatedAllocations.AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
}
}
// Process dedicated allocations.
for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
{
m_DedicatedAllocations[memTypeIndex].AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
}
// Sum from memory types to memory heaps.
for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
{
const uint32_t memHeapIndex = m_MemProps.memoryTypes[memTypeIndex].heapIndex;
VmaAddDetailedStatistics(pStats->memoryHeap[memHeapIndex], pStats->memoryType[memTypeIndex]);
}
// Sum from memory heaps to total.
for(uint32_t memHeapIndex = 0; memHeapIndex < GetMemoryHeapCount(); ++memHeapIndex)
VmaAddDetailedStatistics(pStats->total, pStats->memoryHeap[memHeapIndex]);
VMA_ASSERT(pStats->total.statistics.allocationCount == 0 ||
pStats->total.allocationSizeMax >= pStats->total.allocationSizeMin);
VMA_ASSERT(pStats->total.unusedRangeCount == 0 ||
pStats->total.unusedRangeSizeMax >= pStats->total.unusedRangeSizeMin);
}
void VmaAllocator_T::GetHeapBudgets(VmaBudget* outBudgets, uint32_t firstHeap, uint32_t heapCount)
{
#if VMA_MEMORY_BUDGET
if(m_UseExtMemoryBudget)
{
if(m_Budget.m_OperationsSinceBudgetFetch < 30)
{
VmaMutexLockRead lockRead(m_Budget.m_BudgetMutex, m_UseMutex);
for(uint32_t i = 0; i < heapCount; ++i, ++outBudgets)
{
const uint32_t heapIndex = firstHeap + i;
outBudgets->statistics.blockCount = m_Budget.m_BlockCount[heapIndex];
outBudgets->statistics.allocationCount = m_Budget.m_AllocationCount[heapIndex];
outBudgets->statistics.blockBytes = m_Budget.m_BlockBytes[heapIndex];
outBudgets->statistics.allocationBytes = m_Budget.m_AllocationBytes[heapIndex];
if(m_Budget.m_VulkanUsage[heapIndex] + outBudgets->statistics.blockBytes > m_Budget.m_BlockBytesAtBudgetFetch[heapIndex])
{
outBudgets->usage = m_Budget.m_VulkanUsage[heapIndex] +
outBudgets->statistics.blockBytes - m_Budget.m_BlockBytesAtBudgetFetch[heapIndex];
}
else
{
outBudgets->usage = 0;
}
// Have to take MIN with heap size because explicit HeapSizeLimit is included in it.
outBudgets->budget = VMA_MIN(
m_Budget.m_VulkanBudget[heapIndex], m_MemProps.memoryHeaps[heapIndex].size);
}
}
else
{
UpdateVulkanBudget(); // Outside of mutex lock
GetHeapBudgets(outBudgets, firstHeap, heapCount); // Recursion
}
}
else
#endif
{
for(uint32_t i = 0; i < heapCount; ++i, ++outBudgets)
{
const uint32_t heapIndex = firstHeap + i;
outBudgets->statistics.blockCount = m_Budget.m_BlockCount[heapIndex];
outBudgets->statistics.allocationCount = m_Budget.m_AllocationCount[heapIndex];
outBudgets->statistics.blockBytes = m_Budget.m_BlockBytes[heapIndex];
outBudgets->statistics.allocationBytes = m_Budget.m_AllocationBytes[heapIndex];
outBudgets->usage = outBudgets->statistics.blockBytes;
outBudgets->budget = m_MemProps.memoryHeaps[heapIndex].size * 8 / 10; // 80% heuristic.
}
}
}
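// Example of the usage estimation above (illustrative): if the driver reported usage of
// 100 MB at the last budget fetch while VMA's own blocks totaled 80 MB, and the blocks
// have since grown to 90 MB, the estimate becomes 100 + (90 - 80) = 110 MB until the
// next UpdateVulkanBudget() refreshes the driver-reported numbers.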
void VmaAllocator_T::GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo)
{
pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
pAllocationInfo->deviceMemory = hAllocation->GetMemory();
pAllocationInfo->offset = hAllocation->GetOffset();
pAllocationInfo->size = hAllocation->GetSize();
pAllocationInfo->pMappedData = hAllocation->GetMappedData();
pAllocationInfo->pUserData = hAllocation->GetUserData();
pAllocationInfo->pName = hAllocation->GetName();
}
VkResult VmaAllocator_T::CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool)
{
VMA_DEBUG_LOG_FORMAT(" CreatePool: MemoryTypeIndex=%u, flags=%u", pCreateInfo->memoryTypeIndex, pCreateInfo->flags);
VmaPoolCreateInfo newCreateInfo = *pCreateInfo;
// Protection against an uninitialized new structure member. If garbage data is left there, dereferencing this pointer would crash.
if(pCreateInfo->pMemoryAllocateNext)
{
VMA_ASSERT(((const VkBaseInStructure*)pCreateInfo->pMemoryAllocateNext)->sType != 0);
}
if(newCreateInfo.maxBlockCount == 0)
{
newCreateInfo.maxBlockCount = SIZE_MAX;
}
if(newCreateInfo.minBlockCount > newCreateInfo.maxBlockCount)
{
return VK_ERROR_INITIALIZATION_FAILED;
}
// Memory type index out of range or forbidden.
if(pCreateInfo->memoryTypeIndex >= GetMemoryTypeCount() ||
((1u << pCreateInfo->memoryTypeIndex) & m_GlobalMemoryTypeBits) == 0)
{
return VK_ERROR_FEATURE_NOT_PRESENT;
}
if(newCreateInfo.minAllocationAlignment > 0)
{
VMA_ASSERT(VmaIsPow2(newCreateInfo.minAllocationAlignment));
}
const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(newCreateInfo.memoryTypeIndex);
*pPool = vma_new(this, VmaPool_T)(this, newCreateInfo, preferredBlockSize);
VkResult res = (*pPool)->m_BlockVector.CreateMinBlocks();
if(res != VK_SUCCESS)
{
vma_delete(this, *pPool);
*pPool = VMA_NULL;
return res;
}
// Add to m_Pools.
{
VmaMutexLockWrite lock(m_PoolsMutex, m_UseMutex);
(*pPool)->SetId(m_NextPoolId++);
m_Pools.PushBack(*pPool);
}
return VK_SUCCESS;
}
void VmaAllocator_T::DestroyPool(VmaPool pool)
{
// Remove from m_Pools.
{
VmaMutexLockWrite lock(m_PoolsMutex, m_UseMutex);
m_Pools.Remove(pool);
}
vma_delete(this, pool);
}
void VmaAllocator_T::GetPoolStatistics(VmaPool pool, VmaStatistics* pPoolStats)
{
VmaClearStatistics(*pPoolStats);
pool->m_BlockVector.AddStatistics(*pPoolStats);
pool->m_DedicatedAllocations.AddStatistics(*pPoolStats);
}
void VmaAllocator_T::CalculatePoolStatistics(VmaPool pool, VmaDetailedStatistics* pPoolStats)
{
VmaClearDetailedStatistics(*pPoolStats);
pool->m_BlockVector.AddDetailedStatistics(*pPoolStats);
pool->m_DedicatedAllocations.AddDetailedStatistics(*pPoolStats);
}
void VmaAllocator_T::SetCurrentFrameIndex(uint32_t frameIndex)
{
m_CurrentFrameIndex.store(frameIndex);
#if VMA_MEMORY_BUDGET
if(m_UseExtMemoryBudget)
{
UpdateVulkanBudget();
}
#endif // #if VMA_MEMORY_BUDGET
}
VkResult VmaAllocator_T::CheckPoolCorruption(VmaPool hPool)
{
return hPool->m_BlockVector.CheckCorruption();
}
VkResult VmaAllocator_T::CheckCorruption(uint32_t memoryTypeBits)
{
VkResult finalRes = VK_ERROR_FEATURE_NOT_PRESENT;
// Process default pools.
for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
{
VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
if(pBlockVector != VMA_NULL)
{
VkResult localRes = pBlockVector->CheckCorruption();
switch(localRes)
{
case VK_ERROR_FEATURE_NOT_PRESENT:
break;
case VK_SUCCESS:
finalRes = VK_SUCCESS;
break;
default:
return localRes;
}
}
}
// Process custom pools.
{
VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
{
if(((1u << pool->m_BlockVector.GetMemoryTypeIndex()) & memoryTypeBits) != 0)
{
VkResult localRes = pool->m_BlockVector.CheckCorruption();
switch(localRes)
{
case VK_ERROR_FEATURE_NOT_PRESENT:
break;
case VK_SUCCESS:
finalRes = VK_SUCCESS;
break;
default:
return localRes;
}
}
}
}
return finalRes;
}
VkResult VmaAllocator_T::AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory)
{
AtomicTransactionalIncrement<VMA_ATOMIC_UINT32> deviceMemoryCountIncrement;
const uint64_t prevDeviceMemoryCount = deviceMemoryCountIncrement.Increment(&m_DeviceMemoryCount);
#if VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT
if(prevDeviceMemoryCount >= m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount)
{
return VK_ERROR_TOO_MANY_OBJECTS;
}
#endif
const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(pAllocateInfo->memoryTypeIndex);
// HeapSizeLimit is in effect for this heap.
if((m_HeapSizeLimitMask & (1u << heapIndex)) != 0)
{
const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
VkDeviceSize blockBytes = m_Budget.m_BlockBytes[heapIndex];
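// Lock-free reservation: atomically bump m_BlockBytes[heapIndex] by the requested size,
// retrying if another thread modified it between the load and the compare-exchange.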
for(;;)
{
const VkDeviceSize blockBytesAfterAllocation = blockBytes + pAllocateInfo->allocationSize;
if(blockBytesAfterAllocation > heapSize)
{
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
if(m_Budget.m_BlockBytes[heapIndex].compare_exchange_strong(blockBytes, blockBytesAfterAllocation))
{
break;
}
}
}
else
{
m_Budget.m_BlockBytes[heapIndex] += pAllocateInfo->allocationSize;
}
++m_Budget.m_BlockCount[heapIndex];
// VULKAN CALL vkAllocateMemory.
VkResult res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
if(res == VK_SUCCESS)
{
#if VMA_MEMORY_BUDGET
++m_Budget.m_OperationsSinceBudgetFetch;
#endif
// Informative callback.
if(m_DeviceMemoryCallbacks.pfnAllocate != VMA_NULL)
{
(*m_DeviceMemoryCallbacks.pfnAllocate)(this, pAllocateInfo->memoryTypeIndex, *pMemory, pAllocateInfo->allocationSize, m_DeviceMemoryCallbacks.pUserData);
}
deviceMemoryCountIncrement.Commit();
}
else
{
--m_Budget.m_BlockCount[heapIndex];
m_Budget.m_BlockBytes[heapIndex] -= pAllocateInfo->allocationSize;
}
return res;
}
void VmaAllocator_T::FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory)
{
// Informative callback.
if(m_DeviceMemoryCallbacks.pfnFree != VMA_NULL)
{
(*m_DeviceMemoryCallbacks.pfnFree)(this, memoryType, hMemory, size, m_DeviceMemoryCallbacks.pUserData);
}
// VULKAN CALL vkFreeMemory.
(*m_VulkanFunctions.vkFreeMemory)(m_hDevice, hMemory, GetAllocationCallbacks());
const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memoryType);
--m_Budget.m_BlockCount[heapIndex];
m_Budget.m_BlockBytes[heapIndex] -= size;
--m_DeviceMemoryCount;
}
VkResult VmaAllocator_T::BindVulkanBuffer(
VkDeviceMemory memory,
VkDeviceSize memoryOffset,
VkBuffer buffer,
const void* pNext)
{
if(pNext != VMA_NULL)
{
#if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
if((m_UseKhrBindMemory2 || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0)) &&
m_VulkanFunctions.vkBindBufferMemory2KHR != VMA_NULL)
{
VkBindBufferMemoryInfoKHR bindBufferMemoryInfo = { VK_STRUCTURE_TYPE_BIND_BUFFER_MEMORY_INFO_KHR };
bindBufferMemoryInfo.pNext = pNext;
bindBufferMemoryInfo.buffer = buffer;
bindBufferMemoryInfo.memory = memory;
bindBufferMemoryInfo.memoryOffset = memoryOffset;
return (*m_VulkanFunctions.vkBindBufferMemory2KHR)(m_hDevice, 1, &bindBufferMemoryInfo);
}
else
#endif // #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
{
return VK_ERROR_EXTENSION_NOT_PRESENT;
}
}
else
{
return (*m_VulkanFunctions.vkBindBufferMemory)(m_hDevice, buffer, memory, memoryOffset);
}
}
VkResult VmaAllocator_T::BindVulkanImage(
VkDeviceMemory memory,
VkDeviceSize memoryOffset,
VkImage image,
const void* pNext)
{
if(pNext != VMA_NULL)
{
#if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
if((m_UseKhrBindMemory2 || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0)) &&
m_VulkanFunctions.vkBindImageMemory2KHR != VMA_NULL)
{
VkBindImageMemoryInfoKHR bindImageMemoryInfo = { VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO_KHR };
bindImageMemoryInfo.pNext = pNext;
bindImageMemoryInfo.image = image;
bindImageMemoryInfo.memory = memory;
bindImageMemoryInfo.memoryOffset = memoryOffset;
return (*m_VulkanFunctions.vkBindImageMemory2KHR)(m_hDevice, 1, &bindImageMemoryInfo);
}
else
#endif // #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
{
return VK_ERROR_EXTENSION_NOT_PRESENT;
}
}
else
{
return (*m_VulkanFunctions.vkBindImageMemory)(m_hDevice, image, memory, memoryOffset);
}
}
VkResult VmaAllocator_T::Map(VmaAllocation hAllocation, void** ppData)
{
switch(hAllocation->GetType())
{
case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
{
VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
char *pBytes = VMA_NULL;
VkResult res = pBlock->Map(this, 1, (void**)&pBytes);
if(res == VK_SUCCESS)
{
*ppData = pBytes + (ptrdiff_t)hAllocation->GetOffset();
hAllocation->BlockAllocMap();
}
return res;
}
case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
return hAllocation->DedicatedAllocMap(this, ppData);
default:
VMA_ASSERT(0);
return VK_ERROR_MEMORY_MAP_FAILED;
}
}
void VmaAllocator_T::Unmap(VmaAllocation hAllocation)
{
switch(hAllocation->GetType())
{
case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
{
VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
hAllocation->BlockAllocUnmap();
pBlock->Unmap(this, 1);
}
break;
case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
hAllocation->DedicatedAllocUnmap(this);
break;
default:
VMA_ASSERT(0);
}
}
VkResult VmaAllocator_T::BindBufferMemory(
VmaAllocation hAllocation,
VkDeviceSize allocationLocalOffset,
VkBuffer hBuffer,
const void* pNext)
{
VkResult res = VK_ERROR_UNKNOWN;
switch(hAllocation->GetType())
{
case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
res = BindVulkanBuffer(hAllocation->GetMemory(), allocationLocalOffset, hBuffer, pNext);
break;
case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
{
VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
VMA_ASSERT(pBlock && "Binding buffer to allocation that doesn't belong to any block.");
res = pBlock->BindBufferMemory(this, hAllocation, allocationLocalOffset, hBuffer, pNext);
break;
}
default:
VMA_ASSERT(0);
}
return res;
}
VkResult VmaAllocator_T::BindImageMemory(
VmaAllocation hAllocation,
VkDeviceSize allocationLocalOffset,
VkImage hImage,
const void* pNext)
{
VkResult res = VK_ERROR_UNKNOWN;
switch(hAllocation->GetType())
{
case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
res = BindVulkanImage(hAllocation->GetMemory(), allocationLocalOffset, hImage, pNext);
break;
case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
{
VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
VMA_ASSERT(pBlock && "Binding image to allocation that doesn't belong to any block.");
res = pBlock->BindImageMemory(this, hAllocation, allocationLocalOffset, hImage, pNext);
break;
}
default:
VMA_ASSERT(0);
}
return res;
}
VkResult VmaAllocator_T::FlushOrInvalidateAllocation(
VmaAllocation hAllocation,
VkDeviceSize offset, VkDeviceSize size,
VMA_CACHE_OPERATION op)
{
VkResult res = VK_SUCCESS;
VkMappedMemoryRange memRange = {};
if(GetFlushOrInvalidateRange(hAllocation, offset, size, memRange))
{
switch(op)
{
case VMA_CACHE_FLUSH:
res = (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, 1, &memRange);
break;
case VMA_CACHE_INVALIDATE:
res = (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, 1, &memRange);
break;
default:
VMA_ASSERT(0);
}
}
// else: Just ignore this call.
return res;
}
VkResult VmaAllocator_T::FlushOrInvalidateAllocations(
uint32_t allocationCount,
const VmaAllocation* allocations,
const VkDeviceSize* offsets, const VkDeviceSize* sizes,
VMA_CACHE_OPERATION op)
{
typedef VmaStlAllocator<VkMappedMemoryRange> RangeAllocator;
typedef VmaSmallVector<VkMappedMemoryRange, RangeAllocator, 16> RangeVector;
RangeVector ranges = RangeVector(RangeAllocator(GetAllocationCallbacks()));
for(uint32_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
{
const VmaAllocation alloc = allocations[allocIndex];
const VkDeviceSize offset = offsets != VMA_NULL ? offsets[allocIndex] : 0;
const VkDeviceSize size = sizes != VMA_NULL ? sizes[allocIndex] : VK_WHOLE_SIZE;
VkMappedMemoryRange newRange;
if(GetFlushOrInvalidateRange(alloc, offset, size, newRange))
{
ranges.push_back(newRange);
}
}
VkResult res = VK_SUCCESS;
if(!ranges.empty())
{
switch(op)
{
case VMA_CACHE_FLUSH:
res = (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, (uint32_t)ranges.size(), ranges.data());
break;
case VMA_CACHE_INVALIDATE:
res = (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, (uint32_t)ranges.size(), ranges.data());
break;
default:
VMA_ASSERT(0);
}
}
// else: Just ignore this call.
return res;
}
void VmaAllocator_T::FreeDedicatedMemory(const VmaAllocation allocation)
{
VMA_ASSERT(allocation && allocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
VmaPool parentPool = allocation->GetParentPool();
if(parentPool == VK_NULL_HANDLE)
{
// Default pool
m_DedicatedAllocations[memTypeIndex].Unregister(allocation);
}
else
{
// Custom pool
parentPool->m_DedicatedAllocations.Unregister(allocation);
}
VkDeviceMemory hMemory = allocation->GetMemory();
/*
There is no need to call this, because the Vulkan spec allows skipping vkUnmapMemory
before vkFreeMemory.
if(allocation->GetMappedData() != VMA_NULL)
{
(*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
}
*/
FreeVulkanMemory(memTypeIndex, allocation->GetSize(), hMemory);
m_Budget.RemoveAllocation(MemoryTypeIndexToHeapIndex(allocation->GetMemoryTypeIndex()), allocation->GetSize());
m_AllocationObjectAllocator.Free(allocation);
VMA_DEBUG_LOG_FORMAT(" Freed DedicatedMemory MemoryTypeIndex=%u", memTypeIndex);
}
uint32_t VmaAllocator_T::CalculateGpuDefragmentationMemoryTypeBits() const
{
VkBufferCreateInfo dummyBufCreateInfo;
VmaFillGpuDefragmentationBufferCreateInfo(dummyBufCreateInfo);
uint32_t memoryTypeBits = 0;
// Create buffer.
VkBuffer buf = VK_NULL_HANDLE;
VkResult res = (*GetVulkanFunctions().vkCreateBuffer)(
m_hDevice, &dummyBufCreateInfo, GetAllocationCallbacks(), &buf);
if(res == VK_SUCCESS)
{
// Query for supported memory types.
VkMemoryRequirements memReq;
(*GetVulkanFunctions().vkGetBufferMemoryRequirements)(m_hDevice, buf, &memReq);
memoryTypeBits = memReq.memoryTypeBits;
// Destroy buffer.
(*GetVulkanFunctions().vkDestroyBuffer)(m_hDevice, buf, GetAllocationCallbacks());
}
return memoryTypeBits;
}
uint32_t VmaAllocator_T::CalculateGlobalMemoryTypeBits() const
{
// Make sure memory information is already fetched.
VMA_ASSERT(GetMemoryTypeCount() > 0);
uint32_t memoryTypeBits = UINT32_MAX;
if(!m_UseAmdDeviceCoherentMemory)
{
// Exclude memory types that have VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD.
for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
{
if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY) != 0)
{
memoryTypeBits &= ~(1u << memTypeIndex);
}
}
}
return memoryTypeBits;
}
bool VmaAllocator_T::GetFlushOrInvalidateRange(
VmaAllocation allocation,
VkDeviceSize offset, VkDeviceSize size,
VkMappedMemoryRange& outRange) const
{
const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
if(size > 0 && IsMemoryTypeNonCoherent(memTypeIndex))
{
const VkDeviceSize nonCoherentAtomSize = m_PhysicalDeviceProperties.limits.nonCoherentAtomSize;
const VkDeviceSize allocationSize = allocation->GetSize();
VMA_ASSERT(offset <= allocationSize);
outRange.sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE;
outRange.pNext = VMA_NULL;
outRange.memory = allocation->GetMemory();
switch(allocation->GetType())
{
case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
outRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
if(size == VK_WHOLE_SIZE)
{
outRange.size = allocationSize - outRange.offset;
}
else
{
VMA_ASSERT(offset + size <= allocationSize);
outRange.size = VMA_MIN(
VmaAlignUp(size + (offset - outRange.offset), nonCoherentAtomSize),
allocationSize - outRange.offset);
}
break;
case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
{
// 1. Still within this allocation.
outRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
if(size == VK_WHOLE_SIZE)
{
size = allocationSize - offset;
}
else
{
VMA_ASSERT(offset + size <= allocationSize);
}
outRange.size = VmaAlignUp(size + (offset - outRange.offset), nonCoherentAtomSize);
// 2. Adjust to whole block.
const VkDeviceSize allocationOffset = allocation->GetOffset();
VMA_ASSERT(allocationOffset % nonCoherentAtomSize == 0);
const VkDeviceSize blockSize = allocation->GetBlock()->m_pMetadata->GetSize();
outRange.offset += allocationOffset;
outRange.size = VMA_MIN(outRange.size, blockSize - outRange.offset);
break;
}
default:
VMA_ASSERT(0);
}
return true;
}
return false;
}
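// Worked example of the alignment above (illustrative): with nonCoherentAtomSize = 64,
// offset = 100 and size = 200, the range becomes offset = 64 (aligned down) and
// size = VmaAlignUp(200 + 36, 64) = 256, then clamped so it never extends past the
// allocation (dedicated case) or the block (block case).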
#if VMA_MEMORY_BUDGET
void VmaAllocator_T::UpdateVulkanBudget()
{
VMA_ASSERT(m_UseExtMemoryBudget);
VkPhysicalDeviceMemoryProperties2KHR memProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PROPERTIES_2_KHR };
VkPhysicalDeviceMemoryBudgetPropertiesEXT budgetProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_BUDGET_PROPERTIES_EXT };
VmaPnextChainPushFront(&memProps, &budgetProps);
GetVulkanFunctions().vkGetPhysicalDeviceMemoryProperties2KHR(m_PhysicalDevice, &memProps);
{
VmaMutexLockWrite lockWrite(m_Budget.m_BudgetMutex, m_UseMutex);
for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
{
m_Budget.m_VulkanUsage[heapIndex] = budgetProps.heapUsage[heapIndex];
m_Budget.m_VulkanBudget[heapIndex] = budgetProps.heapBudget[heapIndex];
m_Budget.m_BlockBytesAtBudgetFetch[heapIndex] = m_Budget.m_BlockBytes[heapIndex].load();
// Some buggy drivers return the budget incorrectly, e.g. 0 or much larger than the heap size.
if(m_Budget.m_VulkanBudget[heapIndex] == 0)
{
m_Budget.m_VulkanBudget[heapIndex] = m_MemProps.memoryHeaps[heapIndex].size * 8 / 10; // 80% heuristic.
}
else if(m_Budget.m_VulkanBudget[heapIndex] > m_MemProps.memoryHeaps[heapIndex].size)
{
m_Budget.m_VulkanBudget[heapIndex] = m_MemProps.memoryHeaps[heapIndex].size;
}
if(m_Budget.m_VulkanUsage[heapIndex] == 0 && m_Budget.m_BlockBytesAtBudgetFetch[heapIndex] > 0)
{
m_Budget.m_VulkanUsage[heapIndex] = m_Budget.m_BlockBytesAtBudgetFetch[heapIndex];
}
}
m_Budget.m_OperationsSinceBudgetFetch = 0;
}
}
#endif // VMA_MEMORY_BUDGET
void VmaAllocator_T::FillAllocation(const VmaAllocation hAllocation, uint8_t pattern)
{
if(VMA_DEBUG_INITIALIZE_ALLOCATIONS &&
hAllocation->IsMappingAllowed() &&
(m_MemProps.memoryTypes[hAllocation->GetMemoryTypeIndex()].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
{
void* pData = VMA_NULL;
VkResult res = Map(hAllocation, &pData);
if(res == VK_SUCCESS)
{
memset(pData, (int)pattern, (size_t)hAllocation->GetSize());
FlushOrInvalidateAllocation(hAllocation, 0, VK_WHOLE_SIZE, VMA_CACHE_FLUSH);
Unmap(hAllocation);
}
else
{
VMA_ASSERT(0 && "VMA_DEBUG_INITIALIZE_ALLOCATIONS is enabled, but couldn't map memory to fill allocation.");
}
}
}
uint32_t VmaAllocator_T::GetGpuDefragmentationMemoryTypeBits()
{
uint32_t memoryTypeBits = m_GpuDefragmentationMemoryTypeBits.load();
if(memoryTypeBits == UINT32_MAX)
{
memoryTypeBits = CalculateGpuDefragmentationMemoryTypeBits();
m_GpuDefragmentationMemoryTypeBits.store(memoryTypeBits);
}
return memoryTypeBits;
}
#if VMA_STATS_STRING_ENABLED
void VmaAllocator_T::PrintDetailedMap(VmaJsonWriter& json)
{
json.WriteString("DefaultPools");
json.BeginObject();
{
for (uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
{
VmaBlockVector* pBlockVector = m_pBlockVectors[memTypeIndex];
VmaDedicatedAllocationList& dedicatedAllocList = m_DedicatedAllocations[memTypeIndex];
if (pBlockVector != VMA_NULL)
{
json.BeginString("Type ");
json.ContinueString(memTypeIndex);
json.EndString();
json.BeginObject();
{
json.WriteString("PreferredBlockSize");
json.WriteNumber(pBlockVector->GetPreferredBlockSize());
json.WriteString("Blocks");
pBlockVector->PrintDetailedMap(json);
json.WriteString("DedicatedAllocations");
dedicatedAllocList.BuildStatsString(json);
}
json.EndObject();
}
}
}
json.EndObject();
json.WriteString("CustomPools");
json.BeginObject();
{
VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
if (!m_Pools.IsEmpty())
{
for (uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
{
bool displayType = true;
size_t index = 0;
for (VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
{
VmaBlockVector& blockVector = pool->m_BlockVector;
if (blockVector.GetMemoryTypeIndex() == memTypeIndex)
{
if (displayType)
{
json.BeginString("Type ");
json.ContinueString(memTypeIndex);
json.EndString();
json.BeginArray();
displayType = false;
}
json.BeginObject();
{
json.WriteString("Name");
json.BeginString();
json.ContinueString((uint64_t)index++);
if (pool->GetName())
{
json.ContinueString(" - ");
json.ContinueString(pool->GetName());
}
json.EndString();
json.WriteString("PreferredBlockSize");
json.WriteNumber(blockVector.GetPreferredBlockSize());
json.WriteString("Blocks");
blockVector.PrintDetailedMap(json);
json.WriteString("DedicatedAllocations");
pool->m_DedicatedAllocations.BuildStatsString(json);
}
json.EndObject();
}
}
if (!displayType)
json.EndArray();
}
}
}
json.EndObject();
}
#endif // VMA_STATS_STRING_ENABLED
#endif // _VMA_ALLOCATOR_T_FUNCTIONS
#ifndef _VMA_PUBLIC_INTERFACE
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAllocator(
const VmaAllocatorCreateInfo* pCreateInfo,
VmaAllocator* pAllocator)
{
VMA_ASSERT(pCreateInfo && pAllocator);
VMA_ASSERT(pCreateInfo->vulkanApiVersion == 0 ||
(VK_VERSION_MAJOR(pCreateInfo->vulkanApiVersion) == 1 && VK_VERSION_MINOR(pCreateInfo->vulkanApiVersion) <= 3));
VMA_DEBUG_LOG("vmaCreateAllocator");
*pAllocator = vma_new(pCreateInfo->pAllocationCallbacks, VmaAllocator_T)(pCreateInfo);
VkResult result = (*pAllocator)->Init(pCreateInfo);
if(result < 0)
{
vma_delete(pCreateInfo->pAllocationCallbacks, *pAllocator);
*pAllocator = VK_NULL_HANDLE;
}
return result;
}
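/*
Illustrative usage sketch (not part of the library). Assumes `instance`, `physicalDevice`
and `device` are valid handles already created by the application:

    VmaAllocatorCreateInfo allocatorCreateInfo = {};
    allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
    allocatorCreateInfo.instance = instance;
    allocatorCreateInfo.physicalDevice = physicalDevice;
    allocatorCreateInfo.device = device;

    VmaAllocator allocator;
    VkResult res = vmaCreateAllocator(&allocatorCreateInfo, &allocator);
    // ... use the allocator ...
    vmaDestroyAllocator(allocator);
*/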
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyAllocator(
VmaAllocator allocator)
{
if(allocator != VK_NULL_HANDLE)
{
VMA_DEBUG_LOG("vmaDestroyAllocator");
VkAllocationCallbacks allocationCallbacks = allocator->m_AllocationCallbacks; // Have to copy the callbacks when destroying.
vma_delete(&allocationCallbacks, allocator);
}
}
VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocatorInfo(VmaAllocator allocator, VmaAllocatorInfo* pAllocatorInfo)
{
VMA_ASSERT(allocator && pAllocatorInfo);
pAllocatorInfo->instance = allocator->m_hInstance;
pAllocatorInfo->physicalDevice = allocator->GetPhysicalDevice();
pAllocatorInfo->device = allocator->m_hDevice;
}
VMA_CALL_PRE void VMA_CALL_POST vmaGetPhysicalDeviceProperties(
VmaAllocator allocator,
const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
{
VMA_ASSERT(allocator && ppPhysicalDeviceProperties);
*ppPhysicalDeviceProperties = &allocator->m_PhysicalDeviceProperties;
}
VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryProperties(
VmaAllocator allocator,
const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties)
{
VMA_ASSERT(allocator && ppPhysicalDeviceMemoryProperties);
*ppPhysicalDeviceMemoryProperties = &allocator->m_MemProps;
}
VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryTypeProperties(
VmaAllocator allocator,
uint32_t memoryTypeIndex,
VkMemoryPropertyFlags* pFlags)
{
VMA_ASSERT(allocator && pFlags);
VMA_ASSERT(memoryTypeIndex < allocator->GetMemoryTypeCount());
*pFlags = allocator->m_MemProps.memoryTypes[memoryTypeIndex].propertyFlags;
}
VMA_CALL_PRE void VMA_CALL_POST vmaSetCurrentFrameIndex(
VmaAllocator allocator,
uint32_t frameIndex)
{
VMA_ASSERT(allocator);
VMA_DEBUG_GLOBAL_MUTEX_LOCK
allocator->SetCurrentFrameIndex(frameIndex);
}
VMA_CALL_PRE void VMA_CALL_POST vmaCalculateStatistics(
VmaAllocator allocator,
VmaTotalStatistics* pStats)
{
VMA_ASSERT(allocator && pStats);
VMA_DEBUG_GLOBAL_MUTEX_LOCK
allocator->CalculateStatistics(pStats);
}
VMA_CALL_PRE void VMA_CALL_POST vmaGetHeapBudgets(
VmaAllocator allocator,
VmaBudget* pBudgets)
{
VMA_ASSERT(allocator && pBudgets);
VMA_DEBUG_GLOBAL_MUTEX_LOCK
allocator->GetHeapBudgets(pBudgets, 0, allocator->GetMemoryHeapCount());
}
#if VMA_STATS_STRING_ENABLED
VMA_CALL_PRE void VMA_CALL_POST vmaBuildStatsString(
VmaAllocator allocator,
char** ppStatsString,
VkBool32 detailedMap)
{
VMA_ASSERT(allocator && ppStatsString);
VMA_DEBUG_GLOBAL_MUTEX_LOCK
VmaStringBuilder sb(allocator->GetAllocationCallbacks());
{
VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
allocator->GetHeapBudgets(budgets, 0, allocator->GetMemoryHeapCount());
VmaTotalStatistics stats;
allocator->CalculateStatistics(&stats);
VmaJsonWriter json(allocator->GetAllocationCallbacks(), sb);
json.BeginObject();
{
json.WriteString("General");
json.BeginObject();
{
const VkPhysicalDeviceProperties& deviceProperties = allocator->m_PhysicalDeviceProperties;
const VkPhysicalDeviceMemoryProperties& memoryProperties = allocator->m_MemProps;
json.WriteString("API");
json.WriteString("Vulkan");
json.WriteString("apiVersion");
json.BeginString();
json.ContinueString(VK_VERSION_MAJOR(deviceProperties.apiVersion));
json.ContinueString(".");
json.ContinueString(VK_VERSION_MINOR(deviceProperties.apiVersion));
json.ContinueString(".");
json.ContinueString(VK_VERSION_PATCH(deviceProperties.apiVersion));
json.EndString();
json.WriteString("GPU");
json.WriteString(deviceProperties.deviceName);
json.WriteString("deviceType");
json.WriteNumber(static_cast<uint32_t>(deviceProperties.deviceType));
json.WriteString("maxMemoryAllocationCount");
json.WriteNumber(deviceProperties.limits.maxMemoryAllocationCount);
json.WriteString("bufferImageGranularity");
json.WriteNumber(deviceProperties.limits.bufferImageGranularity);
json.WriteString("nonCoherentAtomSize");
json.WriteNumber(deviceProperties.limits.nonCoherentAtomSize);
json.WriteString("memoryHeapCount");
json.WriteNumber(memoryProperties.memoryHeapCount);
json.WriteString("memoryTypeCount");
json.WriteNumber(memoryProperties.memoryTypeCount);
}
json.EndObject();
}
{
json.WriteString("Total");
VmaPrintDetailedStatistics(json, stats.total);
}
{
json.WriteString("MemoryInfo");
json.BeginObject();
{
for (uint32_t heapIndex = 0; heapIndex < allocator->GetMemoryHeapCount(); ++heapIndex)
{
json.BeginString("Heap ");
json.ContinueString(heapIndex);
json.EndString();
json.BeginObject();
{
const VkMemoryHeap& heapInfo = allocator->m_MemProps.memoryHeaps[heapIndex];
json.WriteString("Flags");
json.BeginArray(true);
{
if (heapInfo.flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
json.WriteString("DEVICE_LOCAL");
#if VMA_VULKAN_VERSION >= 1001000
if (heapInfo.flags & VK_MEMORY_HEAP_MULTI_INSTANCE_BIT)
json.WriteString("MULTI_INSTANCE");
#endif
VkMemoryHeapFlags flags = heapInfo.flags &
~(VK_MEMORY_HEAP_DEVICE_LOCAL_BIT
#if VMA_VULKAN_VERSION >= 1001000
| VK_MEMORY_HEAP_MULTI_INSTANCE_BIT
#endif
);
if (flags != 0)
json.WriteNumber(flags);
}
json.EndArray();
json.WriteString("Size");
json.WriteNumber(heapInfo.size);
json.WriteString("Budget");
json.BeginObject();
{
json.WriteString("BudgetBytes");
json.WriteNumber(budgets[heapIndex].budget);
json.WriteString("UsageBytes");
json.WriteNumber(budgets[heapIndex].usage);
}
json.EndObject();
json.WriteString("Stats");
VmaPrintDetailedStatistics(json, stats.memoryHeap[heapIndex]);
json.WriteString("MemoryPools");
json.BeginObject();
{
for (uint32_t typeIndex = 0; typeIndex < allocator->GetMemoryTypeCount(); ++typeIndex)
{
if (allocator->MemoryTypeIndexToHeapIndex(typeIndex) == heapIndex)
{
json.BeginString("Type ");
json.ContinueString(typeIndex);
json.EndString();
json.BeginObject();
{
json.WriteString("Flags");
json.BeginArray(true);
{
VkMemoryPropertyFlags flags = allocator->m_MemProps.memoryTypes[typeIndex].propertyFlags;
if (flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT)
json.WriteString("DEVICE_LOCAL");
if (flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)
json.WriteString("HOST_VISIBLE");
if (flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)
json.WriteString("HOST_COHERENT");
if (flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT)
json.WriteString("HOST_CACHED");
if (flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT)
json.WriteString("LAZILY_ALLOCATED");
#if VMA_VULKAN_VERSION >= 1001000
if (flags & VK_MEMORY_PROPERTY_PROTECTED_BIT)
json.WriteString("PROTECTED");
#endif
#if VK_AMD_device_coherent_memory
if (flags & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY)
json.WriteString("DEVICE_COHERENT_AMD");
if (flags & VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY)
json.WriteString("DEVICE_UNCACHED_AMD");
#endif
flags &= ~(VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT
#if VMA_VULKAN_VERSION >= 1001000
| VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT
#endif
#if VK_AMD_device_coherent_memory
| VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY
| VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY
#endif
| VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT
| VK_MEMORY_PROPERTY_HOST_COHERENT_BIT
| VK_MEMORY_PROPERTY_HOST_CACHED_BIT);
if (flags != 0)
json.WriteNumber(flags);
}
json.EndArray();
json.WriteString("Stats");
VmaPrintDetailedStatistics(json, stats.memoryType[typeIndex]);
}
json.EndObject();
}
}
}
json.EndObject();
}
json.EndObject();
}
}
json.EndObject();
}
if (detailedMap == VK_TRUE)
allocator->PrintDetailedMap(json);
json.EndObject();
}
*ppStatsString = VmaCreateStringCopy(allocator->GetAllocationCallbacks(), sb.GetData(), sb.GetLength());
}
VMA_CALL_PRE void VMA_CALL_POST vmaFreeStatsString(
VmaAllocator allocator,
char* pStatsString)
{
if(pStatsString != VMA_NULL)
{
VMA_ASSERT(allocator);
VmaFreeString(allocator->GetAllocationCallbacks(), pStatsString);
}
}
#endif // VMA_STATS_STRING_ENABLED
/*
This function is not protected by any mutex because it just reads immutable data.
*/
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndex(
VmaAllocator allocator,
uint32_t memoryTypeBits,
const VmaAllocationCreateInfo* pAllocationCreateInfo,
uint32_t* pMemoryTypeIndex)
{
VMA_ASSERT(allocator != VK_NULL_HANDLE);
VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
return allocator->FindMemoryTypeIndex(memoryTypeBits, pAllocationCreateInfo, UINT32_MAX, pMemoryTypeIndex);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForBufferInfo(
VmaAllocator allocator,
const VkBufferCreateInfo* pBufferCreateInfo,
const VmaAllocationCreateInfo* pAllocationCreateInfo,
uint32_t* pMemoryTypeIndex)
{
VMA_ASSERT(allocator != VK_NULL_HANDLE);
VMA_ASSERT(pBufferCreateInfo != VMA_NULL);
VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
const VkDevice hDev = allocator->m_hDevice;
const VmaVulkanFunctions* funcs = &allocator->GetVulkanFunctions();
VkResult res;
#if VMA_VULKAN_VERSION >= 1003000
if(funcs->vkGetDeviceBufferMemoryRequirements)
{
// Can query straight from VkBufferCreateInfo :)
VkDeviceBufferMemoryRequirements devBufMemReq = {VK_STRUCTURE_TYPE_DEVICE_BUFFER_MEMORY_REQUIREMENTS};
devBufMemReq.pCreateInfo = pBufferCreateInfo;
VkMemoryRequirements2 memReq = {VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2};
(*funcs->vkGetDeviceBufferMemoryRequirements)(hDev, &devBufMemReq, &memReq);
res = allocator->FindMemoryTypeIndex(
memReq.memoryRequirements.memoryTypeBits, pAllocationCreateInfo, pBufferCreateInfo->usage, pMemoryTypeIndex);
}
else
#endif // #if VMA_VULKAN_VERSION >= 1003000
{
// Must create a dummy buffer to query :(
VkBuffer hBuffer = VK_NULL_HANDLE;
res = funcs->vkCreateBuffer(
hDev, pBufferCreateInfo, allocator->GetAllocationCallbacks(), &hBuffer);
if(res == VK_SUCCESS)
{
VkMemoryRequirements memReq = {};
funcs->vkGetBufferMemoryRequirements(hDev, hBuffer, &memReq);
res = allocator->FindMemoryTypeIndex(
memReq.memoryTypeBits, pAllocationCreateInfo, pBufferCreateInfo->usage, pMemoryTypeIndex);
funcs->vkDestroyBuffer(
hDev, hBuffer, allocator->GetAllocationCallbacks());
}
}
return res;
}
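/*
Illustrative usage sketch (not part of the library): finding a memory type for a uniform
buffer without having to create the final buffer first.

    VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    bufCreateInfo.size = 65536;
    bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

    uint32_t memTypeIndex;
    VkResult res = vmaFindMemoryTypeIndexForBufferInfo(
        allocator, &bufCreateInfo, &allocCreateInfo, &memTypeIndex);
*/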
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForImageInfo(
VmaAllocator allocator,
const VkImageCreateInfo* pImageCreateInfo,
const VmaAllocationCreateInfo* pAllocationCreateInfo,
uint32_t* pMemoryTypeIndex)
{
VMA_ASSERT(allocator != VK_NULL_HANDLE);
VMA_ASSERT(pImageCreateInfo != VMA_NULL);
VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
const VkDevice hDev = allocator->m_hDevice;
const VmaVulkanFunctions* funcs = &allocator->GetVulkanFunctions();
VkResult res;
#if VMA_VULKAN_VERSION >= 1003000
if(funcs->vkGetDeviceImageMemoryRequirements)
{
// Can query straight from VkImageCreateInfo :)
VkDeviceImageMemoryRequirements devImgMemReq = {VK_STRUCTURE_TYPE_DEVICE_IMAGE_MEMORY_REQUIREMENTS};
devImgMemReq.pCreateInfo = pImageCreateInfo;
VMA_ASSERT(pImageCreateInfo->tiling != VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT_COPY && (pImageCreateInfo->flags & VK_IMAGE_CREATE_DISJOINT_BIT_COPY) == 0 &&
"Cannot use this VkImageCreateInfo with vmaFindMemoryTypeIndexForImageInfo as I don't know what to pass as VkDeviceImageMemoryRequirements::planeAspect.");
VkMemoryRequirements2 memReq = {VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2};
(*funcs->vkGetDeviceImageMemoryRequirements)(hDev, &devImgMemReq, &memReq);
res = allocator->FindMemoryTypeIndex(
memReq.memoryRequirements.memoryTypeBits, pAllocationCreateInfo, pImageCreateInfo->usage, pMemoryTypeIndex);
}
else
#endif // #if VMA_VULKAN_VERSION >= 1003000
{
// Must create a dummy image to query :(
VkImage hImage = VK_NULL_HANDLE;
res = funcs->vkCreateImage(
hDev, pImageCreateInfo, allocator->GetAllocationCallbacks(), &hImage);
if(res == VK_SUCCESS)
{
VkMemoryRequirements memReq = {};
funcs->vkGetImageMemoryRequirements(hDev, hImage, &memReq);
res = allocator->FindMemoryTypeIndex(
memReq.memoryTypeBits, pAllocationCreateInfo, pImageCreateInfo->usage, pMemoryTypeIndex);
funcs->vkDestroyImage(
hDev, hImage, allocator->GetAllocationCallbacks());
}
}
return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreatePool(
VmaAllocator allocator,
const VmaPoolCreateInfo* pCreateInfo,
VmaPool* pPool)
{
VMA_ASSERT(allocator && pCreateInfo && pPool);
VMA_DEBUG_LOG("vmaCreatePool");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
return allocator->CreatePool(pCreateInfo, pPool);
}
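/*
Illustrative usage sketch (not part of the library): creating a custom pool on a memory
type found earlier (e.g. via vmaFindMemoryTypeIndexForBufferInfo()), with a fixed block
size and block count:

    VmaPoolCreateInfo poolCreateInfo = {};
    poolCreateInfo.memoryTypeIndex = memTypeIndex;
    poolCreateInfo.blockSize = 128ull * 1024 * 1024;
    poolCreateInfo.maxBlockCount = 2;

    VmaPool pool;
    VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
    // ... allocate with VmaAllocationCreateInfo::pool = pool ...
    vmaDestroyPool(allocator, pool);
*/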
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyPool(
VmaAllocator allocator,
VmaPool pool)
{
VMA_ASSERT(allocator);
if(pool == VK_NULL_HANDLE)
{
return;
}
VMA_DEBUG_LOG("vmaDestroyPool");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
allocator->DestroyPool(pool);
}
VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolStatistics(
VmaAllocator allocator,
VmaPool pool,
VmaStatistics* pPoolStats)
{
VMA_ASSERT(allocator && pool && pPoolStats);
VMA_DEBUG_GLOBAL_MUTEX_LOCK
allocator->GetPoolStatistics(pool, pPoolStats);
}
VMA_CALL_PRE void VMA_CALL_POST vmaCalculatePoolStatistics(
VmaAllocator allocator,
VmaPool pool,
VmaDetailedStatistics* pPoolStats)
{
VMA_ASSERT(allocator && pool && pPoolStats);
VMA_DEBUG_GLOBAL_MUTEX_LOCK
allocator->CalculatePoolStatistics(pool, pPoolStats);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool)
{
VMA_ASSERT(allocator && pool);
VMA_DEBUG_GLOBAL_MUTEX_LOCK
VMA_DEBUG_LOG("vmaCheckPoolCorruption");
return allocator->CheckPoolCorruption(pool);
}
VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolName(
VmaAllocator allocator,
VmaPool pool,
const char** ppName)
{
VMA_ASSERT(allocator && pool && ppName);
VMA_DEBUG_LOG("vmaGetPoolName");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
*ppName = pool->GetName();
}
VMA_CALL_PRE void VMA_CALL_POST vmaSetPoolName(
VmaAllocator allocator,
VmaPool pool,
const char* pName)
{
VMA_ASSERT(allocator && pool);
VMA_DEBUG_LOG("vmaSetPoolName");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
pool->SetName(pName);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemory(
VmaAllocator allocator,
const VkMemoryRequirements* pVkMemoryRequirements,
const VmaAllocationCreateInfo* pCreateInfo,
VmaAllocation* pAllocation,
VmaAllocationInfo* pAllocationInfo)
{
VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocation);
VMA_DEBUG_LOG("vmaAllocateMemory");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
VkResult result = allocator->AllocateMemory(
*pVkMemoryRequirements,
false, // requiresDedicatedAllocation
false, // prefersDedicatedAllocation
VK_NULL_HANDLE, // dedicatedBuffer
VK_NULL_HANDLE, // dedicatedImage
UINT32_MAX, // dedicatedBufferImageUsage
*pCreateInfo,
VMA_SUBALLOCATION_TYPE_UNKNOWN,
1, // allocationCount
pAllocation);
if(pAllocationInfo != VMA_NULL && result == VK_SUCCESS)
{
allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
}
return result;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryPages(
VmaAllocator allocator,
const VkMemoryRequirements* pVkMemoryRequirements,
const VmaAllocationCreateInfo* pCreateInfo,
size_t allocationCount,
VmaAllocation* pAllocations,
VmaAllocationInfo* pAllocationInfo)
{
if(allocationCount == 0)
{
return VK_SUCCESS;
}
VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocations);
VMA_DEBUG_LOG("vmaAllocateMemoryPages");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
VkResult result = allocator->AllocateMemory(
*pVkMemoryRequirements,
false, // requiresDedicatedAllocation
false, // prefersDedicatedAllocation
VK_NULL_HANDLE, // dedicatedBuffer
VK_NULL_HANDLE, // dedicatedImage
UINT32_MAX, // dedicatedBufferImageUsage
*pCreateInfo,
VMA_SUBALLOCATION_TYPE_UNKNOWN,
allocationCount,
pAllocations);
if(pAllocationInfo != VMA_NULL && result == VK_SUCCESS)
{
for(size_t i = 0; i < allocationCount; ++i)
{
allocator->GetAllocationInfo(pAllocations[i], pAllocationInfo + i);
}
}
return result;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForBuffer(
VmaAllocator allocator,
VkBuffer buffer,
const VmaAllocationCreateInfo* pCreateInfo,
VmaAllocation* pAllocation,
VmaAllocationInfo* pAllocationInfo)
{
VMA_ASSERT(allocator && buffer != VK_NULL_HANDLE && pCreateInfo && pAllocation);
VMA_DEBUG_LOG("vmaAllocateMemoryForBuffer");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
VkMemoryRequirements vkMemReq = {};
bool requiresDedicatedAllocation = false;
bool prefersDedicatedAllocation = false;
allocator->GetBufferMemoryRequirements(buffer, vkMemReq,
requiresDedicatedAllocation,
prefersDedicatedAllocation);
VkResult result = allocator->AllocateMemory(
vkMemReq,
requiresDedicatedAllocation,
prefersDedicatedAllocation,
buffer, // dedicatedBuffer
VK_NULL_HANDLE, // dedicatedImage
UINT32_MAX, // dedicatedBufferImageUsage
*pCreateInfo,
VMA_SUBALLOCATION_TYPE_BUFFER,
1, // allocationCount
pAllocation);
if(pAllocationInfo && result == VK_SUCCESS)
{
allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
}
return result;
}
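/*
Illustrative usage sketch (not part of the library): allocating memory for a buffer the
application created itself and binding the two together. Note that vmaCreateBuffer()
performs all of these steps in a single call.

    VkBuffer buffer;
    vkCreateBuffer(device, &bufCreateInfo, nullptr, &buffer);

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

    VmaAllocation allocation;
    VkResult res = vmaAllocateMemoryForBuffer(allocator, buffer, &allocCreateInfo, &allocation, VMA_NULL);
    if(res == VK_SUCCESS)
        res = vmaBindBufferMemory(allocator, allocation, buffer);
*/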
VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForImage(
VmaAllocator allocator,
VkImage image,
const VmaAllocationCreateInfo* pCreateInfo,
VmaAllocation* pAllocation,
VmaAllocationInfo* pAllocationInfo)
{
VMA_ASSERT(allocator && image != VK_NULL_HANDLE && pCreateInfo && pAllocation);
VMA_DEBUG_LOG("vmaAllocateMemoryForImage");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
VkMemoryRequirements vkMemReq = {};
bool requiresDedicatedAllocation = false;
bool prefersDedicatedAllocation = false;
allocator->GetImageMemoryRequirements(image, vkMemReq,
requiresDedicatedAllocation, prefersDedicatedAllocation);
VkResult result = allocator->AllocateMemory(
vkMemReq,
requiresDedicatedAllocation,
prefersDedicatedAllocation,
VK_NULL_HANDLE, // dedicatedBuffer
image, // dedicatedImage
UINT32_MAX, // dedicatedBufferImageUsage
*pCreateInfo,
VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN,
1, // allocationCount
pAllocation);
if(pAllocationInfo && result == VK_SUCCESS)
{
allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
}
return result;
}
VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemory(
VmaAllocator allocator,
VmaAllocation allocation)
{
VMA_ASSERT(allocator);
if(allocation == VK_NULL_HANDLE)
{
return;
}
VMA_DEBUG_LOG("vmaFreeMemory");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
allocator->FreeMemory(
1, // allocationCount
&allocation);
}
VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemoryPages(
VmaAllocator allocator,
size_t allocationCount,
const VmaAllocation* pAllocations)
{
if(allocationCount == 0)
{
return;
}
VMA_ASSERT(allocator);
VMA_DEBUG_LOG("vmaFreeMemoryPages");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
allocator->FreeMemory(allocationCount, pAllocations);
}
VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo(
VmaAllocator allocator,
VmaAllocation allocation,
VmaAllocationInfo* pAllocationInfo)
{
VMA_ASSERT(allocator && allocation && pAllocationInfo);
VMA_DEBUG_GLOBAL_MUTEX_LOCK
allocator->GetAllocationInfo(allocation, pAllocationInfo);
}
VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationUserData(
VmaAllocator allocator,
VmaAllocation allocation,
void* pUserData)
{
VMA_ASSERT(allocator && allocation);
VMA_DEBUG_GLOBAL_MUTEX_LOCK
allocation->SetUserData(allocator, pUserData);
}
VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationName(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
const char* VMA_NULLABLE pName)
{
allocation->SetName(allocator, pName);
}
VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationMemoryProperties(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VkMemoryPropertyFlags* VMA_NOT_NULL pFlags)
{
VMA_ASSERT(allocator && allocation && pFlags);
const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
*pFlags = allocator->m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaMapMemory(
VmaAllocator allocator,
VmaAllocation allocation,
void** ppData)
{
VMA_ASSERT(allocator && allocation && ppData);
VMA_DEBUG_GLOBAL_MUTEX_LOCK
return allocator->Map(allocation, ppData);
}
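// Note: mapping is reference-counted per allocation, so nested vmaMapMemory() calls are
// allowed as long as each one is paired with a matching vmaUnmapMemory().
/*
Illustrative sketch (not part of the library). `srcData` and `srcSize` are assumed
application-side variables:

    void* pData;
    if(vmaMapMemory(allocator, allocation, &pData) == VK_SUCCESS)
    {
        memcpy(pData, srcData, srcSize);
        vmaUnmapMemory(allocator, allocation);
    }
*/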
VMA_CALL_PRE void VMA_CALL_POST vmaUnmapMemory(
VmaAllocator allocator,
VmaAllocation allocation)
{
VMA_ASSERT(allocator && allocation);
VMA_DEBUG_GLOBAL_MUTEX_LOCK
allocator->Unmap(allocation);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocation(
VmaAllocator allocator,
VmaAllocation allocation,
VkDeviceSize offset,
VkDeviceSize size)
{
VMA_ASSERT(allocator && allocation);
VMA_DEBUG_LOG("vmaFlushAllocation");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
const VkResult res = allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_FLUSH);
return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocation(
VmaAllocator allocator,
VmaAllocation allocation,
VkDeviceSize offset,
VkDeviceSize size)
{
VMA_ASSERT(allocator && allocation);
VMA_DEBUG_LOG("vmaInvalidateAllocation");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
const VkResult res = allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_INVALIDATE);
return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocations(
VmaAllocator allocator,
uint32_t allocationCount,
const VmaAllocation* allocations,
const VkDeviceSize* offsets,
const VkDeviceSize* sizes)
{
VMA_ASSERT(allocator);
if(allocationCount == 0)
{
return VK_SUCCESS;
}
VMA_ASSERT(allocations);
VMA_DEBUG_LOG("vmaFlushAllocations");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
const VkResult res = allocator->FlushOrInvalidateAllocations(allocationCount, allocations, offsets, sizes, VMA_CACHE_FLUSH);
return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocations(
VmaAllocator allocator,
uint32_t allocationCount,
const VmaAllocation* allocations,
const VkDeviceSize* offsets,
const VkDeviceSize* sizes)
{
VMA_ASSERT(allocator);
if(allocationCount == 0)
{
return VK_SUCCESS;
}
VMA_ASSERT(allocations);
VMA_DEBUG_LOG("vmaInvalidateAllocations");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
const VkResult res = allocator->FlushOrInvalidateAllocations(allocationCount, allocations, offsets, sizes, VMA_CACHE_INVALIDATE);
return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckCorruption(
VmaAllocator allocator,
uint32_t memoryTypeBits)
{
VMA_ASSERT(allocator);
VMA_DEBUG_LOG("vmaCheckCorruption");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
return allocator->CheckCorruption(memoryTypeBits);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentation(
VmaAllocator allocator,
const VmaDefragmentationInfo* pInfo,
VmaDefragmentationContext* pContext)
{
VMA_ASSERT(allocator && pInfo && pContext);
VMA_DEBUG_LOG("vmaBeginDefragmentation");
if (pInfo->pool != VMA_NULL)
{
// Defragmentation is not supported for pools created with the linear algorithm.
if (pInfo->pool->m_BlockVector.GetAlgorithm() & VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
return VK_ERROR_FEATURE_NOT_PRESENT;
}
VMA_DEBUG_GLOBAL_MUTEX_LOCK
*pContext = vma_new(allocator, VmaDefragmentationContext_T)(allocator, *pInfo);
return VK_SUCCESS;
}
VMA_CALL_PRE void VMA_CALL_POST vmaEndDefragmentation(
VmaAllocator allocator,
VmaDefragmentationContext context,
VmaDefragmentationStats* pStats)
{
VMA_ASSERT(allocator && context);
VMA_DEBUG_LOG("vmaEndDefragmentation");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
if (pStats)
context->GetStats(*pStats);
vma_delete(allocator, context);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentationPass(
VmaAllocator VMA_NOT_NULL allocator,
VmaDefragmentationContext VMA_NOT_NULL context,
VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo)
{
VMA_ASSERT(context && pPassInfo);
VMA_DEBUG_LOG("vmaBeginDefragmentationPass");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
return context->DefragmentPassBegin(*pPassInfo);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaEndDefragmentationPass(
VmaAllocator VMA_NOT_NULL allocator,
VmaDefragmentationContext VMA_NOT_NULL context,
VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo)
{
VMA_ASSERT(context && pPassInfo);
VMA_DEBUG_LOG("vmaEndDefragmentationPass");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
return context->DefragmentPassEnd(*pPassInfo);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory(
VmaAllocator allocator,
VmaAllocation allocation,
VkBuffer buffer)
{
VMA_ASSERT(allocator && allocation && buffer);
VMA_DEBUG_LOG("vmaBindBufferMemory");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
return allocator->BindBufferMemory(allocation, 0, buffer, VMA_NULL);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory2(
VmaAllocator allocator,
VmaAllocation allocation,
VkDeviceSize allocationLocalOffset,
VkBuffer buffer,
const void* pNext)
{
VMA_ASSERT(allocator && allocation && buffer);
VMA_DEBUG_LOG("vmaBindBufferMemory2");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
return allocator->BindBufferMemory(allocation, allocationLocalOffset, buffer, pNext);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory(
VmaAllocator allocator,
VmaAllocation allocation,
VkImage image)
{
VMA_ASSERT(allocator && allocation && image);
VMA_DEBUG_LOG("vmaBindImageMemory");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
return allocator->BindImageMemory(allocation, 0, image, VMA_NULL);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory2(
VmaAllocator allocator,
VmaAllocation allocation,
VkDeviceSize allocationLocalOffset,
VkImage image,
const void* pNext)
{
VMA_ASSERT(allocator && allocation && image);
VMA_DEBUG_LOG("vmaBindImageMemory2");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
return allocator->BindImageMemory(allocation, allocationLocalOffset, image, pNext);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBuffer(
VmaAllocator allocator,
const VkBufferCreateInfo* pBufferCreateInfo,
const VmaAllocationCreateInfo* pAllocationCreateInfo,
VkBuffer* pBuffer,
VmaAllocation* pAllocation,
VmaAllocationInfo* pAllocationInfo)
{
VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && pBuffer && pAllocation);
if(pBufferCreateInfo->size == 0)
{
return VK_ERROR_INITIALIZATION_FAILED;
}
if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
!allocator->m_UseKhrBufferDeviceAddress)
{
VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
return VK_ERROR_INITIALIZATION_FAILED;
}
VMA_DEBUG_LOG("vmaCreateBuffer");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
*pBuffer = VK_NULL_HANDLE;
*pAllocation = VK_NULL_HANDLE;
// 1. Create VkBuffer.
VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
allocator->m_hDevice,
pBufferCreateInfo,
allocator->GetAllocationCallbacks(),
pBuffer);
if(res >= 0)
{
// 2. vkGetBufferMemoryRequirements.
VkMemoryRequirements vkMemReq = {};
bool requiresDedicatedAllocation = false;
bool prefersDedicatedAllocation = false;
allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
requiresDedicatedAllocation, prefersDedicatedAllocation);
// 3. Allocate memory using allocator.
res = allocator->AllocateMemory(
vkMemReq,
requiresDedicatedAllocation,
prefersDedicatedAllocation,
*pBuffer, // dedicatedBuffer
VK_NULL_HANDLE, // dedicatedImage
pBufferCreateInfo->usage, // dedicatedBufferImageUsage
*pAllocationCreateInfo,
VMA_SUBALLOCATION_TYPE_BUFFER,
1, // allocationCount
pAllocation);
if(res >= 0)
{
// 4. Bind buffer with memory.
if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
{
res = allocator->BindBufferMemory(*pAllocation, 0, *pBuffer, VMA_NULL);
}
if(res >= 0)
{
// All steps succeeded.
#if VMA_STATS_STRING_ENABLED
(*pAllocation)->InitBufferImageUsage(pBufferCreateInfo->usage);
#endif
if(pAllocationInfo != VMA_NULL)
{
allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
}
return VK_SUCCESS;
}
allocator->FreeMemory(
1, // allocationCount
pAllocation);
*pAllocation = VK_NULL_HANDLE;
(*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
*pBuffer = VK_NULL_HANDLE;
return res;
}
(*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
*pBuffer = VK_NULL_HANDLE;
return res;
}
return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBufferWithAlignment(
VmaAllocator allocator,
const VkBufferCreateInfo* pBufferCreateInfo,
const VmaAllocationCreateInfo* pAllocationCreateInfo,
VkDeviceSize minAlignment,
VkBuffer* pBuffer,
VmaAllocation* pAllocation,
VmaAllocationInfo* pAllocationInfo)
{
VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && VmaIsPow2(minAlignment) && pBuffer && pAllocation);
if(pBufferCreateInfo->size == 0)
{
return VK_ERROR_INITIALIZATION_FAILED;
}
if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
!allocator->m_UseKhrBufferDeviceAddress)
{
VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
return VK_ERROR_INITIALIZATION_FAILED;
}
VMA_DEBUG_LOG("vmaCreateBufferWithAlignment");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
*pBuffer = VK_NULL_HANDLE;
*pAllocation = VK_NULL_HANDLE;
// 1. Create VkBuffer.
VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
allocator->m_hDevice,
pBufferCreateInfo,
allocator->GetAllocationCallbacks(),
pBuffer);
if(res >= 0)
{
// 2. vkGetBufferMemoryRequirements.
VkMemoryRequirements vkMemReq = {};
bool requiresDedicatedAllocation = false;
bool prefersDedicatedAllocation = false;
allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
requiresDedicatedAllocation, prefersDedicatedAllocation);
// 2a. Include minAlignment
vkMemReq.alignment = VMA_MAX(vkMemReq.alignment, minAlignment);
// 3. Allocate memory using allocator.
res = allocator->AllocateMemory(
vkMemReq,
requiresDedicatedAllocation,
prefersDedicatedAllocation,
*pBuffer, // dedicatedBuffer
VK_NULL_HANDLE, // dedicatedImage
pBufferCreateInfo->usage, // dedicatedBufferImageUsage
*pAllocationCreateInfo,
VMA_SUBALLOCATION_TYPE_BUFFER,
1, // allocationCount
pAllocation);
if(res >= 0)
{
// 4. Bind buffer with memory.
if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
{
res = allocator->BindBufferMemory(*pAllocation, 0, *pBuffer, VMA_NULL);
}
if(res >= 0)
{
// All steps succeeded.
#if VMA_STATS_STRING_ENABLED
(*pAllocation)->InitBufferImageUsage(pBufferCreateInfo->usage);
#endif
if(pAllocationInfo != VMA_NULL)
{
allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
}
return VK_SUCCESS;
}
allocator->FreeMemory(
1, // allocationCount
pAllocation);
*pAllocation = VK_NULL_HANDLE;
(*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
*pBuffer = VK_NULL_HANDLE;
return res;
}
(*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
*pBuffer = VK_NULL_HANDLE;
return res;
}
return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer)
{
return vmaCreateAliasingBuffer2(allocator, allocation, 0, pBufferCreateInfo, pBuffer);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer2(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VkDeviceSize allocationLocalOffset,
const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer)
{
VMA_ASSERT(allocator && pBufferCreateInfo && pBuffer && allocation);
VMA_ASSERT(allocationLocalOffset + pBufferCreateInfo->size <= allocation->GetSize());
VMA_DEBUG_LOG("vmaCreateAliasingBuffer2");
*pBuffer = VK_NULL_HANDLE;
if (pBufferCreateInfo->size == 0)
{
return VK_ERROR_INITIALIZATION_FAILED;
}
if ((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
!allocator->m_UseKhrBufferDeviceAddress)
{
VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
return VK_ERROR_INITIALIZATION_FAILED;
}
VMA_DEBUG_GLOBAL_MUTEX_LOCK
// 1. Create VkBuffer.
VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
allocator->m_hDevice,
pBufferCreateInfo,
allocator->GetAllocationCallbacks(),
pBuffer);
if (res >= 0)
{
// 2. Bind buffer with memory.
res = allocator->BindBufferMemory(allocation, allocationLocalOffset, *pBuffer, VMA_NULL);
if (res >= 0)
{
return VK_SUCCESS;
}
(*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
}
return res;
}
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyBuffer(
VmaAllocator allocator,
VkBuffer buffer,
VmaAllocation allocation)
{
VMA_ASSERT(allocator);
if(buffer == VK_NULL_HANDLE && allocation == VK_NULL_HANDLE)
{
return;
}
VMA_DEBUG_LOG("vmaDestroyBuffer");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
if(buffer != VK_NULL_HANDLE)
{
(*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, buffer, allocator->GetAllocationCallbacks());
}
if(allocation != VK_NULL_HANDLE)
{
allocator->FreeMemory(
1, // allocationCount
&allocation);
}
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateImage(
VmaAllocator allocator,
const VkImageCreateInfo* pImageCreateInfo,
const VmaAllocationCreateInfo* pAllocationCreateInfo,
VkImage* pImage,
VmaAllocation* pAllocation,
VmaAllocationInfo* pAllocationInfo)
{
VMA_ASSERT(allocator && pImageCreateInfo && pAllocationCreateInfo && pImage && pAllocation);
if(pImageCreateInfo->extent.width == 0 ||
pImageCreateInfo->extent.height == 0 ||
pImageCreateInfo->extent.depth == 0 ||
pImageCreateInfo->mipLevels == 0 ||
pImageCreateInfo->arrayLayers == 0)
{
return VK_ERROR_INITIALIZATION_FAILED;
}
VMA_DEBUG_LOG("vmaCreateImage");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
*pImage = VK_NULL_HANDLE;
*pAllocation = VK_NULL_HANDLE;
// 1. Create VkImage.
VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
allocator->m_hDevice,
pImageCreateInfo,
allocator->GetAllocationCallbacks(),
pImage);
if(res >= 0)
{
VmaSuballocationType suballocType = pImageCreateInfo->tiling == VK_IMAGE_TILING_OPTIMAL ?
VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL :
VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR;
// 2. Allocate memory using allocator.
VkMemoryRequirements vkMemReq = {};
bool requiresDedicatedAllocation = false;
bool prefersDedicatedAllocation = false;
allocator->GetImageMemoryRequirements(*pImage, vkMemReq,
requiresDedicatedAllocation, prefersDedicatedAllocation);
res = allocator->AllocateMemory(
vkMemReq,
requiresDedicatedAllocation,
prefersDedicatedAllocation,
VK_NULL_HANDLE, // dedicatedBuffer
*pImage, // dedicatedImage
pImageCreateInfo->usage, // dedicatedBufferImageUsage
*pAllocationCreateInfo,
suballocType,
1, // allocationCount
pAllocation);
if(res >= 0)
{
// 3. Bind image with memory.
if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
{
res = allocator->BindImageMemory(*pAllocation, 0, *pImage, VMA_NULL);
}
if(res >= 0)
{
// All steps succeeded.
#if VMA_STATS_STRING_ENABLED
(*pAllocation)->InitBufferImageUsage(pImageCreateInfo->usage);
#endif
if(pAllocationInfo != VMA_NULL)
{
allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
}
return VK_SUCCESS;
}
allocator->FreeMemory(
1, // allocationCount
pAllocation);
*pAllocation = VK_NULL_HANDLE;
(*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
*pImage = VK_NULL_HANDLE;
return res;
}
(*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
*pImage = VK_NULL_HANDLE;
return res;
}
return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage)
{
return vmaCreateAliasingImage2(allocator, allocation, 0, pImageCreateInfo, pImage);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage2(
VmaAllocator VMA_NOT_NULL allocator,
VmaAllocation VMA_NOT_NULL allocation,
VkDeviceSize allocationLocalOffset,
const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage)
{
VMA_ASSERT(allocator && pImageCreateInfo && pImage && allocation);
*pImage = VK_NULL_HANDLE;
VMA_DEBUG_LOG("vmaCreateImage2");
if (pImageCreateInfo->extent.width == 0 ||
pImageCreateInfo->extent.height == 0 ||
pImageCreateInfo->extent.depth == 0 ||
pImageCreateInfo->mipLevels == 0 ||
pImageCreateInfo->arrayLayers == 0)
{
return VK_ERROR_INITIALIZATION_FAILED;
}
VMA_DEBUG_GLOBAL_MUTEX_LOCK
// 1. Create VkImage.
VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
allocator->m_hDevice,
pImageCreateInfo,
allocator->GetAllocationCallbacks(),
pImage);
if (res >= 0)
{
// 2. Bind image with memory.
res = allocator->BindImageMemory(allocation, allocationLocalOffset, *pImage, VMA_NULL);
if (res >= 0)
{
return VK_SUCCESS;
}
(*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
}
return res;
}
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyImage(
VmaAllocator VMA_NOT_NULL allocator,
VkImage VMA_NULLABLE_NON_DISPATCHABLE image,
VmaAllocation VMA_NULLABLE allocation)
{
VMA_ASSERT(allocator);
if(image == VK_NULL_HANDLE && allocation == VK_NULL_HANDLE)
{
return;
}
VMA_DEBUG_LOG("vmaDestroyImage");
VMA_DEBUG_GLOBAL_MUTEX_LOCK
if(image != VK_NULL_HANDLE)
{
(*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, image, allocator->GetAllocationCallbacks());
}
if(allocation != VK_NULL_HANDLE)
{
allocator->FreeMemory(
1, // allocationCount
&allocation);
}
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateVirtualBlock(
const VmaVirtualBlockCreateInfo* VMA_NOT_NULL pCreateInfo,
VmaVirtualBlock VMA_NULLABLE * VMA_NOT_NULL pVirtualBlock)
{
VMA_ASSERT(pCreateInfo && pVirtualBlock);
VMA_ASSERT(pCreateInfo->size > 0);
VMA_DEBUG_LOG("vmaCreateVirtualBlock");
VMA_DEBUG_GLOBAL_MUTEX_LOCK;
*pVirtualBlock = vma_new(pCreateInfo->pAllocationCallbacks, VmaVirtualBlock_T)(*pCreateInfo);
VkResult res = (*pVirtualBlock)->Init();
if(res < 0)
{
vma_delete(pCreateInfo->pAllocationCallbacks, *pVirtualBlock);
*pVirtualBlock = VK_NULL_HANDLE;
}
return res;
}
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyVirtualBlock(VmaVirtualBlock VMA_NULLABLE virtualBlock)
{
if(virtualBlock != VK_NULL_HANDLE)
{
VMA_DEBUG_LOG("vmaDestroyVirtualBlock");
VMA_DEBUG_GLOBAL_MUTEX_LOCK;
VkAllocationCallbacks allocationCallbacks = virtualBlock->m_AllocationCallbacks; // Have to copy the callbacks when destroying.
vma_delete(&allocationCallbacks, virtualBlock);
}
}
VMA_CALL_PRE VkBool32 VMA_CALL_POST vmaIsVirtualBlockEmpty(VmaVirtualBlock VMA_NOT_NULL virtualBlock)
{
VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
VMA_DEBUG_LOG("vmaIsVirtualBlockEmpty");
VMA_DEBUG_GLOBAL_MUTEX_LOCK;
return virtualBlock->IsEmpty() ? VK_TRUE : VK_FALSE;
}
VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualAllocationInfo(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation, VmaVirtualAllocationInfo* VMA_NOT_NULL pVirtualAllocInfo)
{
VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pVirtualAllocInfo != VMA_NULL);
VMA_DEBUG_LOG("vmaGetVirtualAllocationInfo");
VMA_DEBUG_GLOBAL_MUTEX_LOCK;
virtualBlock->GetAllocationInfo(allocation, *pVirtualAllocInfo);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaVirtualAllocate(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
const VmaVirtualAllocationCreateInfo* VMA_NOT_NULL pCreateInfo, VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pAllocation,
VkDeviceSize* VMA_NULLABLE pOffset)
{
VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pCreateInfo != VMA_NULL && pAllocation != VMA_NULL);
VMA_DEBUG_LOG("vmaVirtualAllocate");
VMA_DEBUG_GLOBAL_MUTEX_LOCK;
return virtualBlock->Allocate(*pCreateInfo, *pAllocation, pOffset);
}
VMA_CALL_PRE void VMA_CALL_POST vmaVirtualFree(VmaVirtualBlock VMA_NOT_NULL virtualBlock, VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE allocation)
{
if(allocation != VK_NULL_HANDLE)
{
VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
VMA_DEBUG_LOG("vmaVirtualFree");
VMA_DEBUG_GLOBAL_MUTEX_LOCK;
virtualBlock->Free(allocation);
}
}
VMA_CALL_PRE void VMA_CALL_POST vmaClearVirtualBlock(VmaVirtualBlock VMA_NOT_NULL virtualBlock)
{
VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
VMA_DEBUG_LOG("vmaClearVirtualBlock");
VMA_DEBUG_GLOBAL_MUTEX_LOCK;
virtualBlock->Clear();
}
VMA_CALL_PRE void VMA_CALL_POST vmaSetVirtualAllocationUserData(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation, void* VMA_NULLABLE pUserData)
{
VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
VMA_DEBUG_LOG("vmaSetVirtualAllocationUserData");
VMA_DEBUG_GLOBAL_MUTEX_LOCK;
virtualBlock->SetAllocationUserData(allocation, pUserData);
}
VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualBlockStatistics(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
VmaStatistics* VMA_NOT_NULL pStats)
{
VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pStats != VMA_NULL);
VMA_DEBUG_LOG("vmaGetVirtualBlockStatistics");
VMA_DEBUG_GLOBAL_MUTEX_LOCK;
virtualBlock->GetStatistics(*pStats);
}
VMA_CALL_PRE void VMA_CALL_POST vmaCalculateVirtualBlockStatistics(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
VmaDetailedStatistics* VMA_NOT_NULL pStats)
{
VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pStats != VMA_NULL);
VMA_DEBUG_LOG("vmaCalculateVirtualBlockStatistics");
VMA_DEBUG_GLOBAL_MUTEX_LOCK;
virtualBlock->CalculateDetailedStatistics(*pStats);
}
#if VMA_STATS_STRING_ENABLED
VMA_CALL_PRE void VMA_CALL_POST vmaBuildVirtualBlockStatsString(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
char* VMA_NULLABLE * VMA_NOT_NULL ppStatsString, VkBool32 detailedMap)
{
VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && ppStatsString != VMA_NULL);
VMA_DEBUG_GLOBAL_MUTEX_LOCK;
const VkAllocationCallbacks* allocationCallbacks = virtualBlock->GetAllocationCallbacks();
VmaStringBuilder sb(allocationCallbacks);
virtualBlock->BuildStatsString(detailedMap != VK_FALSE, sb);
*ppStatsString = VmaCreateStringCopy(allocationCallbacks, sb.GetData(), sb.GetLength());
}
VMA_CALL_PRE void VMA_CALL_POST vmaFreeVirtualBlockStatsString(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
char* VMA_NULLABLE pStatsString)
{
if(pStatsString != VMA_NULL)
{
VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
VMA_DEBUG_GLOBAL_MUTEX_LOCK;
VmaFreeString(virtualBlock->GetAllocationCallbacks(), pStatsString);
}
}
#endif // VMA_STATS_STRING_ENABLED
#endif // _VMA_PUBLIC_INTERFACE
#endif // VMA_IMPLEMENTATION
/**
\page quick_start Quick start
\section quick_start_project_setup Project setup
Vulkan Memory Allocator comes in form of a "stb-style" single header file.
You don't need to build it as a separate library project.
You can add this file directly to your project and submit it to code repository next to your other source files.
"Single header" doesn't mean that everything is contained in C/C++ declarations,
like it tends to be in case of inline functions or C++ templates.
It means that implementation is bundled with interface in a single file and needs to be extracted using preprocessor macro.
If you don't do it properly, you will get linker errors.
To do it properly:
-# Include "vk_mem_alloc.h" file in each CPP file where you want to use the library.
This includes declarations of all members of the library.
-# In exactly one CPP file, define the following macro before this include.
It also enables the internal definitions.
\code
#define VMA_IMPLEMENTATION
#include "vk_mem_alloc.h"
\endcode
It may be a good idea to create a dedicated CPP file just for this purpose.
This library includes header `<vulkan/vulkan.h>`, which in turn
includes `<windows.h>` on Windows. If you need some specific macros defined
before including these headers (like `WIN32_LEAN_AND_MEAN` or
`WINVER` for Windows, `VK_USE_PLATFORM_WIN32_KHR` for Vulkan), you must define
them before every `#include` of this library.
This library is written in C++, but has a C-compatible interface.
Thus you can include and use vk_mem_alloc.h in C or C++ code, but the full
implementation with the `VMA_IMPLEMENTATION` macro must be compiled as C++, NOT as C.
Some features of C++14 are used; STL containers, RTTI, and C++ exceptions are not.
\section quick_start_initialization Initialization
At program startup:
-# Initialize Vulkan to have `VkPhysicalDevice`, `VkDevice` and `VkInstance` objects.
-# Fill VmaAllocatorCreateInfo structure and create #VmaAllocator object by
calling vmaCreateAllocator().
Only the members `physicalDevice`, `device`, `instance` are required.
However, you should inform the library which Vulkan version you use by setting
VmaAllocatorCreateInfo::vulkanApiVersion and which extensions you enabled
by setting VmaAllocatorCreateInfo::flags (like #VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT for VK_KHR_buffer_device_address).
Otherwise, VMA uses only the features of Vulkan 1.0 core with no extensions.
\subsection quick_start_initialization_selecting_vulkan_version Selecting Vulkan version
VMA supports Vulkan versions down to 1.0, for backward compatibility.
If you want to use a higher version, you need to inform the library about it.
This is a two-step process.
<b>Step 1: Compile time.</b> By default, VMA compiles with code supporting the highest
Vulkan version found in the included `<vulkan/vulkan.h>` that is also supported by the library.
If this is OK, you don't need to do anything.
However, if you want to compile VMA as if only some lower Vulkan version was available,
define macro `VMA_VULKAN_VERSION` before every `#include "vk_mem_alloc.h"`.
It should have a decimal numeric value in the form ABBBCCC, where A = major, BBB = minor, CCC = patch Vulkan version.
For example, to compile against Vulkan 1.2:
\code
#define VMA_VULKAN_VERSION 1002000 // Vulkan 1.2
#include "vk_mem_alloc.h"
\endcode
<b>Step 2: Runtime.</b> Even when compiled with higher Vulkan version available,
VMA can use only features of a lower version, which is configurable during creation of the #VmaAllocator object.
By default, only Vulkan 1.0 is used.
To initialize the allocator with support for higher Vulkan version, you need to set member
VmaAllocatorCreateInfo::vulkanApiVersion to an appropriate value, e.g. using constants like `VK_API_VERSION_1_2`.
See code sample below.
\subsection quick_start_initialization_importing_vulkan_functions Importing Vulkan functions
You may need to configure importing Vulkan functions. There are 3 ways to do this:
-# **If you link with Vulkan static library** (e.g. "vulkan-1.lib" on Windows):
- You don't need to do anything.
- VMA will use these, as macro `VMA_STATIC_VULKAN_FUNCTIONS` is defined to 1 by default.
-# **If you want VMA to fetch pointers to Vulkan functions dynamically** using `vkGetInstanceProcAddr`,
`vkGetDeviceProcAddr` (this is the option presented in the example below):
- Define `VMA_STATIC_VULKAN_FUNCTIONS` to 0, `VMA_DYNAMIC_VULKAN_FUNCTIONS` to 1.
- Provide pointers to these two functions via VmaVulkanFunctions::vkGetInstanceProcAddr,
VmaVulkanFunctions::vkGetDeviceProcAddr.
- The library will fetch pointers to all other functions it needs internally.
-# **If you fetch pointers to all Vulkan functions in a custom way**, e.g. using some loader like
[Volk](https://github.com/zeux/volk):
- Define `VMA_STATIC_VULKAN_FUNCTIONS` and `VMA_DYNAMIC_VULKAN_FUNCTIONS` to 0.
- Pass these pointers via structure #VmaVulkanFunctions.
Example for case 2:
\code
#define VMA_STATIC_VULKAN_FUNCTIONS 0
#define VMA_DYNAMIC_VULKAN_FUNCTIONS 1
#include "vk_mem_alloc.h"
...
VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetInstanceProcAddr = &vkGetInstanceProcAddr;
vulkanFunctions.vkGetDeviceProcAddr = &vkGetDeviceProcAddr;
VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
allocatorCreateInfo.physicalDevice = physicalDevice;
allocatorCreateInfo.device = device;
allocatorCreateInfo.instance = instance;
allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;
VmaAllocator allocator;
vmaCreateAllocator(&allocatorCreateInfo, &allocator);
\endcode
\section quick_start_resource_allocation Resource allocation
When you want to create a buffer or image:
-# Fill `VkBufferCreateInfo` / `VkImageCreateInfo` structure.
-# Fill VmaAllocationCreateInfo structure.
-# Call vmaCreateBuffer() / vmaCreateImage() to get `VkBuffer`/`VkImage` with memory
already allocated and bound to it, plus a #VmaAllocation object that represents its underlying memory.
\code
VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufferInfo.size = 65536;
bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
VmaAllocationCreateInfo allocInfo = {};
allocInfo.usage = VMA_MEMORY_USAGE_AUTO;
VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
\endcode
Don't forget to destroy your objects when no longer needed:
\code
vmaDestroyBuffer(allocator, buffer, allocation);
vmaDestroyAllocator(allocator);
\endcode
\page choosing_memory_type Choosing memory type
Physical devices in Vulkan support various combinations of memory heaps and
types. Help with choosing the correct and optimal memory type for your specific
resource is one of the key features of this library. You can use it by filling
appropriate members of the VmaAllocationCreateInfo structure, as described below.
You can also combine multiple methods.
-# If you just want to find a memory type index that meets your requirements, you
can use one of the functions vmaFindMemoryTypeIndexForBufferInfo(),
vmaFindMemoryTypeIndexForImageInfo(), vmaFindMemoryTypeIndex().
-# If you want to allocate a region of device memory without association with any
specific image or buffer, you can use the function vmaAllocateMemory(). Usage of
this function is not recommended and usually not needed.
The function vmaAllocateMemoryPages() is also provided for creating multiple allocations at once,
which may be useful for sparse binding.
-# If you already have a buffer or an image created, want to allocate memory
for it, and will bind it yourself, you can use vmaAllocateMemoryForBuffer()
or vmaAllocateMemoryForImage() - see the sketch below this list.
For binding you should use the functions vmaBindBufferMemory(), vmaBindImageMemory(),
or their extended versions vmaBindBufferMemory2(), vmaBindImageMemory2().
-# **This is the easiest and recommended way to use this library:**
If you want to create a buffer or an image, allocate memory for it and bind
them together, all in one call, you can use the functions vmaCreateBuffer() or
vmaCreateImage().
When using method 3 or 4, the library internally queries Vulkan for the memory types
supported for that buffer or image (using functions like `vkGetBufferMemoryRequirements()`)
and uses only one of these types.
If no memory type can be found that meets all the requirements, these functions
return `VK_ERROR_FEATURE_NOT_PRESENT`.
You can leave the VmaAllocationCreateInfo structure completely filled with zeros.
It means no requirements are specified for the memory type.
It is valid, although not very useful.
\section choosing_memory_type_usage Usage
The easiest way to specify memory requirements is to fill member
VmaAllocationCreateInfo::usage using one of the values of enum #VmaMemoryUsage.
It defines high-level, common usage types.
Since version 3 of the library, it is recommended to use #VMA_MEMORY_USAGE_AUTO to let it select the best memory type for your resource automatically.
For example, if you want to create a uniform buffer that will be filled via
transfer only once or infrequently and then used for rendering every frame, you can
do it using the following code. The buffer will most likely end up in a memory type with
`VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`, fast to access by the GPU.
\code
VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufferInfo.size = 65536;
bufferInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
VmaAllocationCreateInfo allocInfo = {};
allocInfo.usage = VMA_MEMORY_USAGE_AUTO;
VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
\endcode
If you have a preference for putting the resource in GPU (device) memory or CPU (host) memory,
on systems with a discrete graphics card where the two memories are separate, you can use
#VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE or #VMA_MEMORY_USAGE_AUTO_PREFER_HOST.
When using `VMA_MEMORY_USAGE_AUTO*` and you want to map the allocated memory,
you also need to specify one of the host access flags:
#VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
This helps the library decide on a preferred memory type and ensure it has `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`
so you can map it.
For example, a staging buffer that will be filled via mapped pointer and then
used as a source of transfer to the buffer described previously can be created like this.
It will likely end up in a memory type that is `HOST_VISIBLE` and `HOST_COHERENT`
but not `HOST_CACHED` (meaning uncached, write-combined) and not `DEVICE_LOCAL` (meaning system RAM).
\code
VkBufferCreateInfo stagingBufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
stagingBufferInfo.size = 65536;
stagingBufferInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
VmaAllocationCreateInfo stagingAllocInfo = {};
stagingAllocInfo.usage = VMA_MEMORY_USAGE_AUTO;
stagingAllocInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT;
VkBuffer stagingBuffer;
VmaAllocation stagingAllocation;
vmaCreateBuffer(allocator, &stagingBufferInfo, &stagingAllocInfo, &stagingBuffer, &stagingAllocation, nullptr);
\endcode
For more examples of creating different kinds of resources, see chapter \ref usage_patterns.
Usage values `VMA_MEMORY_USAGE_AUTO*` are legal to use only when the library knows
about the resource being created by having `VkBufferCreateInfo` / `VkImageCreateInfo` passed,
so they work with functions like: vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo() etc.
If you allocate raw memory using function vmaAllocateMemory(), you have to use other means of selecting
memory type, as described below.
\note
Old usage values (`VMA_MEMORY_USAGE_GPU_ONLY`, `VMA_MEMORY_USAGE_CPU_ONLY`,
`VMA_MEMORY_USAGE_CPU_TO_GPU`, `VMA_MEMORY_USAGE_GPU_TO_CPU`, `VMA_MEMORY_USAGE_CPU_COPY`)
are still available and work the same way as in previous versions of the library
for backward compatibility, but they are not recommended.
\section choosing_memory_type_required_preferred_flags Required and preferred flags
You can specify more detailed requirements by filling members
VmaAllocationCreateInfo::requiredFlags and VmaAllocationCreateInfo::preferredFlags
with a combination of bits from enum `VkMemoryPropertyFlags`. For example,
if you want to create a buffer that will be persistently mapped on the host (so it
must be `HOST_VISIBLE`) and preferably will also be `HOST_COHERENT` and `HOST_CACHED`,
use the following code:
\code
VmaAllocationCreateInfo allocInfo = {};
allocInfo.requiredFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
allocInfo.preferredFlags = VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
allocInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT | VMA_ALLOCATION_CREATE_MAPPED_BIT;
VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
\endcode
A memory type is chosen that has all the required flags and as many preferred
flags set as possible.
The value passed in VmaAllocationCreateInfo::usage is internally converted to a set of required and preferred flags,
plus some extra "magic" (heuristics).
\section choosing_memory_type_explicit_memory_types Explicit memory types
If you inspected memory types available on the physical device and you have
a preference for memory types that you want to use, you can fill member
VmaAllocationCreateInfo::memoryTypeBits. It is a bit mask, where each bit set
means that a memory type with that index is allowed to be used for the
allocation. The special value 0, just like `UINT32_MAX`, means there are no
restrictions on the memory type index.
Please note that this member is NOT just a memory type index.
Still, you can use it to choose just one specific memory type.
For example, if you already determined that your buffer should be created in
memory type 2, use the following code:
\code
uint32_t memoryTypeIndex = 2;
VmaAllocationCreateInfo allocInfo = {};
allocInfo.memoryTypeBits = 1u << memoryTypeIndex;
VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
\endcode
\section choosing_memory_type_custom_memory_pools Custom memory pools
If you allocate from a custom memory pool, none of the ways of specifying memory
requirements described above are applicable, and the aforementioned members
of the VmaAllocationCreateInfo structure are ignored. The memory type is selected
explicitly when creating the pool and then used to make all the allocations from
that pool. For further details, see \ref custom_memory_pools.
\section choosing_memory_type_dedicated_allocations Dedicated allocations
Memory for allocations is reserved out of a larger block of `VkDeviceMemory`
allocated from Vulkan internally. That is the main feature of this whole library.
You can still request a separate memory block to be created for an allocation,
just like you would do in a trivial solution without using any allocator.
In that case, a buffer or image is always bound to that memory at offset 0.
This is called a "dedicated allocation".
You can explicitly request it by using flag #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
The library can also internally decide to use a dedicated allocation in some cases, e.g.:
- When the size of the allocation is large.
- When the [VK_KHR_dedicated_allocation](@ref vk_khr_dedicated_allocation) extension is enabled
and it reports that a dedicated allocation is required or recommended for the resource.
- When allocation of the next big memory block fails due to insufficient device memory,
but an allocation with the exact requested size succeeds.
\page memory_mapping Memory mapping
To "map memory" in Vulkan means to obtain a CPU pointer to `VkDeviceMemory`,
to be able to read from it or write to it in CPU code.
Mapping is possible only for memory allocated from a memory type that has
the `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT` flag.
Functions `vkMapMemory()`, `vkUnmapMemory()` are designed for this purpose.
You can use them directly with memory allocated by this library,
but it is not recommended because of the following issue:
Mapping the same `VkDeviceMemory` block multiple times is illegal - only one mapping at a time is allowed.
This includes mapping disjoint regions. Mapping is not reference-counted internally by Vulkan.
Because of this, Vulkan Memory Allocator provides the following facilities:
\note If you want to be able to map an allocation, you need to specify one of the flags
#VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
in VmaAllocationCreateInfo::flags. These flags are required for an allocation to be mappable
when using #VMA_MEMORY_USAGE_AUTO or other `VMA_MEMORY_USAGE_AUTO*` enum values.
For other usage values they are ignored; every such allocation made in a `HOST_VISIBLE` memory type is mappable,
but the flags can still be used for consistency.
\section memory_mapping_mapping_functions Mapping functions
The library provides the following functions for mapping of a specific #VmaAllocation: vmaMapMemory() and vmaUnmapMemory().
They are safer and more convenient to use than standard Vulkan functions.
You can map an allocation multiple times simultaneously - mapping is reference-counted internally.
You can also map different allocations simultaneously regardless of whether they use the same `VkDeviceMemory` block.
The way it is implemented is that the library always maps the entire memory block, not just the region of the allocation.
For further details, see description of vmaMapMemory() function.
Example:
\code
// Having these objects initialized:
struct ConstantBuffer
{
...
};
ConstantBuffer constantBufferData = ...
VmaAllocator allocator = ...
VkBuffer constantBuffer = ...
VmaAllocation constantBufferAllocation = ...
// You can map and fill your buffer using the following code:
void* mappedData;
vmaMapMemory(allocator, constantBufferAllocation, &mappedData);
memcpy(mappedData, &constantBufferData, sizeof(constantBufferData));
vmaUnmapMemory(allocator, constantBufferAllocation);
\endcode
When mapping, you may see a warning from the Vulkan validation layer similar to this one:
<i>Mapping an image with layout VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL can result in undefined behavior if this memory is used by the device. Only GENERAL or PREINITIALIZED should be used.</i>
It happens because the library maps the entire `VkDeviceMemory` block, where different
types of images and buffers may end up together, especially on GPUs with unified memory like Intel's.
You can safely ignore it if you are sure you access only memory of the intended
object that you wanted to map.
\section memory_mapping_persistently_mapped_memory Persistently mapped memory
Keeping your memory persistently mapped is generally OK in Vulkan.
You don't need to unmap it before using its data on the GPU.
The library provides a special feature designed for that:
Allocations made with #VMA_ALLOCATION_CREATE_MAPPED_BIT flag set in
VmaAllocationCreateInfo::flags stay mapped all the time,
so you can just access a CPU pointer to it at any time
without needing to call any "map" or "unmap" function.
Example:
\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = sizeof(ConstantBuffer);
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
VMA_ALLOCATION_CREATE_MAPPED_BIT;
VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
// Buffer is already mapped. You can access its memory.
memcpy(allocInfo.pMappedData, &constantBufferData, sizeof(constantBufferData));
\endcode
\note #VMA_ALLOCATION_CREATE_MAPPED_BIT by itself doesn't guarantee that the allocation will end up
in a mappable memory type.
For this, you need to also specify #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or
#VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
#VMA_ALLOCATION_CREATE_MAPPED_BIT only guarantees that if the memory is `HOST_VISIBLE`, the allocation will be mapped on creation.
For an example of how to make use of this fact, see section \ref usage_patterns_advanced_data_uploading.
\section memory_mapping_cache_control Cache flush and invalidate
Memory in Vulkan doesn't need to be unmapped before using it on the GPU,
but unless a memory type has the `VK_MEMORY_PROPERTY_HOST_COHERENT_BIT` flag set,
you need to manually **invalidate** the cache before reading from a mapped pointer
and **flush** the cache after writing to a mapped pointer.
Map/unmap operations don't do that automatically.
Vulkan provides the functions `vkFlushMappedMemoryRanges()` and
`vkInvalidateMappedMemoryRanges()` for this purpose, but this library provides more convenient
functions that refer to a given allocation object: vmaFlushAllocation() and
vmaInvalidateAllocation(),
or to multiple objects at once: vmaFlushAllocations() and vmaInvalidateAllocations().
Regions of memory specified for flush/invalidate must be aligned to
`VkPhysicalDeviceLimits::nonCoherentAtomSize`. This is automatically ensured by the library.
In any memory type that is `HOST_VISIBLE` but not `HOST_COHERENT`, all allocations
within blocks are aligned to this value, so their offsets are always multiples of
`nonCoherentAtomSize` and two different allocations never share the same "line" of this size.
Also, Windows drivers from all 3 PC GPU vendors (AMD, Intel, NVIDIA)
currently provide the `HOST_COHERENT` flag on all memory types that are
`HOST_VISIBLE`, so on PC you may not need to bother.
\page staying_within_budget Staying within budget
When developing a graphics-intensive game or program, it is important to avoid allocating
more GPU memory than is physically available. When the memory is over-committed,
various bad things can happen, depending on the specific GPU, graphics driver, and
operating system:
- It may just work without any problems.
- The application may slow down because some memory blocks are moved to system RAM
and the GPU has to access them through PCI Express bus.
- A new allocation may take a very long time to complete, even a few seconds, and possibly
freeze the entire system.
- The new allocation may fail with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
- It may even result in a GPU crash (TDR), observed as `VK_ERROR_DEVICE_LOST`
returned somewhere later.
\section staying_within_budget_querying_for_budget Querying for budget
To query for current memory usage and available budget, use function vmaGetHeapBudgets().
The returned structure #VmaBudget contains quantities expressed in bytes, per Vulkan memory heap.
Please note that this function returns different information and works faster than
vmaCalculateStatistics(). vmaGetHeapBudgets() can be called every frame or even before every
allocation, while vmaCalculateStatistics() is intended to be used rarely,
only to obtain statistical information, e.g. for debugging purposes.
It is recommended to use the <b>VK_EXT_memory_budget</b> device extension to obtain information
about the budget from the Vulkan device. VMA is able to use this extension automatically.
When not enabled, the allocator behaves the same way, but then it estimates current usage
and available budget based on its internal information and Vulkan memory heap sizes,
which may be less precise. In order to use this extension:
1. Make sure the extensions VK_EXT_memory_budget and VK_KHR_get_physical_device_properties2
required by it are available, and enable them. Please note that the first is a device
extension and the second is an instance extension!
2. Use flag #VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT when creating #VmaAllocator object.
3. Make sure to call vmaSetCurrentFrameIndex() every frame. Budget is queried from
Vulkan inside of it to avoid overhead of querying it with every allocation.
\section staying_within_budget_controlling_memory_usage Controlling memory usage
There are many ways in which you can try to stay within the budget.
First, when making a new allocation requires allocating a new memory block, the library
tries not to exceed the budget automatically. If a block with default recommended size
(e.g. 256 MB) would go over budget, a smaller block is allocated, possibly even
dedicated memory for just this resource.
If the size of the requested resource plus current memory usage is more than the
budget, by default the library still tries to create it, leaving it to the Vulkan
implementation whether the allocation succeeds or fails. You can change this behavior
by using #VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT flag. With it, the allocation is
not made if it would exceed the budget or if the budget is already exceeded.
VMA then tries to make the allocation from the next eligible Vulkan memory type.
If all of them fail, the call fails with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
An example usage pattern may be to pass the #VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT flag
when creating resources that are not essential for the application (e.g. the texture
of a specific object) and not to pass it when creating critically important resources
(e.g. render targets).
On AMD graphics cards there is a custom vendor extension available, <b>VK_AMD_memory_overallocation_behavior</b>,
that allows controlling the behavior of the Vulkan implementation in out-of-memory cases -
whether it should fail with an error code or still allow the allocation.
Usage of this extension involves only passing extra structure on Vulkan device creation,
so it is out of scope of this library.
Finally, you can also use the #VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT flag to make sure
a new allocation is created only when it fits inside one of the existing memory blocks.
If it would require allocating a new block, it fails instead with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
This also ensures that the function call is very fast because it never goes to Vulkan
to obtain a new block.
\note Creating \ref custom_memory_pools with VmaPoolCreateInfo::minBlockCount
set to more than 0 will currently try to allocate memory blocks without checking whether they
fit within budget.
\page resource_aliasing Resource aliasing (overlap)
New explicit graphics APIs (Vulkan and Direct3D 12), thanks to manual memory
management, give an opportunity to alias (overlap) multiple resources in the
same region of memory - a feature not available in the old APIs (Direct3D 11, OpenGL).
It can be useful to save video memory, but it must be used with caution.
For example, if you know the flow of your whole render frame in advance, you
are going to use some intermediate textures or buffers only during a small range of render passes,
and you know these ranges don't overlap in time, you can bind these resources to
the same place in memory, even if they have completely different parameters (width, height, format etc.).

Such a scenario is possible using VMA, but you need to create your images manually.
Then you need to calculate the parameters of the allocation to be made:
- allocation size = max(size of each image)
- allocation alignment = max(alignment of each image)
- allocation memoryTypeBits = bitwise AND(memoryTypeBits of each image)
The following example shows two different images bound to the same place in memory,
with the allocation sized to fit the larger of them.
\code
// A 512x512 texture to be sampled.
VkImageCreateInfo img1CreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
img1CreateInfo.imageType = VK_IMAGE_TYPE_2D;
img1CreateInfo.extent.width = 512;
img1CreateInfo.extent.height = 512;
img1CreateInfo.extent.depth = 1;
img1CreateInfo.mipLevels = 10;
img1CreateInfo.arrayLayers = 1;
img1CreateInfo.format = VK_FORMAT_R8G8B8A8_SRGB;
img1CreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
img1CreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
img1CreateInfo.usage = VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
img1CreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
// A full screen texture to be used as color attachment.
VkImageCreateInfo img2CreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
img2CreateInfo.imageType = VK_IMAGE_TYPE_2D;
img2CreateInfo.extent.width = 1920;
img2CreateInfo.extent.height = 1080;
img2CreateInfo.extent.depth = 1;
img2CreateInfo.mipLevels = 1;
img2CreateInfo.arrayLayers = 1;
img2CreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
img2CreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
img2CreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
img2CreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
img2CreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
VkImage img1;
res = vkCreateImage(device, &img1CreateInfo, nullptr, &img1);
VkImage img2;
res = vkCreateImage(device, &img2CreateInfo, nullptr, &img2);
VkMemoryRequirements img1MemReq;
vkGetImageMemoryRequirements(device, img1, &img1MemReq);
VkMemoryRequirements img2MemReq;
vkGetImageMemoryRequirements(device, img2, &img2MemReq);
VkMemoryRequirements finalMemReq = {};
finalMemReq.size = std::max(img1MemReq.size, img2MemReq.size);
finalMemReq.alignment = std::max(img1MemReq.alignment, img2MemReq.alignment);
finalMemReq.memoryTypeBits = img1MemReq.memoryTypeBits & img2MemReq.memoryTypeBits;
// Validate that finalMemReq.memoryTypeBits != 0 - otherwise the images cannot alias.
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.preferredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
VmaAllocation alloc;
res = vmaAllocateMemory(allocator, &finalMemReq, &allocCreateInfo, &alloc, nullptr);
res = vmaBindImageMemory(allocator, alloc, img1);
res = vmaBindImageMemory(allocator, alloc, img2);
// You can use img1, img2 here, but not at the same time!
vmaFreeMemory(allocator, alloc);
vkDestroyImage(device, img2, nullptr);
vkDestroyImage(device, img1, nullptr);
\endcode
VMA also provides convenience functions that create a buffer or image and bind it to memory
represented by an existing #VmaAllocation:
vmaCreateAliasingBuffer(), vmaCreateAliasingBuffer2(),
vmaCreateAliasingImage(), vmaCreateAliasingImage2().
Versions with "2" offer additional parameter `allocationLocalOffset`.
Remember that using resources that alias in memory requires proper synchronization.
You need to issue a memory barrier to make sure commands that use `img1` and `img2`
don't overlap on the GPU timeline.
You also need to treat a resource after aliasing as uninitialized - containing garbage data.
For example, if you use `img1` and then want to use `img2`, you need to issue
an image memory barrier for `img2` with `oldLayout` = `VK_IMAGE_LAYOUT_UNDEFINED`.
Additional considerations:
- Vulkan also allows interpreting the contents of memory between aliasing resources consistently in some cases.
See chapter 11.8 "Memory Aliasing" of the Vulkan specification or the `VK_IMAGE_CREATE_ALIAS_BIT` flag.
- You can create a more complex layout where different images and buffers are bound
at different offsets inside one large allocation. For example, one can imagine
a big texture used in some render passes, aliasing with a set of many small buffers
used in some further passes. To bind a resource at a non-zero offset in an allocation,
use vmaBindBufferMemory2() / vmaBindImageMemory2().
- Before allocating memory for the resources you want to alias, check `memoryTypeBits`
returned in memory requirements of each resource to make sure the bits overlap.
Some GPUs may expose multiple memory types suitable e.g. only for buffers or
images with `COLOR_ATTACHMENT` usage, so the sets of memory types supported by your
resources may be disjoint. Aliasing them is not possible in that case.
\page custom_memory_pools Custom memory pools
A memory pool contains a number of `VkDeviceMemory` blocks.
The library automatically creates and manages a default pool for each memory type available on the device.
A default memory pool automatically grows in size.
The size of allocated blocks is also variable and managed automatically.
You can create a custom pool and allocate memory out of it.
It can be useful if you want to:
- Keep certain kind of allocations separate from others.
- Enforce particular, fixed size of Vulkan memory blocks.
- Limit maximum amount of Vulkan memory allocated for that pool.
- Reserve minimum or fixed amount of Vulkan memory always preallocated for that pool.
- Use extra parameters for a set of your allocations that are available in #VmaPoolCreateInfo but not in
#VmaAllocationCreateInfo - e.g., custom minimum alignment, custom `pNext` chain.
- Perform defragmentation on a specific subset of your allocations.
To use custom memory pools:
-# Fill VmaPoolCreateInfo structure.
-# Call vmaCreatePool() to obtain #VmaPool handle.
-# When making an allocation, set VmaAllocationCreateInfo::pool to this handle.
You don't need to specify any other parameters of this structure, like `usage`.
Example:
\code
// Find memoryTypeIndex for the pool.
VkBufferCreateInfo sampleBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
sampleBufCreateInfo.size = 0x10000; // Doesn't matter.
sampleBufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
VmaAllocationCreateInfo sampleAllocCreateInfo = {};
sampleAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
uint32_t memTypeIndex;
VkResult res = vmaFindMemoryTypeIndexForBufferInfo(allocator,
&sampleBufCreateInfo, &sampleAllocCreateInfo, &memTypeIndex);
// Check res...
// Create a pool that can have at most 2 blocks, 128 MiB each.
VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
poolCreateInfo.blockSize = 128ull * 1024 * 1024;
poolCreateInfo.maxBlockCount = 2;
VmaPool pool;
res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
// Check res...
// Allocate a buffer out of it.
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 1024;
bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.pool = pool;
VkBuffer buf;
VmaAllocation alloc;
res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
// Check res...
\endcode
You have to free all allocations made from this pool before destroying it.
\code
vmaDestroyBuffer(allocator, buf, alloc);
vmaDestroyPool(allocator, pool);
\endcode
New versions of this library support creating dedicated allocations in custom pools.
It is supported only when VmaPoolCreateInfo::blockSize = 0.
To use this feature, set VmaAllocationCreateInfo::pool to the pointer to your custom pool and
VmaAllocationCreateInfo::flags to #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
\note Excessive use of custom pools is a common mistake when using this library.
Custom pools may be useful for special purposes - when you want to
keep certain types of resources separate, e.g. to reserve a minimum amount of memory
for them or limit the maximum amount of memory they can occupy. For most
resources this is not needed and so it is not recommended to create #VmaPool
objects and allocations out of them. Allocating from the default pool is sufficient.
\section custom_memory_pools_MemTypeIndex Choosing memory type index
When creating a pool, you must explicitly specify a memory type index.
To find the one suitable for your buffers or images, you can use the helper functions
vmaFindMemoryTypeIndexForBufferInfo() and vmaFindMemoryTypeIndexForImageInfo().
You need to provide structures with example parameters of the buffers or images
that you are going to create in that pool.
\code
VkBufferCreateInfo exampleBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
exampleBufCreateInfo.size = 1024; // Doesn't matter
exampleBufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
uint32_t memTypeIndex;
vmaFindMemoryTypeIndexForBufferInfo(allocator, &exampleBufCreateInfo, &allocCreateInfo, &memTypeIndex);
VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
// ...
\endcode
When creating buffers/images allocated in that pool, provide the following parameters:
- `VkBufferCreateInfo`: Prefer to pass the same parameters as above.
Otherwise you risk creating resources in a memory type that is not suitable for them, which may result in undefined behavior.
Using different `VK_BUFFER_USAGE_` flags may work, but you shouldn't create images in a pool intended for buffers
or the other way around.
- VmaAllocationCreateInfo: You don't need to pass the same parameters. Fill only the `pool` member.
Other members are ignored anyway.
\section linear_algorithm Linear allocation algorithm
Each Vulkan memory block managed by this library has accompanying metadata that
keeps track of used and unused regions. By default, the metadata structure and
algorithm try to find the best place for new allocations among free regions to
optimize memory usage. This way you can allocate and free objects in any order.

Sometimes there is a need for a simpler, linear allocation algorithm. You can
create a custom pool that uses such an algorithm by adding flag
#VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT to VmaPoolCreateInfo::flags while creating
the #VmaPool object. An alternative metadata management is then used. It always
creates new allocations after the last one and doesn't reuse free regions left by
allocations freed in the middle. This results in better allocation performance and
less memory consumed by metadata.

With this one flag, you can create a custom pool that can be used in many ways:
free-at-once, stack, double stack, and ring buffer. See below for details.
You don't need to specify explicitly which of these options you are going to use - it is detected automatically.
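For example, a minimal sketch of creating such a pool (assuming `memTypeIndex` was found as described in the previous section):
\code
VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
poolCreateInfo.flags = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT;
poolCreateInfo.blockSize = 64ull * 1024 * 1024; // E.g. one 64 MiB block.
poolCreateInfo.maxBlockCount = 1; // Required if you want to use double stack or ring buffer.

VmaPool linearPool;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &linearPool);
\endcode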
\subsection linear_algorithm_free_at_once Free-at-once
In a pool that uses linear algorithm, you still need to free all the allocations
individually, e.g. by using vmaFreeMemory() or vmaDestroyBuffer(). You can free
them in any order. New allocations are always made after the last one - free space
in the middle is not reused. However, when you release all the allocations and
the pool becomes empty, allocation starts from the beginning again. This way you
can use linear algorithm to speed up creation of allocations that you are going
to release all at once.

This mode is also available for pools created with VmaPoolCreateInfo::maxBlockCount
value that allows multiple memory blocks.
\subsection linear_algorithm_stack Stack
When you free an allocation that was created last, its space can be reused.
Thanks to this, if you always release allocations in the order opposite to their
creation (LIFO - Last In First Out), you can achieve behavior of a stack.

This mode is also available for pools created with VmaPoolCreateInfo::maxBlockCount
value that allows multiple memory blocks.
\subsection linear_algorithm_double_stack Double stack
The space reserved by a custom pool with linear algorithm may be used by two
stacks:
- The first, default one, growing up from offset 0.
- The second, "upper" one, growing down from the end towards lower offsets.
To make allocation from the upper stack, add flag #VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT
to VmaAllocationCreateInfo::flags.
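For example, a minimal sketch (assuming `linearPool` is such a custom pool and `bufCreateInfo` is a filled `VkBufferCreateInfo`):
\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.pool = linearPool;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT; // Allocate from the upper stack.

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
\endcode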

Double stack is available only in pools with one memory block -
VmaPoolCreateInfo::maxBlockCount must be 1. Otherwise behavior is undefined.
When the two stacks' ends meet so there is not enough space between them for a
new allocation, such allocation fails with usual
`VK_ERROR_OUT_OF_DEVICE_MEMORY` error.
\subsection linear_algorithm_ring_buffer Ring buffer
When you free some allocations from the beginning and there is not enough free space
for a new one at the end of a pool, allocator's "cursor" wraps around to the
beginning and starts allocation there. Thanks to this, if you always release
allocations in the same order as you created them (FIFO - First In First Out),
you can achieve behavior of a ring buffer / queue.

Ring buffer is available only in pools with one memory block -
VmaPoolCreateInfo::maxBlockCount must be 1. Otherwise behavior is undefined.
\note \ref defragmentation is not supported in custom pools created with #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT.
\page defragmentation Defragmentation
Interleaved allocations and deallocations of many objects of varying size can
cause fragmentation over time, which can lead to a situation where the library is unable
to find a continuous range of free memory for a new allocation even though there is
enough free space in total, just scattered across many small free ranges between existing
allocations.
To mitigate this problem, you can use defragmentation feature.
It doesn't happen automatically though and needs your cooperation,
because VMA is a low level library that only allocates memory.
It cannot recreate buffers and images in a new place as it doesn't remember the contents of `VkBufferCreateInfo` / `VkImageCreateInfo` structures.
It cannot copy their contents as it doesn't record any commands to a command buffer.
Example:
\code
VmaDefragmentationInfo defragInfo = {};
defragInfo.pool = myPool;
defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT;

VmaDefragmentationContext defragCtx;
VkResult res = vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
// Check res...

for(;;)
{
    VmaDefragmentationPassMoveInfo pass;
    res = vmaBeginDefragmentationPass(allocator, defragCtx, &pass);
    if(res == VK_SUCCESS)
        break;
    else if(res != VK_INCOMPLETE)
    {
        // Handle error...
    }

    for(uint32_t i = 0; i < pass.moveCount; ++i)
    {
        // Inspect pass.pMoves[i].srcAllocation, identify what buffer/image it represents.
        VmaAllocationInfo allocInfo;
        vmaGetAllocationInfo(allocator, pass.pMoves[i].srcAllocation, &allocInfo);
        MyEngineResourceData* resData = (MyEngineResourceData*)allocInfo.pUserData;

        // Recreate and bind this buffer/image at: pass.pMoves[i].dstMemory, pass.pMoves[i].dstOffset.
        VkImageCreateInfo imgCreateInfo = ...
        VkImage newImg;
        res = vkCreateImage(device, &imgCreateInfo, nullptr, &newImg);
        // Check res...
        res = vmaBindImageMemory(allocator, pass.pMoves[i].dstTmpAllocation, newImg);
        // Check res...

        // Issue a vkCmdCopyBuffer/vkCmdCopyImage to copy its content to the new place.
        vkCmdCopyImage(cmdBuf, resData->img, ..., newImg, ...);
    }

    // Make sure the copy commands finished executing.
    vkWaitForFences(...);

    // Destroy old buffers/images bound with pass.pMoves[i].srcAllocation.
    for(uint32_t i = 0; i < pass.moveCount; ++i)
    {
        // ...
        vkDestroyImage(device, resData->img, nullptr);
    }

    // Update appropriate descriptors to point to the new places...

    res = vmaEndDefragmentationPass(allocator, defragCtx, &pass);
    if(res == VK_SUCCESS)
        break;
    else if(res != VK_INCOMPLETE)
    {
        // Handle error...
    }
}

vmaEndDefragmentation(allocator, defragCtx, nullptr);
\endcode
Although functions like vmaCreateBuffer(), vmaCreateImage(), vmaDestroyBuffer(), vmaDestroyImage()
create/destroy an allocation and a buffer/image at once, these are just a shortcut for
creating the resource, allocating memory, and binding them together.
Defragmentation works on memory allocations only. You must handle the rest manually.
Defragmentation is an iterative process that should repeat "passes" as long as related functions
return `VK_INCOMPLETE` rather than `VK_SUCCESS`.
In each pass:
1. vmaBeginDefragmentationPass() function call:
- Calculates and returns the list of allocations to be moved in this pass.
Note this can be a time-consuming process.
- Reserves destination memory for them by creating temporary destination allocations
that you can query for their `VkDeviceMemory` + offset using vmaGetAllocationInfo().
2. Inside the pass, **you should**:
- Inspect the returned list of allocations to be moved.
- Create new buffers/images and bind them at the returned destination temporary allocations.
- Copy data from source to destination resources if necessary.
- Destroy the source buffers/images, but NOT their allocations.
3. vmaEndDefragmentationPass() function call:
- Frees the source memory reserved for the allocations that are moved.
- Modifies source #VmaAllocation objects that are moved to point to the destination reserved memory.
- Frees `VkDeviceMemory` blocks that became empty.
Unlike in previous iterations of the defragmentation API, there is no list of "movable" allocations passed as a parameter.
Defragmentation algorithm tries to move all suitable allocations.
You can, however, refuse to move some of them inside a defragmentation pass, by setting
`pass.pMoves[i].operation` to #VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE.
This is not recommended and may result in suboptimal packing of the allocations after defragmentation.
If you cannot ensure any allocation can be moved, it is better to keep movable allocations separate in a custom pool.
Inside a pass, for each allocation that should be moved:
- You should copy its data from the source to the destination place by calling e.g. `vkCmdCopyBuffer()`, `vkCmdCopyImage()`.
- You need to make sure these commands finished executing before destroying the source buffers/images and before calling vmaEndDefragmentationPass().
- If a resource doesn't contain any meaningful data, e.g. it is a transient color attachment image to be cleared,
filled, and used temporarily in each rendering frame, you can just recreate this image
without copying its data.
- If the resource is in `HOST_VISIBLE` and `HOST_CACHED` memory, you can copy its data on the CPU
using `memcpy()`.
- If you cannot move the allocation, you can set `pass.pMoves[i].operation` to #VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE.
This will cancel the move.
- vmaEndDefragmentationPass() will then free the destination memory
not the source memory of the allocation, leaving it unchanged.
- If you decide the allocation is unimportant and can be destroyed instead of moved (e.g. it wasn't used for a long time),
you can set `pass.pMoves[i].operation` to #VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY.
- vmaEndDefragmentationPass() will then free both source and destination memory, and will destroy the source #VmaAllocation object.
You can defragment a specific custom pool by setting VmaDefragmentationInfo::pool
(like in the example above) or all the default pools by setting this member to null.
Defragmentation is always performed in each pool separately.
Allocations are never moved between different Vulkan memory types.
The size of the destination memory reserved for a moved allocation is the same as the original one.
Alignment of an allocation as it was determined using `vkGetBufferMemoryRequirements()` etc. is also respected after defragmentation.
Buffers/images should be recreated with the same `VkBufferCreateInfo` / `VkImageCreateInfo` parameters as the original ones.
You can perform the defragmentation incrementally to limit the number of allocations and bytes to be moved
in each pass, e.g. to call it in sync with render frames and avoid large hitches.
See members: VmaDefragmentationInfo::maxBytesPerPass, VmaDefragmentationInfo::maxAllocationsPerPass.
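For example, a minimal sketch of such an incremental setup (the specific limits are arbitrary):
\code
VmaDefragmentationInfo defragInfo = {};
defragInfo.pool = myPool;
defragInfo.maxBytesPerPass = 16ull * 1024 * 1024; // Move at most 16 MiB per pass.
defragInfo.maxAllocationsPerPass = 64; // Move at most 64 allocations per pass.
\endcode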
It is also safe to perform the defragmentation asynchronously to render frames and other Vulkan and VMA
usage, possibly from multiple threads, with the exception that allocations
returned in VmaDefragmentationPassMoveInfo::pMoves shouldn't be destroyed until the defragmentation pass is ended.
<b>Mapping</b> is preserved on allocations that are moved during defragmentation.
Whether through #VMA_ALLOCATION_CREATE_MAPPED_BIT or vmaMapMemory(), the allocations
are mapped at their new place. Of course, the pointer to the mapped data changes, so it needs to be queried
using VmaAllocationInfo::pMappedData.
\note Defragmentation is not supported in custom pools created with #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT.
\page statistics Statistics
This library contains several functions that return information about its internal state,
especially the amount of memory allocated from Vulkan.
\section statistics_numeric_statistics Numeric statistics
If you need to obtain basic statistics about memory usage per heap, together with current budget,
you can call function vmaGetHeapBudgets() and inspect structure #VmaBudget.
This is useful to keep track of memory usage and stay within budget
(see also \ref staying_within_budget).
Example:
\code
uint32_t heapIndex = ...

VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
vmaGetHeapBudgets(allocator, budgets);

printf("My heap currently has %u allocations taking %llu B,\n",
    budgets[heapIndex].statistics.allocationCount,
    budgets[heapIndex].statistics.allocationBytes);
printf("allocated out of %u Vulkan device memory blocks taking %llu B,\n",
    budgets[heapIndex].statistics.blockCount,
    budgets[heapIndex].statistics.blockBytes);
printf("Vulkan reports total usage %llu B with budget %llu B.\n",
    budgets[heapIndex].usage,
    budgets[heapIndex].budget);
\endcode
You can query for more detailed statistics per memory heap, type, and totals,
including minimum and maximum allocation size and unused range size,
by calling function vmaCalculateStatistics() and inspecting structure #VmaTotalStatistics.
This function is slower though, as it has to traverse all the internal data structures,
so it should be used only for debugging purposes.
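For example, a minimal sketch of printing the totals:
\code
VmaTotalStatistics stats;
vmaCalculateStatistics(allocator, &stats);
printf("Total: %u allocations taking %llu B, in %u memory blocks taking %llu B.\n",
    stats.total.statistics.allocationCount,
    stats.total.statistics.allocationBytes,
    stats.total.statistics.blockCount,
    stats.total.statistics.blockBytes);
\endcode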
You can query for statistics of a custom pool using function vmaGetPoolStatistics()
or vmaCalculatePoolStatistics().
You can query for information about a specific allocation using function vmaGetAllocationInfo().
It fills the structure #VmaAllocationInfo.
\section statistics_json_dump JSON dump
You can dump internal state of the allocator to a string in JSON format using function vmaBuildStatsString().
The result is guaranteed to be correct JSON.
It uses ANSI encoding.
Any strings provided by the user (see [Allocation names](@ref allocation_names))
are copied as-is and properly escaped for JSON, so if they use UTF-8, ISO-8859-2, or any other encoding,
this JSON string can be treated as using the same encoding.
It must be freed using function vmaFreeStatsString().
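For example, a minimal sketch:
\code
char* statsString = nullptr;
vmaBuildStatsString(allocator, &statsString, VK_TRUE); // VK_TRUE = include a detailed map of memory blocks.
// Write statsString to a file or log...
vmaFreeStatsString(allocator, statsString);
\endcode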
The format of this JSON string is not part of official documentation of the library,
but it will not change in backward-incompatible way without increasing library major version number
and appropriate mention in changelog.
The JSON string contains all the data that can be obtained using vmaCalculateStatistics().
It can also contain detailed map of allocated memory blocks and their regions -
free and occupied by allocations.
This allows e.g. to visualize the memory or assess fragmentation.
\page allocation_annotation Allocation names and user data
\section allocation_user_data Allocation user data
You can annotate allocations with your own information, e.g. for debugging purposes.
To do that, fill VmaAllocationCreateInfo::pUserData field when creating
an allocation. It is an opaque `void*` value. You can use it e.g. as a pointer,
a handle, an index, a key, an ordinal number, or any other value that associates
the allocation with your custom metadata.
It is useful to identify appropriate data structures in your engine given #VmaAllocation,
e.g. when doing \ref defragmentation.
\code
VkBufferCreateInfo bufCreateInfo = ...
MyBufferMetadata* pMetadata = CreateBufferMetadata();
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.pUserData = pMetadata;
VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buffer, &allocation, nullptr);
\endcode
The pointer may be later retrieved as VmaAllocationInfo::pUserData:
\code
VmaAllocationInfo allocInfo;
vmaGetAllocationInfo(allocator, allocation, &allocInfo);
MyBufferMetadata* pMetadata = (MyBufferMetadata*)allocInfo.pUserData;
\endcode
It can also be changed using function vmaSetAllocationUserData().
The (non-zero) `pUserData` values of allocations are printed in hexadecimal form in the JSON report created by
vmaBuildStatsString().
\section allocation_names Allocation names
An allocation can also carry a null-terminated string, giving a name to the allocation.
To set it, call vmaSetAllocationName().
The library creates an internal copy of the string, so the pointer you pass doesn't need
to stay valid for the whole lifetime of the allocation. You can free it after the call.
\code
std::string imageName = "Texture: ";
imageName += fileName;
vmaSetAllocationName(allocator, allocation, imageName.c_str());
\endcode
The string can be later retrieved by inspecting VmaAllocationInfo::pName.
It is also printed in JSON report created by vmaBuildStatsString().
\note Setting a string name on a VMA allocation doesn't automatically set it on the Vulkan buffer or image created with it.
You must do that manually using an extension like VK_EXT_debug_utils, which is independent of this library.
\page virtual_allocator Virtual allocator
As an extra feature, the core allocation algorithm of the library is exposed through a simple and convenient API of "virtual allocator".
It doesn't allocate any real GPU memory. It just keeps track of used and free regions of a "virtual block".
You can use it to allocate your own memory or other objects, even completely unrelated to Vulkan.
A common use case is sub-allocation of pieces of one large GPU buffer.
\section virtual_allocator_creating_virtual_block Creating virtual block
To use this functionality, there is no main "allocator" object.
You don't need to have #VmaAllocator object created.
All you need to do is to create a separate #VmaVirtualBlock object for each block of memory you want to be managed by the allocator:
-# Fill in #VmaVirtualBlockCreateInfo structure.
-# Call vmaCreateVirtualBlock(). Get new #VmaVirtualBlock object.
Example:
\code
VmaVirtualBlockCreateInfo blockCreateInfo = {};
blockCreateInfo.size = 1048576; // 1 MiB
VmaVirtualBlock block;
VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);
\endcode
\section virtual_allocator_making_virtual_allocations Making virtual allocations
A #VmaVirtualBlock object contains an internal data structure that keeps track of free and occupied regions
using the same code as the main Vulkan memory allocator.
Similarly to #VmaAllocation for standard GPU allocations, there is #VmaVirtualAllocation type
that represents an opaque handle to an allocation within the virtual block.
In order to make such an allocation:
-# Fill in #VmaVirtualAllocationCreateInfo structure.
-# Call vmaVirtualAllocate(). Get new #VmaVirtualAllocation object that represents the allocation.
You can also receive `VkDeviceSize offset` that was assigned to the allocation.
Example:
\code
VmaVirtualAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.size = 4096; // 4 KiB
VmaVirtualAllocation alloc;
VkDeviceSize offset;
res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
if(res == VK_SUCCESS)
{
    // Use the 4 KiB of your memory starting at offset.
}
else
{
    // Allocation failed - no space for it could be found. Handle this error!
}
\endcode
\section virtual_allocator_deallocation Deallocation
When no longer needed, an allocation can be freed by calling vmaVirtualFree().
You can only pass to this function an allocation that was previously returned by vmaVirtualAllocate()
called for the same #VmaVirtualBlock.
When the whole block is no longer needed, the block object can be released by calling vmaDestroyVirtualBlock().
All allocations must be freed before the block is destroyed, which is checked internally by an assert.
However, if you don't want to call vmaVirtualFree() for each allocation, you can use vmaClearVirtualBlock() to free them all at once -
a feature not available in normal Vulkan memory allocator. Example:
\code
vmaVirtualFree(block, alloc);
vmaDestroyVirtualBlock(block);
\endcode
\section virtual_allocator_allocation_parameters Allocation parameters
You can attach a custom pointer to each allocation by using vmaSetVirtualAllocationUserData().
Its default value is null.
It can be used to store any data that needs to be associated with that allocation - e.g. an index, a handle, or a pointer to some
larger data structure containing more information. Example:
\code
struct CustomAllocData
{
    std::string m_AllocName;
};
CustomAllocData* allocData = new CustomAllocData();
allocData->m_AllocName = "My allocation 1";
vmaSetVirtualAllocationUserData(block, alloc, allocData);
\endcode
The pointer can later be fetched, along with allocation offset and size, by passing the allocation handle to function
vmaGetVirtualAllocationInfo() and inspecting returned structure #VmaVirtualAllocationInfo.
If you allocated a new object to be used as the custom pointer, don't forget to delete that object before freeing the allocation!
Example:
\code
VmaVirtualAllocationInfo allocInfo;
vmaGetVirtualAllocationInfo(block, alloc, &allocInfo);
delete (CustomAllocData*)allocInfo.pUserData;
vmaVirtualFree(block, alloc);
\endcode
\section virtual_allocator_alignment_and_units Alignment and units
It feels natural to express sizes and offsets in bytes.
If an offset of an allocation needs to be aligned to a multiple of some number (e.g. 4 bytes), you can fill the optional member
VmaVirtualAllocationCreateInfo::alignment to request it. Example:
\code
VmaVirtualAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.size = 4096; // 4 KiB
allocCreateInfo.alignment = 4; // Returned offset must be a multiple of 4 B
VmaVirtualAllocation alloc;
res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, nullptr);
\endcode
Alignments of different allocations made from one block may vary.
However, if all alignments and sizes are always a multiple of some common size, e.g. 4 B or `sizeof(MyDataStruct)`,
you can express all sizes, alignments, and offsets in multiples of that size instead of individual bytes.
This might be more convenient, but you need to make sure you use the new unit consistently in all the places:
- VmaVirtualBlockCreateInfo::size
- VmaVirtualAllocationCreateInfo::size and VmaVirtualAllocationCreateInfo::alignment
- Using offset returned by vmaVirtualAllocate() or in VmaVirtualAllocationInfo::offset
\section virtual_allocator_statistics Statistics
You can obtain statistics of a virtual block using vmaGetVirtualBlockStatistics()
(to get brief statistics that are fast to calculate)
or vmaCalculateVirtualBlockStatistics() (to get more detailed statistics, slower to calculate).
The functions fill structures #VmaStatistics, #VmaDetailedStatistics respectively - same as used by the normal Vulkan memory allocator.
Example:
\code
VmaStatistics stats;
vmaGetVirtualBlockStatistics(block, &stats);
printf("My virtual block has %llu bytes used by %u virtual allocations\n",
stats.allocationBytes, stats.allocationCount);
\endcode
You can also request a full list of allocations and free regions as a string in JSON format by calling
vmaBuildVirtualBlockStatsString().
Returned string must be later freed using vmaFreeVirtualBlockStatsString().
The format of this string differs from the one returned by the main Vulkan allocator, but it is similar.
\section virtual_allocator_additional_considerations Additional considerations
The "virtual allocator" functionality is implemented on a level of individual memory blocks.
Keeping track of a whole collection of blocks, allocating new ones when out of free space,
deleting empty ones, and deciding which one to try first for a new allocation must be implemented by the user.
Alternative allocation algorithms are supported, just like in custom pools of the real GPU memory.
See enum #VmaVirtualBlockCreateFlagBits to learn how to specify them (e.g. #VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT).
You can find their description in chapter \ref custom_memory_pools.
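For example, a minimal sketch of a virtual block using the linear algorithm:
\code
VmaVirtualBlockCreateInfo blockCreateInfo = {};
blockCreateInfo.size = 1048576; // 1 MiB
blockCreateInfo.flags = VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT;

VmaVirtualBlock block;
VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);
\endcode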
Allocation strategies are also supported.
See enum #VmaVirtualAllocationCreateFlagBits to learn how to specify them (e.g. #VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT).
The following features are supported only by the allocator of the real GPU memory and not by virtual allocations:
buffer-image granularity, `VMA_DEBUG_MARGIN`, `VMA_MIN_ALIGNMENT`.
\page debugging_memory_usage Debugging incorrect memory usage
If you suspect a bug with memory usage, like usage of uninitialized memory or
memory being overwritten out of bounds of an allocation,
you can use debug features of this library to verify this.
\section debugging_memory_usage_initialization Memory initialization
If you experience a bug with incorrect and nondeterministic data in your program and you suspect uninitialized memory to be used,
you can enable automatic memory initialization to verify this.
To do it, define macro `VMA_DEBUG_INITIALIZE_ALLOCATIONS` to 1.
\code
#define VMA_DEBUG_INITIALIZE_ALLOCATIONS 1
#include "vk_mem_alloc.h"
\endcode
It makes memory of new allocations initialized to bit pattern `0xDCDCDCDC`.
Before an allocation is destroyed, its memory is filled with bit pattern `0xEFEFEFEF`.
Memory is automatically mapped and unmapped if necessary.
If you find these values while debugging your program, chances are good that you incorrectly
read Vulkan memory that is allocated but not initialized, or that was already freed, respectively.
Memory initialization works only with memory types that are `HOST_VISIBLE` and with allocations that can be mapped.
It works also with dedicated allocations.
\section debugging_memory_usage_margins Margins
By default, allocations are laid out in memory blocks next to each other if possible
(considering required alignment, `bufferImageGranularity`, and `nonCoherentAtomSize`).

Define macro `VMA_DEBUG_MARGIN` to some non-zero value (e.g. 16) to enforce specified
number of bytes as a margin after every allocation.
\code
#define VMA_DEBUG_MARGIN 16
#include "vk_mem_alloc.h"
\endcode

If your bug goes away after enabling margins, it means it may be caused by memory
being overwritten outside of allocation boundaries. It is not 100% certain though.
The change in application behavior may also be caused by a different order and distribution
of allocations across memory blocks after margins are applied.
Margins work with all types of memory.
The margin is applied only to allocations made out of memory blocks, not to dedicated
allocations, which have their own memory block of a specific size.
It is thus not applied to allocations made using the #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT flag
or to those that the library automatically decides to make dedicated, e.g. due to their
large size or because the VK_KHR_dedicated_allocation extension recommends it.
Margins appear in [JSON dump](@ref statistics_json_dump) as part of free space.
Note that enabling margins increases memory usage and fragmentation.
Margins do not apply to \ref virtual_allocator.
\section debugging_memory_usage_corruption_detection Corruption detection
You can additionally define macro `VMA_DEBUG_DETECT_CORRUPTION` to 1 to enable validation
of contents of the margins.
\code
#define VMA_DEBUG_MARGIN 16
#define VMA_DEBUG_DETECT_CORRUPTION 1
#include "vk_mem_alloc.h"
\endcode
When this feature is enabled, the number of bytes specified as `VMA_DEBUG_MARGIN`
(it must be a multiple of 4) after every allocation is filled with a magic number.
This idea is also known as a "canary".
Memory is automatically mapped and unmapped if necessary.
This number is validated automatically when the allocation is destroyed.
If it is not equal to the expected value, `VMA_ASSERT()` is executed.
It clearly means that either the CPU or the GPU overwrote the memory outside the boundaries of the allocation,
which indicates a serious bug.
You can also explicitly request checking margins of all allocations in all memory blocks
that belong to specified memory types by using function vmaCheckCorruption(),
or in memory blocks that belong to specified custom pool, by using function
vmaCheckPoolCorruption().
Margin validation (corruption detection) works only for memory types that are
`HOST_VISIBLE` and `HOST_COHERENT`.
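For example, a minimal sketch of checking all eligible memory types at once:
\code
// UINT32_MAX as the bit mask means: check all memory types.
VkResult res = vmaCheckCorruption(allocator, UINT32_MAX);
// VK_SUCCESS means no corruption was detected.
\endcode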
\page opengl_interop OpenGL Interop
VMA provides some features that help with interoperability with OpenGL.
\section opengl_interop_exporting_memory Exporting memory
If you want to attach a `VkExportMemoryAllocateInfoKHR` structure to the `pNext` chain of memory allocations made by the library,
it is recommended to create \ref custom_memory_pools for such allocations.
Define and fill in your `VkExportMemoryAllocateInfoKHR` structure and attach it to VmaPoolCreateInfo::pMemoryAllocateNext
while creating the custom pool.
Please note that the structure must remain alive and unchanged for the whole lifetime of the #VmaPool,
not only while creating it: no copy of the structure is made;
its original pointer is used for each allocation instead.
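For example, a minimal sketch (the handle type is just an example; on non-Windows platforms you would use e.g. `VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT_KHR`):
\code
// Must remain alive and unchanged for the whole lifetime of the pool!
VkExportMemoryAllocateInfoKHR exportMemAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
exportMemAllocInfo.handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHR;

VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex; // Found e.g. using vmaFindMemoryTypeIndexForBufferInfo().
poolCreateInfo.pMemoryAllocateNext = &exportMemAllocInfo;

VmaPool exportPool;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &exportPool);
\endcode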
If you want to export all memory allocated by the library from certain memory types,
also dedicated allocations or other allocations made from default pools,
an alternative solution is to fill in VmaAllocatorCreateInfo::pTypeExternalMemoryHandleTypes.
It should point to an array with `VkExternalMemoryHandleTypeFlagsKHR` to be automatically passed by the library
through `VkExportMemoryAllocateInfoKHR` on each allocation made from a specific memory type.
Please note that new versions of the library also support dedicated allocations created in custom pools.
You should not mix these two methods in a way that would apply both to the same memory type.
Otherwise, the `VkExportMemoryAllocateInfoKHR` structure would be attached twice to the `pNext` chain of `VkMemoryAllocateInfo`.
\section opengl_interop_custom_alignment Custom alignment
Buffers or images exported to a different API like OpenGL may require a different alignment,
higher than the one used by the library automatically, queried from functions like `vkGetBufferMemoryRequirements`.
To impose such alignment, it is recommended to create \ref custom_memory_pools for such allocations.
Set the VmaPoolCreateInfo::minAllocationAlignment member to the minimum alignment required for each allocation
to be made out of this pool.
The alignment actually used will be the maximum of this member and the alignment returned for the specific buffer or image
from a function like `vkGetBufferMemoryRequirements`, which is called by VMA automatically.
If you want to create a buffer with a specific minimum alignment out of default pools,
use special function vmaCreateBufferWithAlignment(), which takes additional parameter `minAlignment`.
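For example, a minimal sketch requesting a 256-byte minimum alignment:
\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBufferWithAlignment(allocator, &bufCreateInfo, &allocCreateInfo,
    256, // minAlignment
    &buf, &alloc, nullptr);
\endcode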
Note that the problem of alignment affects only resources placed inside bigger `VkDeviceMemory` blocks and not dedicated
allocations, as these, by definition, always have alignment = 0 because the resource is bound to the beginning of its dedicated block.
Contrary to Direct3D 12, Vulkan doesn't have a concept of alignment of the entire memory block passed on its allocation.
\page usage_patterns Recommended usage patterns
Vulkan gives great flexibility in memory allocation.
This chapter shows the most common patterns.
See also slides from talk:
[Sawicki, Adam. Advanced Graphics Techniques Tutorial: Memory management in Vulkan and DX12. Game Developers Conference, 2018](https://www.gdcvault.com/play/1025458/Advanced-Graphics-Techniques-Tutorial-New)
\section usage_patterns_gpu_only GPU-only resource
<b>When:</b>
Any resources that you frequently write and read on GPU,
e.g. images used as color attachments (aka "render targets"), depth-stencil attachments,
images/buffers used as storage image/buffer (aka "Unordered Access View (UAV)").
<b>What to do:</b>
Let the library select the optimal memory type, which will likely have `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
\code
VkImageCreateInfo imgCreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
imgCreateInfo.imageType = VK_IMAGE_TYPE_2D;
imgCreateInfo.extent.width = 3840;
imgCreateInfo.extent.height = 2160;
imgCreateInfo.extent.depth = 1;
imgCreateInfo.mipLevels = 1;
imgCreateInfo.arrayLayers = 1;
imgCreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
imgCreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
imgCreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
imgCreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
imgCreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
allocCreateInfo.priority = 1.0f;
VkImage img;
VmaAllocation alloc;
vmaCreateImage(allocator, &imgCreateInfo, &allocCreateInfo, &img, &alloc, nullptr);
\endcode
<b>Also consider:</b>
Consider creating them as dedicated allocations using #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT,
especially if they are large or if you plan to destroy and recreate them with different sizes
e.g. when display resolution changes.
Prefer to create such resources first and all other GPU resources (like textures and vertex buffers) later.
When VK_EXT_memory_priority extension is enabled, it is also worth setting high priority to such allocation
to decrease chances to be evicted to system memory by the operating system.
\section usage_patterns_staging_copy_upload Staging copy for upload
<b>When:</b>
A "staging" buffer than you want to map and fill from CPU code, then use as a source of transfer
to some GPU resource.
<b>What to do:</b>
Use flag #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT.
Let the library select the optimal memory type, which will always have `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`.
\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
VMA_ALLOCATION_CREATE_MAPPED_BIT;
VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
...
memcpy(allocInfo.pMappedData, myData, myDataSize);
\endcode
<b>Also consider:</b>
You can map the allocation using vmaMapMemory() or you can create it as persistently mapped
using #VMA_ALLOCATION_CREATE_MAPPED_BIT, as in the example above.
\section usage_patterns_readback Readback
<b>When:</b>
Buffers for data written by or transferred from the GPU that you want to read back on the CPU,
e.g. results of some computations.
<b>What to do:</b>
Use flag #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
Let the library select the optimal memory type, which will always have `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`
and `VK_MEMORY_PROPERTY_HOST_CACHED_BIT`.
\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT |
VMA_ALLOCATION_CREATE_MAPPED_BIT;
VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
...
const float* downloadedData = (const float*)allocInfo.pMappedData;
\endcode
\section usage_patterns_advanced_data_uploading Advanced data uploading
For resources that you frequently write on CPU via mapped pointer and
frequently read on GPU e.g. as a uniform buffer (also called "dynamic"), multiple options are possible:
-# Easiest solution is to have one copy of the resource in `HOST_VISIBLE` memory,
even if it means system RAM (not `DEVICE_LOCAL`) on systems with a discrete graphics card,
and make the device reach out to that resource directly.
- Reads performed by the device will then go through PCI Express bus.
The performance of this access may be limited, but it may be fine depending on the size
of this resource (whether it is small enough to quickly end up in GPU cache) and the sparsity
of access.
-# On systems with unified memory (e.g. AMD APU or Intel integrated graphics, mobile chips),
a memory type may be available that is both `HOST_VISIBLE` (available for mapping) and `DEVICE_LOCAL`
(fast to access from the GPU). Then, it is likely the best choice for such type of resource.
-# Systems with a discrete graphics card and separate video memory may or may not expose
a memory type that is both `HOST_VISIBLE` and `DEVICE_LOCAL`, also known as Base Address Register (BAR).
If they do, it represents a piece of VRAM (or entire VRAM, if ReBAR is enabled in the motherboard BIOS)
that is available to CPU for mapping.
- Writes performed by the host to that memory go through PCI Express bus.
The performance of these writes may be limited, but it may be fine, especially on PCIe 4.0,
as long as rules of using uncached and write-combined memory are followed - only sequential writes and no reads.
-# Finally, you may need or prefer to create a separate copy of the resource in `DEVICE_LOCAL` memory,
a separate "staging" copy in `HOST_VISIBLE` memory and perform an explicit transfer command between them.
Thankfully, VMA offers an aid to create and use such resources in the way optimal
for the current Vulkan device. To help the library make the best choice,
use flag #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT together with
#VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT.
It will then prefer a memory type that is both `DEVICE_LOCAL` and `HOST_VISIBLE` (integrated memory or BAR),
but if no such memory type is available or allocation from it fails
(PC graphics cards have only 256 MB of BAR by default, unless ReBAR is supported and enabled in BIOS),
it will fall back to `DEVICE_LOCAL` memory for fast GPU access.
It is then up to you to detect that the allocation ended up in a memory type that is not `HOST_VISIBLE`,
so you need to create another "staging" allocation and perform explicit transfers.
\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT |
VMA_ALLOCATION_CREATE_MAPPED_BIT;
VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
VkMemoryPropertyFlags memPropFlags;
vmaGetAllocationMemoryProperties(allocator, alloc, &memPropFlags);

if(memPropFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)
{
    // Allocation ended up in a mappable memory and is already mapped - write to it directly.

    // [Executed in runtime]:
    memcpy(allocInfo.pMappedData, myData, myDataSize);
}
else
{
    // Allocation ended up in a non-mappable memory - need to transfer.
    VkBufferCreateInfo stagingBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    stagingBufCreateInfo.size = 65536;
    stagingBufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

    VmaAllocationCreateInfo stagingAllocCreateInfo = {};
    stagingAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
    stagingAllocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
        VMA_ALLOCATION_CREATE_MAPPED_BIT;

    VkBuffer stagingBuf;
    VmaAllocation stagingAlloc;
    VmaAllocationInfo stagingAllocInfo;
    vmaCreateBuffer(allocator, &stagingBufCreateInfo, &stagingAllocCreateInfo,
        &stagingBuf, &stagingAlloc, &stagingAllocInfo);

    // [Executed in runtime]:
    memcpy(stagingAllocInfo.pMappedData, myData, myDataSize);
    vmaFlushAllocation(allocator, stagingAlloc, 0, VK_WHOLE_SIZE);
    // vkCmdPipelineBarrier: VK_ACCESS_HOST_WRITE_BIT --> VK_ACCESS_TRANSFER_READ_BIT
    VkBufferCopy bufCopy = {
        0, // srcOffset
        0, // dstOffset
        myDataSize }; // size
    vkCmdCopyBuffer(cmdBuf, stagingBuf, buf, 1, &bufCopy);
}
\endcode
\section usage_patterns_other_use_cases Other use cases
Here are some other, less obvious use cases and their recommended settings:
- An image that is used only as transfer source and destination, but it should stay on the device,
as it is used to temporarily store a copy of some texture, e.g. from the current to the next frame,
for temporal antialiasing or other temporal effects.
- Use `VkImageCreateInfo::usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT`
- Use VmaAllocationCreateInfo::usage = #VMA_MEMORY_USAGE_AUTO
- An image that is used only as transfer source and destination, but should be placed
in system RAM even though it doesn't need to be mapped, because it serves as a "swap" copy used to evict
least recently used textures from VRAM.
- Use `VkImageCreateInfo::usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT`
- Use VmaAllocationCreateInfo::usage = #VMA_MEMORY_USAGE_AUTO_PREFER_HOST,
as VMA needs a hint here to differentiate from the previous case.
- A buffer that you want to map and write from the CPU, directly read from the GPU
(e.g. as a uniform or vertex buffer), but you have a clear preference to place it in device or
host memory due to its large size.
- Use `VkBufferCreateInfo::usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT`
- Use VmaAllocationCreateInfo::usage = #VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE or #VMA_MEMORY_USAGE_AUTO_PREFER_HOST
- Use VmaAllocationCreateInfo::flags = #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT
\page configuration Configuration
Please check "CONFIGURATION SECTION" in the code to find macros that you can define
before each include of this file or change directly in this file to provide
your own implementation of basic facilities like assert, `min()` and `max()` functions,
mutex, atomic etc.
The library uses its own implementation of containers by default, but you can switch to using
STL containers instead.
For example, define `VMA_ASSERT(expr)` before including the library to provide
custom implementation of the assertion, compatible with your project.
By default it is defined to standard C `assert(expr)` in `_DEBUG` configuration
and empty otherwise.
\section config_Vulkan_functions Pointers to Vulkan functions
There are multiple ways to import pointers to Vulkan functions in the library.
In the simplest case you don't need to do anything.
If the compilation or linking of your program or the initialization of the #VmaAllocator
doesn't work for you, you can try to reconfigure it.
First, the allocator tries to fetch pointers to Vulkan functions linked statically,
like this:
\code
m_VulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkAllocateMemory;
\endcode
If you want to disable this feature, set configuration macro: `#define VMA_STATIC_VULKAN_FUNCTIONS 0`.
Second, you can provide the pointers yourself by setting member VmaAllocatorCreateInfo::pVulkanFunctions.
You can fetch them e.g. using functions `vkGetInstanceProcAddr` and `vkGetDeviceProcAddr` or
by using a helper library like [volk](https://github.com/zeux/volk).
Third, VMA tries to fetch remaining pointers that are still null by calling
`vkGetInstanceProcAddr` and `vkGetDeviceProcAddr` on its own.
You only need to fill in VmaVulkanFunctions::vkGetInstanceProcAddr and VmaVulkanFunctions::vkGetDeviceProcAddr.
Other pointers will be fetched automatically.
If you want to disable this feature, set configuration macro: `#define VMA_DYNAMIC_VULKAN_FUNCTIONS 0`.
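For example, a minimal sketch of providing only the two entry points and letting VMA fetch the rest
(assuming `instance`, `physicalDevice`, and `device` are already created and the loader or a library like volk provides
`vkGetInstanceProcAddr` / `vkGetDeviceProcAddr`):
\code
VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;

VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.instance = instance;
allocatorCreateInfo.physicalDevice = physicalDevice;
allocatorCreateInfo.device = device;
allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;

VmaAllocator allocator;
vmaCreateAllocator(&allocatorCreateInfo, &allocator);
\endcode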
Finally, all the function pointers required by the library (considering selected
Vulkan version and enabled extensions) are checked with `VMA_ASSERT` if they are not null.
\section custom_memory_allocator Custom host memory allocator
If you use a custom allocator for CPU memory rather than the default C++ operator `new`
and `delete`, you can make this library use your allocator as well
by filling the optional member VmaAllocatorCreateInfo::pAllocationCallbacks. These
functions will be passed to Vulkan, as well as used by the library itself, to
make any CPU-side allocations.
\section allocation_callbacks Device memory allocation callbacks
The library makes calls to `vkAllocateMemory()` and `vkFreeMemory()` internally.
You can set up callbacks to be informed about these calls, e.g. for the purpose
of gathering some statistics. To do it, fill optional member
VmaAllocatorCreateInfo::pDeviceMemoryCallbacks.
\section heap_memory_limit Device heap memory limit
When device memory of a certain heap runs out of free space, new allocations may
fail (returning an error code) or they may succeed, silently pushing some existing
memory blocks from GPU VRAM to system RAM (which degrades performance). This
behavior is implementation-dependent - it depends on the GPU vendor and graphics
driver.
On AMD cards it can be controlled while creating Vulkan device object by using
VK_AMD_memory_overallocation_behavior extension, if available.
Alternatively, if you want to test how your program behaves with a limited amount of Vulkan device
memory available, without switching your graphics card to one that really has
smaller VRAM, you can use a feature of this library intended for this purpose.
To do it, fill the optional member VmaAllocatorCreateInfo::pHeapSizeLimit, as in the sketch below.
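A minimal sketch, limiting heap 0 to 1 GiB while leaving other heaps unlimited:
\code
VkDeviceSize heapSizeLimit[VK_MAX_MEMORY_HEAPS];
for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    heapSizeLimit[i] = VK_WHOLE_SIZE; // VK_WHOLE_SIZE means no limit on that heap.
heapSizeLimit[0] = 1ull * 1024 * 1024 * 1024;

VmaAllocatorCreateInfo allocatorCreateInfo = {};
// Fill other members: physicalDevice, device, instance...
allocatorCreateInfo.pHeapSizeLimit = heapSizeLimit;
\endcode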
\page vk_khr_dedicated_allocation VK_KHR_dedicated_allocation
VK_KHR_dedicated_allocation is a Vulkan extension which can be used to improve
performance on some GPUs. It augments the Vulkan API with the possibility to query
the driver whether it prefers a particular buffer or image to have its own, dedicated
allocation (separate `VkDeviceMemory` block) for better efficiency - to be able
to do some internal optimizations. The extension is supported by this library.
It will be used automatically when enabled.
It has been promoted to core Vulkan 1.1, so if you use eligible Vulkan version
and inform VMA about it by setting VmaAllocatorCreateInfo::vulkanApiVersion,
you are all set.
Otherwise, if you want to use it as an extension:
1. When creating a Vulkan device, check if the following 2 device extensions are
supported (call `vkEnumerateDeviceExtensionProperties()`).
If yes, enable them (fill `VkDeviceCreateInfo::ppEnabledExtensionNames`).
- VK_KHR_get_memory_requirements2
- VK_KHR_dedicated_allocation
If you enabled these extensions:
2. Use the #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT flag when creating
your #VmaAllocator to inform the library that you enabled the required extensions
and you want the library to use them.
\code
allocatorInfo.flags |= VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT;
vmaCreateAllocator(&allocatorInfo, &allocator);
\endcode
That is all. The extension will be automatically used whenever you create a
buffer using vmaCreateBuffer() or image using vmaCreateImage().
When using the extension together with Vulkan Validation Layer, you will receive
warnings like this:
_vkBindBufferMemory(): Binding memory to buffer 0x33 but vkGetBufferMemoryRequirements() has not been called on that buffer._
It is OK, you should just ignore it. It happens because you use function
`vkGetBufferMemoryRequirements2KHR()` instead of standard
`vkGetBufferMemoryRequirements()`, while the validation layer seems to be
unaware of it.
To learn more about this extension, see:
- [VK_KHR_dedicated_allocation in Vulkan specification](https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/chap50.html#VK_KHR_dedicated_allocation)
- [VK_KHR_dedicated_allocation unofficial manual](http://asawicki.info/articles/VK_KHR_dedicated_allocation.php5)
\page vk_ext_memory_priority VK_EXT_memory_priority
VK_EXT_memory_priority is a device extension that allows passing an additional "priority"
value with Vulkan memory allocations. The implementation may use it to prefer keeping
buffers and images that are critical for performance in device-local memory
in cases when the memory is over-subscribed, while moving some others to system memory.
VMA offers convenient usage of this extension.
If you enable it, you can pass a "priority" parameter when creating allocations or custom pools
and the library automatically passes the value to Vulkan using this extension.
If you want to use this extension in connection with VMA, follow these steps:
\section vk_ext_memory_priority_initialization Initialization
1) Call `vkEnumerateDeviceExtensionProperties` for the physical device.
Check if the extension is supported - if returned array of `VkExtensionProperties` contains "VK_EXT_memory_priority".
2) Call `vkGetPhysicalDeviceFeatures2` for the physical device instead of old `vkGetPhysicalDeviceFeatures`.
Attach additional structure `VkPhysicalDeviceMemoryPriorityFeaturesEXT` to `VkPhysicalDeviceFeatures2::pNext` to be returned.
Check if the device feature is really supported - check if `VkPhysicalDeviceMemoryPriorityFeaturesEXT::memoryPriority` is true.
3) While creating device with `vkCreateDevice`, enable this extension - add "VK_EXT_memory_priority"
to the list passed as `VkDeviceCreateInfo::ppEnabledExtensionNames`.
4) While creating the device, also don't set `VkDeviceCreateInfo::pEnabledFeatures`.
Fill in `VkPhysicalDeviceFeatures2` structure instead and pass it as `VkDeviceCreateInfo::pNext`.
Enable this device feature - attach additional structure `VkPhysicalDeviceMemoryPriorityFeaturesEXT` to
`VkPhysicalDeviceFeatures2::pNext` chain and set its member `memoryPriority` to `VK_TRUE`.
5) While creating #VmaAllocator with vmaCreateAllocator() inform VMA that you
have enabled this extension and feature - add #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT
to VmaAllocatorCreateInfo::flags.
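A condensed sketch of steps 2)-5) (queue creation and other required `VkDeviceCreateInfo` members omitted):
\code
VkPhysicalDeviceMemoryPriorityFeaturesEXT memoryPriorityFeatures = {
    VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PRIORITY_FEATURES_EXT };
VkPhysicalDeviceFeatures2 features2 = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2 };
features2.pNext = &memoryPriorityFeatures;
vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);
// Check memoryPriorityFeatures.memoryPriority == VK_TRUE...

const char* enabledExtensionNames[] = { VK_EXT_MEMORY_PRIORITY_EXTENSION_NAME };
VkDeviceCreateInfo deviceCreateInfo = { VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO };
deviceCreateInfo.pNext = &features2; // pEnabledFeatures stays null.
deviceCreateInfo.enabledExtensionCount = 1;
deviceCreateInfo.ppEnabledExtensionNames = enabledExtensionNames;
// Fill queue create infos etc., then call vkCreateDevice(physicalDevice, &deviceCreateInfo, nullptr, &device).

VmaAllocatorCreateInfo allocatorCreateInfo = {};
// Fill other members...
allocatorCreateInfo.flags |= VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT;
\endcode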
\section vk_ext_memory_priority_usage Usage
When using this extension, you should initialize the following members:
- VmaAllocationCreateInfo::priority when creating a dedicated allocation with #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
- VmaPoolCreateInfo::priority when creating a custom pool.
It should be a floating-point value between `0.0f` and `1.0f`, where the recommended default is `0.5f`.
Memory allocated with a higher value can be treated by the Vulkan implementation as higher priority,
so it has a lower chance of being pushed out to system memory and experiencing degraded performance.
It might be a good idea to create performance-critical resources like color-attachment or depth-stencil images
as dedicated and set high priority to them. For example:
\code
VkImageCreateInfo imgCreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
imgCreateInfo.imageType = VK_IMAGE_TYPE_2D;
imgCreateInfo.extent.width = 3840;
imgCreateInfo.extent.height = 2160;
imgCreateInfo.extent.depth = 1;
imgCreateInfo.mipLevels = 1;
imgCreateInfo.arrayLayers = 1;
imgCreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
imgCreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
imgCreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
imgCreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
imgCreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
allocCreateInfo.priority = 1.0f;
VkImage img;
VmaAllocation alloc;
vmaCreateImage(allocator, &imgCreateInfo, &allocCreateInfo, &img, &alloc, nullptr);
\endcode
The `priority` member is ignored in the following situations:
- Allocations created in custom pools: They inherit the priority, along with all other allocation parameters,
from the parameters passed in #VmaPoolCreateInfo when the pool was created.
- Allocations created in default pools: They inherit the priority from the parameters
VMA used when creating the default pools, which means `priority == 0.5f`.
\page vk_amd_device_coherent_memory VK_AMD_device_coherent_memory
VK_AMD_device_coherent_memory is a device extension that enables access to
additional memory types with `VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD` and
`VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD` flag. It is useful mostly for
allocation of buffers intended for writing "breadcrumb markers" in between passes
or draw calls, which in turn are useful for debugging GPU crash/hang/TDR cases.
When the extension is available but has not been enabled, the Vulkan physical device
still exposes those memory types, but their usage is forbidden. VMA automatically
takes care of that - it returns `VK_ERROR_FEATURE_NOT_PRESENT` when an attempt
to allocate memory of such a type is made.
If you want to use this extension in connection with VMA, follow these steps:
\section vk_amd_device_coherent_memory_initialization Initialization
1) Call `vkEnumerateDeviceExtensionProperties` for the physical device.
Check if the extension is supported - if returned array of `VkExtensionProperties` contains "VK_AMD_device_coherent_memory".
2) Call `vkGetPhysicalDeviceFeatures2` for the physical device instead of old `vkGetPhysicalDeviceFeatures`.
Attach additional structure `VkPhysicalDeviceCoherentMemoryFeaturesAMD` to `VkPhysicalDeviceFeatures2::pNext` to be returned.
Check if the device feature is really supported - check if `VkPhysicalDeviceCoherentMemoryFeaturesAMD::deviceCoherentMemory` is true.
3) While creating device with `vkCreateDevice`, enable this extension - add "VK_AMD_device_coherent_memory"
to the list passed as `VkDeviceCreateInfo::ppEnabledExtensionNames`.
4) While creating the device, also don't set `VkDeviceCreateInfo::pEnabledFeatures`.
Fill in `VkPhysicalDeviceFeatures2` structure instead and pass it as `VkDeviceCreateInfo::pNext`.
Enable this device feature - attach additional structure `VkPhysicalDeviceCoherentMemoryFeaturesAMD` to
`VkPhysicalDeviceFeatures2::pNext` and set its member `deviceCoherentMemory` to `VK_TRUE`.
5) While creating #VmaAllocator with vmaCreateAllocator() inform VMA that you
have enabled this extension and feature - add #VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT
to VmaAllocatorCreateInfo::flags.
\section vk_amd_device_coherent_memory_usage Usage
After following steps described above, you can create VMA allocations and custom pools
out of the special `DEVICE_COHERENT` and `DEVICE_UNCACHED` memory types on eligible
devices. There are multiple ways to do it, for example:
- You can request or prefer to allocate out of such memory types by adding
`VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD` to VmaAllocationCreateInfo::requiredFlags
or VmaAllocationCreateInfo::preferredFlags, as in the sketch after this list. Those flags can be freely mixed with
other ways of \ref choosing_memory_type, like setting VmaAllocationCreateInfo::usage.
- If you manually found memory type index to use for this purpose, force allocation
from this specific index by setting VmaAllocationCreateInfo::memoryTypeBits `= 1u << index`.
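A minimal sketch of the first method:
\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 4096;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT | VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
\endcode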
\section vk_amd_device_coherent_memory_more_information More information
To learn more about this extension, see [VK_AMD_device_coherent_memory in Vulkan specification](https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VK_AMD_device_coherent_memory.html)
Example use of this extension can be found in the code of the sample and test suite
accompanying this library.
\page enabling_buffer_device_address Enabling buffer device address
The device extension VK_KHR_buffer_device_address
allows fetching a raw GPU pointer to a buffer and passing it for usage in shader code.
It has been promoted to core Vulkan 1.2.
If you want to use this feature in connection with VMA, follow these steps:
\section enabling_buffer_device_address_initialization Initialization

1) (For Vulkan version < 1.2) Call `vkEnumerateDeviceExtensionProperties` for the physical device.
Check if the extension is supported - whether the returned array of `VkExtensionProperties` contains
"VK_KHR_buffer_device_address".

2) Call `vkGetPhysicalDeviceFeatures2` for the physical device instead of the old `vkGetPhysicalDeviceFeatures`.
Attach an additional structure `VkPhysicalDeviceBufferDeviceAddressFeatures*` to `VkPhysicalDeviceFeatures2::pNext` to be returned.
Check if the device feature is really supported - whether `VkPhysicalDeviceBufferDeviceAddressFeatures::bufferDeviceAddress` is `VK_TRUE`.

3) (For Vulkan version < 1.2) While creating the device with `vkCreateDevice`, enable this extension - add
"VK_KHR_buffer_device_address" to the list passed as `VkDeviceCreateInfo::ppEnabledExtensionNames`.

4) While creating the device, also don't set `VkDeviceCreateInfo::pEnabledFeatures`.
Fill in the `VkPhysicalDeviceFeatures2` structure instead and pass it as `VkDeviceCreateInfo::pNext`.
Enable this device feature - attach the additional structure `VkPhysicalDeviceBufferDeviceAddressFeatures*` to
`VkPhysicalDeviceFeatures2::pNext` and set its member `bufferDeviceAddress` to `VK_TRUE`.

5) While creating #VmaAllocator with vmaCreateAllocator(), inform VMA that you
have enabled this feature - add #VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT
to VmaAllocatorCreateInfo::flags. A condensed sketch of these steps is shown below.
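A minimal sketch of steps 2), 4), and 5), assuming core Vulkan 1.2 (so no extension
string needs to be enabled) and application-provided `physicalDevice`, `instance`,
and `device` variables:

\code
VkPhysicalDeviceBufferDeviceAddressFeatures bdaFeatures = {};
bdaFeatures.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_BUFFER_DEVICE_ADDRESS_FEATURES;

VkPhysicalDeviceFeatures2 features2 = {};
features2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
features2.pNext = &bdaFeatures;
vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);
// Proceed only if bdaFeatures.bufferDeviceAddress == VK_TRUE.

bdaFeatures.bufferDeviceAddress = VK_TRUE; // Request the feature.

VkDeviceCreateInfo deviceCreateInfo = {};
deviceCreateInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
deviceCreateInfo.pNext = &features2; // pEnabledFeatures is left null.
// For Vulkan < 1.2, also add "VK_KHR_buffer_device_address" to ppEnabledExtensionNames.
// Fill in queue create infos, then call vkCreateDevice(...).

VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
allocatorCreateInfo.physicalDevice = physicalDevice;
allocatorCreateInfo.device = device;
allocatorCreateInfo.instance = instance;
allocatorCreateInfo.flags = VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT;

VmaAllocator allocator;
vmaCreateAllocator(&allocatorCreateInfo, &allocator);
\endcode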
\section enabling_buffer_device_address_usage Usage

After following the steps described above, you can create buffers with `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT*` using VMA.
The library automatically adds `VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT*` to
allocated memory blocks wherever it might be needed, as in the sketch below.
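For example (the buffer size and additional usage flags are placeholders; on
Vulkan < 1.2 use `vkGetBufferDeviceAddressKHR` instead of the core function):

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT | VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);

// With the feature enabled, the raw GPU address can be fetched with core Vulkan 1.2:
VkBufferDeviceAddressInfo addressInfo = { VK_STRUCTURE_TYPE_BUFFER_DEVICE_ADDRESS_INFO };
addressInfo.buffer = buf;
VkDeviceAddress address = vkGetBufferDeviceAddress(device, &addressInfo);
\endcode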
Please note that the library supports only `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT*`.
The second part of this functionality, related to "capture and replay", is not supported,
as it is intended for use in debugging tools like RenderDoc, not in everyday Vulkan usage.
\section enabling_buffer_device_address_more_information More information

To learn more about this extension, see [VK_KHR_buffer_device_address in the Vulkan specification](https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/chap46.html#VK_KHR_buffer_device_address).

Example use of this extension can be found in the code of the sample and test suite
accompanying this library.
\page general_considerations General considerations

\section general_considerations_thread_safety Thread safety

- The library has no global state, so separate #VmaAllocator objects can be used
independently. There should be no need to create multiple such objects though - one
per `VkDevice` is enough.
- By default, all calls to functions that take #VmaAllocator as first parameter
are safe to call from multiple threads simultaneously because they are
synchronized internally when needed.
This includes allocation and deallocation from default memory pool, as well as custom #VmaPool.
- When the allocator is created with #VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT
flag, calls to functions that take such #VmaAllocator object must be
synchronized externally.
- Access to a #VmaAllocation object must be externally synchronized. For example,
you must not call vmaGetAllocationInfo() and vmaMapMemory() from different
threads at the same time if you pass the same #VmaAllocation object to these
functions.
- A #VmaVirtualBlock object is not safe to use from multiple threads simultaneously -
you must synchronize access to it externally, e.g. as in the sketch below.
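For example, a sketch of external synchronization around a #VmaVirtualBlock using a
mutex; `gVirtualBlockMutex` and the wrapper function are hypothetical names owned by
the application, not part of the library:

\code
#include <mutex>

std::mutex gVirtualBlockMutex; // Hypothetical: owned by the application.
VmaVirtualBlock gVirtualBlock; // Created elsewhere with vmaCreateVirtualBlock().

VkResult ThreadSafeVirtualAllocate(const VmaVirtualAllocationCreateInfo& createInfo,
    VmaVirtualAllocation& outAllocation, VkDeviceSize& outOffset)
{
    // Serialize all access to the block - it is not internally synchronized.
    std::lock_guard<std::mutex> lock(gVirtualBlockMutex);
    return vmaVirtualAllocate(gVirtualBlock, &createInfo, &outAllocation, &outOffset);
}
\endcode

The same pattern applies to a #VmaAllocator created with
#VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT and to concurrent access to a
single #VmaAllocation object.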
\section general_considerations_versioning_and_compatibility Versioning and compatibility

The library uses [**Semantic Versioning**](https://semver.org/),
which means version numbers follow the convention: Major.Minor.Patch (e.g. 2.3.0), where:

- Incremented Patch version means a release is backward- and forward-compatible,
introducing only some internal improvements, bug fixes, optimizations etc.
or changes that are out of scope of the official API described in this documentation.
- Incremented Minor version means a release is backward-compatible,
so existing code that uses the library should continue to work, while some new
symbols could have been added: new structures, functions, new values in existing
enums and bit flags, new structure members, but not new function parameters.
- Incremented Major version means a release could break some backward compatibility.

All changes between official releases are documented in the file "CHANGELOG.md".
\warning Backward compatibility is considered on the level of C++ source code, not binary linkage.
Adding new members to existing structures is treated as backward compatible if initializing
the new members to binary zero results in the old behavior.
You should always fully initialize all library structures to zeros and not rely on their
exact binary size, as in the example below.
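For example, in C++:

\code
// All members start as binary zero, which gives the backward-compatible defaults.
VmaAllocationCreateInfo allocCreateInfo = {};
// Then set only the members you actually need.
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
\endcode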
\section general_considerations_validation_layer_warnings Validation layer warnings

When using this library, you may encounter the following types of warnings issued by the
Vulkan validation layer. They don't necessarily indicate a bug, so you may need
to just ignore them.

- *vkBindBufferMemory(): Binding memory to buffer 0xeb8e4 but vkGetBufferMemoryRequirements() has not been called on that buffer.*
  - It happens when the VK_KHR_dedicated_allocation extension is enabled.
    The `vkGetBufferMemoryRequirements2KHR` function is used instead, while the validation layer seems to be unaware of it.
- *Mapping an image with layout VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL can result in undefined behavior if this memory is used by the device. Only GENERAL or PREINITIALIZED should be used.*
  - It happens when you map a buffer or image, because the library maps the entire
    `VkDeviceMemory` block, where different types of images and buffers may end
    up together, especially on GPUs with unified memory like Intel.
- *Non-linear image 0xebc91 is aliased with linear buffer 0xeb8e4 which may indicate a bug.*
  - It may happen when you use [defragmentation](@ref defragmentation).
\section general_considerations_allocation_algorithm Allocation algorithm

The library uses the following algorithm for allocation, in order:

-# Try to find a free range of memory in existing blocks.
-# If that failed, try to create a new block of `VkDeviceMemory`, with the preferred block size.
-# If that failed, try to create such a block with size / 2, size / 4, size / 8.
-# If that failed, try to allocate a separate `VkDeviceMemory` for this allocation,
just like when you use #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
-# If that failed, choose another memory type that meets the requirements specified in
VmaAllocationCreateInfo and go to point 1.
-# If that failed, return `VK_ERROR_OUT_OF_DEVICE_MEMORY`.

The same algorithm is summarized in the pseudocode sketch below.
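The sketch below restates the numbered points as C++-style pseudocode. Every
identifier in it is hypothetical - these helpers do not exist in the VMA API; the
code merely mirrors the control flow described above:

\code
#include <vector>
#include <vulkan/vulkan.h>

struct Request { /* size, alignment, usage, etc. */ };

// Hypothetical helpers standing in for the library internals:
std::vector<uint32_t> MemoryTypesMatching(const Request& req);
bool TryAllocFromExistingBlocks(uint32_t typeIndex, const Request& req);
bool TryCreateBlockAndAlloc(uint32_t typeIndex, VkDeviceSize blockSize, const Request& req);
bool TryDedicatedAlloc(uint32_t typeIndex, const Request& req);
VkDeviceSize preferredBlockSize = 256ull * 1024 * 1024; // e.g. the default 256 MiB.

VkResult AllocateSketch(const Request& req)
{
    // Point 5 wraps points 1-4: on failure, retry with the next eligible memory type.
    for (uint32_t typeIndex : MemoryTypesMatching(req))
    {
        if (TryAllocFromExistingBlocks(typeIndex, req))            // point 1
            return VK_SUCCESS;
        VkDeviceSize blockSize = preferredBlockSize;
        for (int i = 0; i < 4; ++i, blockSize /= 2)                // points 2-3: full size, /2, /4, /8
            if (TryCreateBlockAndAlloc(typeIndex, blockSize, req))
                return VK_SUCCESS;
        if (TryDedicatedAlloc(typeIndex, req))                     // point 4
            return VK_SUCCESS;
    }
    return VK_ERROR_OUT_OF_DEVICE_MEMORY;                          // point 6
}
\endcode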
\section general_considerations_features_not_supported Features not supported

Features deliberately excluded from the scope of this library:

-# **Data transfer.** Uploading (streaming) and downloading data of buffers and images
between CPU and GPU memory and the related synchronization is the responsibility of the user.
Defining some "texture" object that would automatically stream its data from a
staging copy in CPU memory to GPU memory would rather be a feature of another,
higher-level library implemented on top of VMA.
VMA doesn't record any commands to a `VkCommandBuffer`. It just allocates memory.
-# **Recreation of buffers and images.** Although the library has functions for
buffer and image creation: vmaCreateBuffer(), vmaCreateImage(), you need to
recreate these objects yourself after defragmentation. That is because the big
structures `VkBufferCreateInfo`, `VkImageCreateInfo` are not stored in the
#VmaAllocation object.
-# **Handling CPU memory allocation failures.** When dynamically creating small C++
objects in CPU memory (not Vulkan memory), allocation failures are not checked
and handled gracefully, because that would complicate code significantly and
is usually not needed in desktop PC applications anyway.
Success of an allocation is just checked with an assert.
-# **Code free of any compiler warnings.** Maintaining the library to compile and
work correctly on so many different platforms is hard enough. Being free of
any warnings, on any version of any compiler, is simply not feasible.
There are many preprocessor macros that make some variables unused, function parameters unreferenced,
or conditional expressions constant in some configurations.
The code of this library should not be bigger or more complicated just to silence these warnings.
It is recommended to disable such warnings instead.
-# This is a C++ library with a C interface. **Bindings or ports to any other programming languages** are welcome as external projects but
are not going to be included in this repository.
*/